WorldWideScience

Sample records for human interpretative coding

  1. Interpretation and code generation based on intermediate languages

    DEFF Research Database (Denmark)

    Kornerup, Peter; Kristensen, Bent Bruun; Madsen, Ole Lehrmann

    1980-01-01

    The possibility of supporting high level languages through intermediate languages to be used for direct interpretation and as intermediate forms in compilers is investigated. An accomplished project in the construction of an interpreter and a code generator using one common intermediate form...

  2. Faith and Ethics, Covenant and Code: The 2015 Revision of the ANA Code of Ethics for Nurses with Interpretive Statements.

    Science.gov (United States)

    Fowler, Marsha D

    How does and should the American Nurses Association Code of Ethics for Nurses with Interpretive Statements, with foundations from the late 1800s, impact today's nursing practice? How can the Code help you? The earlier 2001 Code was revised and became effective January 2015. The nine provisions received modest revision, as did the corresponding interpretive statements. However, Provisions 8 and 9 and their interpretive statements received more substantial revision. This article explains the Code and summarizes the 2015 revisions, considering points of particular interest for nurses of faith.

  3. Simulation and interpretation codes for the JET ECE diagnostic. Part 1: physics of the codes' operation

    International Nuclear Information System (INIS)

    Bartlett, D.V.

    1983-06-01

    The codes which have been developed for the analysis of electron cyclotron emission measurements in JET are described. Their principal function is to interpret the spectra measured by the diagnostic so as to give the spatial distribution of the electron temperature in the poloidal cross-section. Various systematic effects in the data are corrected using look-up tables generated by an elaborate simulation code. The part of this code responsible for the accurate calculation of single-pass emission and refraction has been written at CNR-Milan and is described in a separate report. The present report is divided into two parts. This first part describes the methods used for the simulation and interpretation of spectra, the physical/mathematical basis of the codes written at CEA-Fontenay and presents some illustrative results

  4. Ground Operations Aerospace Language (GOAL). Volume 4: Interpretive code translator

    Science.gov (United States)

    1973-01-01

    This specification identifies and describes the principal functions and elements of the Interpretive Code Translator which has been developed for use with the GOAL Compiler. This translator enables the user to convert a compiled GOAL program to a highly general binary format which is designed to enable interpretive execution. The translator program provides user controls which are designed to enable the selection of various output types and formats. These controls provide a means for accommodating many of the implementation options which are discussed in the Interpretive Code Guideline document. The technical design approach is given. The relationship between the translator and the GOAL compiler is explained and the principal functions performed by the Translator are described. Specific constraints regarding the use of the Translator are discussed. The control options are described. These options enable the user to select outputs to be generated by the translator and to control various aspects of the translation processing.

  5. Development of Coolant Radioactivity Interpretation Code

    International Nuclear Information System (INIS)

    Kim, Kiyoung; Jung, Youngsuk; Kim, Kyounghyun; Kim, Jangwook

    2013-01-01

    In Korea, coolant radioactivity analysis has been performed using computer codes from foreign companies, such as CADE (Westinghouse) and IODYNE and CESIUM (ABB-CE). However, these codes are overly conservative and involve considerable errors, and because they are DOS-based programs their operability is unsatisfactory. It was therefore necessary to develop an enhanced analysis algorithm, with an analytical method reflecting the changing operational environments of domestic nuclear power plants, and fuel failure evaluation software designed for user convenience. We have developed a nuclear fuel failure evaluation code able to estimate the number of failed fuel rods and the burn-up of failed fuel during a nuclear power plant operation cycle. The Coolant Radioactivity Interpretation Code (CRIC) for LWRs was developed as an output of the project 'Development of Fuel Reliability Enhanced Technique' organized by the Korea Institute of Energy Technology Evaluation and Planning (KETEP). CRIC is Windows-based software able to evaluate the number of failed fuel rods and the burn-up of the failed fuel region by analyzing the coolant radioactivity of an LWR in operation. CRIC is based on the fission product release model commonly known as the 'three region model' (pellet region, gap region, and coolant region), and its results are being verified against domestic fuel failure cases. CRIC users can estimate the number of failed fuel rods and the burn-up and region of failed fuel, considering the enrichment and power distribution of the fuel region, by using operational cycle data, coolant activity data, the fuel loading pattern, and the Cs-134/Cs-137 ratio as a function of burn-up and U-235 enrichment provided in the code. With the development of CRIC, a proprietary fuel failure evaluation code has been secured, and it is expected to be significant as a code reflecting a proprietary technique for quantitatively...
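
    To make the Cs-134/Cs-137 relationship concrete: the ratio grows with burn-up, so a measured coolant ratio can be inverted against a calibration table. The following Python sketch illustrates only that idea; the table values, grid and function names are invented placeholders, not CRIC data or CRIC's method.

      import numpy as np

      # Hypothetical calibration: Cs-134/Cs-137 activity ratio vs. burn-up
      # (MWd/kgU) at a fixed U-235 enrichment. Placeholder values only.
      burnup_grid = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
      cs_ratio_grid = np.array([0.3, 0.7, 1.0, 1.2, 1.3])

      def estimate_burnup(cs134_bq, cs137_bq):
          """Invert the monotonic ratio-vs-burnup table by interpolation."""
          ratio = cs134_bq / cs137_bq
          return float(np.interp(ratio, cs_ratio_grid, burnup_grid))

      print(estimate_burnup(1.1, 1.0))  # ~30 MWd/kgU with these placeholders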

  6. Phase-amplitude coupling supports phase coding in human ECoG

    Science.gov (United States)

    Watrous, Andrew J; Deuker, Lorena; Fell, Juergen; Axmacher, Nikolai

    2015-01-01

    Prior studies have shown that high-frequency activity (HFA) is modulated by the phase of low-frequency activity. This phenomenon of phase-amplitude coupling (PAC) is often interpreted as reflecting phase coding of neural representations, although evidence for this link is still lacking in humans. Here, we show that PAC indeed supports phase-dependent stimulus representations for categories. Six patients with medication-resistant epilepsy viewed images of faces, tools, houses, and scenes during simultaneous acquisition of intracranial recordings. Analyzing 167 electrodes, we observed PAC at 43% of electrodes. Further inspection of PAC revealed that category specific HFA modulations occurred at different phases and frequencies of the underlying low-frequency rhythm, permitting decoding of categorical information using the phase at which HFA events occurred. These results provide evidence for categorical phase-coded neural representations and are the first to show that PAC coincides with phase-dependent coding in the human brain. DOI: http://dx.doi.org/10.7554/eLife.07886.001 PMID:26308582
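
    The coupling measure underlying such analyses can be sketched in a few lines. This is a generic mean-vector-length PAC estimate (after Canolty et al.), not the authors' pipeline; the band edges and the toy signal are arbitrary.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def bandpass(x, lo, hi, fs, order=4):
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(70, 150)):
          """Canolty-style mean vector length of the amplitude-weighted phase."""
          phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
          amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
          return np.abs(np.mean(amp * np.exp(1j * phase)))

      fs = 1000.0
      t = np.arange(0, 10, 1 / fs)
      theta = np.sin(2 * np.pi * 6 * t)
      x = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 90 * t)  # HFA locked to theta
      print(pac_mvl(x, fs))  # clearly above the value for an unmodulated signal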

  7. References to Human Rights in Codes of Ethics for Psychologists: Critical Issues and Recommendations. Part 1

    Directory of Open Access Journals (Sweden)

    Janel Gauthier

    2018-12-01

    There are codes of ethics in psychology that explicitly refer to human rights. There are also psychologists interested in the protection and promotion of human rights who are calling for the explicit inclusion of references to human rights in all psychology ethics codes. Yet, references to human rights in ethics documents have rarely been the focus of attention in psychological ethics. This article represents the first part of a two-part article series focusing on critical issues associated with the inclusion of references to human rights in the ethical codes of psychologists, and recommendations about how psychological ethics and the human rights movement can work together in serving humanity. The first part of the article series examines issues pertaining to the interpretation of references to human rights in codes of ethics for psychologists, and the justifications for including these references in psychological ethics codes. The second part of the article series examines how the Universal Declaration of Ethical Principles for Psychologists can be used to extend or supplement codes of ethics in psychology, how ethical principles and human rights differ and complement each other, and how psychological ethics and the human rights movement can work together in serving humanity and improving the welfare of both persons and peoples.

  8. Orion: Detecting regions of the human non-coding genome that are intolerant to variation using population genetics.

    Science.gov (United States)

    Gussow, Ayal B; Copeland, Brett R; Dhindsa, Ryan S; Wang, Quanli; Petrovski, Slavé; Majoros, William H; Allen, Andrew S; Goldstein, David B

    2017-01-01

    There is broad agreement that genetic mutations occurring outside of the protein-coding regions play a key role in human disease. Despite this consensus, we are not yet capable of discerning which portions of non-coding sequence are important in the context of human disease. Here, we present Orion, an approach that detects regions of the non-coding genome that are depleted of variation, suggesting that the regions are intolerant of mutations and subject to purifying selection in the human lineage. We show that Orion is highly correlated with known intolerant regions as well as regions that harbor putatively pathogenic variation. This approach provides a mechanism to identify pathogenic variation in the human non-coding genome and will have immediate utility in the diagnostic interpretation of patient genomes and in large case control studies using whole-genome sequences.
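
    The core idea (scanning for windows with fewer variants than expected) can be illustrated with a toy depletion score; Orion's actual statistic and parameters are more sophisticated, and everything below is a simplified stand-in.

      import numpy as np

      def depletion_scores(variant_positions, genome_length, window=1000, step=100):
          """Positive score = window holds fewer variants than the genome-wide
          expectation, i.e. it looks depleted of variation."""
          counts = np.zeros(genome_length)
          counts[np.asarray(variant_positions)] = 1.0
          starts = np.arange(0, genome_length - window + 1, step)
          observed = np.array([counts[s:s + window].sum() for s in starts])
          expected = counts.sum() * window / genome_length
          return starts, expected - observed

      rng = np.random.default_rng(0)
      pos = rng.choice(100_000, size=2_000, replace=False)
      pos = pos[(pos < 40_000) | (pos > 42_000)]  # carve out an "intolerant" region
      starts, scores = depletion_scores(pos, 100_000)
      print(starts[np.argmax(scores)])  # lands inside the depleted stretch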

  9. Developments in the Generation and Interpretation of Wire Codes (invited paper)

    International Nuclear Information System (INIS)

    Ebi, K.L.

    1999-01-01

    Three new developments in the generation and interpretation of wire codes are discussed. First, a method was developed to computer generate wire codes using data gathered from a utility database of the local distribution system and from tax assessor records. This method was used to wire code more than 250,000 residences in the greater Denver metropolitan area. There was an approximate 75% agreement with field wire coding. Other research in Denver suggests that wire codes predict some characteristics of a residence and its neighbourhood, including age, assessed value, street layout and traffic density. A third new development is the case-specular method to study the association between wire codes and childhood cancers. Recent results from applying the method to the Savitz et al and London et al studies suggest that the associations between childhood cancer and VHCC residences were strongest for residences with a backyard rather than street service drop, and for VHCC residences with LCC speculars. (author)

  10. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

    Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow the user to construct a model for a binary classification problem and to establish risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model, which consists of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains. The validity and applicability...
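
    The flavour of such a scoring system can be sketched with off-the-shelf tools: fit a sparse model, then round the surviving coefficients to integer points. Note this sketch uses scikit-learn's L1 logistic regression as a stand-in; ICS itself is built on linear-programming and elastic-net formulations with interval coding of the features.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X = rng.integers(0, 2, size=(500, 5)).astype(float)   # binary risk factors
      y = (X @ np.array([2.0, 1.0, 0.0, -1.5, 0.0])
           + rng.normal(0, 1, 500) > 0.5).astype(int)

      # Sparse fit, then integer "points" as in a clinical scorecard.
      model = LogisticRegression(penalty="l1", C=0.5, solver="liblinear").fit(X, y)
      points = np.round(model.coef_[0] / np.abs(model.coef_[0]).max() * 5).astype(int)
      print("points per feature:", points)        # zeroed features drop out
      print("score of patient 0:", int(X[0] @ points))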

  11. Description of computer code PRINS, Program for Interpreting Gamma Spectra, developed at ENEA

    Energy Technology Data Exchange (ETDEWEB)

    Borsari, R. [ENEA, Centro Ricerche 'E. Clementel', Bologna (Italy). Dip. Energia

    1995-11-01

    The computer code PRINS, PRogram for INterpreting gamma Spectra, has been developed in collaboration with CENG/SECC (Centre Etude Nucleaire Grenoble / Service Etude Comportement du Combustible). It was later updated and improved at ENEA. Properties of the PRINS code are: (1) a powerful algorithm to locate the peaks; (2) an accurate evaluation of the errors; (3) the possibility of an automatic channel-energy calibration.
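
    The two headline ingredients (peak location and channel-energy calibration) can be illustrated generically. This is not the PRINS algorithm, just a minimal sketch on a synthetic spectrum with assumed line energies.

      import numpy as np
      from scipy.signal import find_peaks

      ch = np.arange(4096)
      spectrum = 500.0 * np.exp(-ch / 800.0)            # falling background
      for centre, height in [(662, 900.0), (1173, 400.0)]:
          spectrum += height * np.exp(-((ch - centre) ** 2) / (2 * 4.0 ** 2))
      spectrum = np.random.default_rng(2).poisson(spectrum).astype(float)

      peaks, _ = find_peaks(spectrum, prominence=100)   # locate the photopeaks

      # Linear channel-energy calibration from two known lines (Cs-137, Co-60).
      known_kev = np.array([661.7, 1173.2])
      slope, intercept = np.polyfit(peaks[:2], known_kev, 1)
      print("peak channels:", peaks[:2], "slope keV/ch:", round(slope, 3))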

  12. Description of computer code PRINS, Program for Interpreting Gamma Spectra, developed at ENEA

    International Nuclear Information System (INIS)

    Borsari, R.

    1995-12-01

    The computer code PRINS, PRogram for INterpreting gamma Spectra, has been developed in collaboration with CENG/SECC (Centre Etude Nucleaire Grenoble / Service Etude Comportement du Combustible). It was later updated and improved at ENEA. Properties of the PRINS code are: (1) a powerful algorithm to locate the peaks; (2) an accurate evaluation of the errors; (3) the possibility of an automatic channel-energy calibration.

  13. Human trafficking for labour exploitation: Interpreting the crime

    NARCIS (Netherlands)

    Coster van Voorhout, J.E.B.

    2007-01-01

    The definition of human trafficking for labour exploitation, as follows from the European Council Framework Decision, proves to be unclear. Literal interpretation does not suffice, because it does not clarify all elements of what is deemed to be criminal behaviour, and hermeneutical interpretation...

  14. Interpretive analysis of the textual codes in the Parrot and Merchant story

    Directory of Open Access Journals (Sweden)

    Zohreh Najafi

    2016-06-01

    Narratology is a branch of semiology which considers any kind of narrative, literary or non-literary, verbal or visual, story or non-story, and then specifies the plot. One of the most important subjects in narratology is the interpretation of textual codes, on the basis of which many hidden meanings can be understood. Molavi expresses many mystic points in the Mathnavi in narrative form; some of his ideas can be grasped on a first reading, but these are not all he wants to say, and a complex of different meanings is hidden in any narrative which can be revealed by examining its codes. In the terminology of semiology, a code is a special situation in the historical process of all indexes and signs which is specified for synchronized analysis. In the review of literary texts, textual codes are the most important. Textual codes are codes whose ambit extends beyond a few particular texts and which link these texts to each other in an interpretational form. Aesthetic codes are a group of textual codes used in different arts such as poetry, painting, theatre and music. The style of expression of aesthetic codes is the style of art and literature itself. In reviewing the Mathnavi's narratives, attention to narrative codes, which can be considered paralinguistic signs, is essential. Narrative codes form the interpretational frame used by authors and commentators of texts. In the story of the parrot and the merchant the textual codes are as follows: the parrot is a symbolic code representing all the features of the human soul; the merchant acts as a cultural code indicating the rich, who are always solicitous about their finances and unaware of the spiritual world; and India, as a signifier code, indicates the spiritual world. As for hermeneutic codes, Molana uses codes which act as turning points, through which the addressee can understand hidden meanings...

  15. [Criminal code and assisted human reproduction].

    Science.gov (United States)

    Cortés Bechiarelli, Emilio

    2009-01-01

    The Spanish Criminal Code punishes, in article 161, the crime of assisted reproduction of a woman without her consent, as a form of crime relative to genetic manipulation. The crime protects a specific area of the woman's freedom of decision, namely the one she holds, regarding the right to procreation, at the moment of being fertilized. The sentence would include the damages to health provoked by the birth or the abortion. The crime is a common one (anyone can commit it) and a resulting pregnancy is not required; it is consummated by the mere intervention on the body of the woman. Its interpretation rests on Law 14/2006, of May 26, on techniques of human assisted reproduction. The aim of this work is to propose considering valid the consent given by women sixteen years old and older, in coherence with the Project of Law on sexual and reproductive health and voluntary interruption of pregnancy currently under study in Spain, in order to harmonize the legal systems.

  16. The Interpretation Of Speech Code In A Communication Ethnographic Context For Outsider Students Of Graduate Communication Science Universitas Sumatera Utara In Medan

    Directory of Open Access Journals (Sweden)

    Fauzi Eka Putra

    2017-06-01

    Interpreting the typical Medan speech code is something unique and distinctive which can create confusion for outsider students because of the similarities and differences among speech codes in Medan. Graduate students of communication science at Universitas Sumatera Utara who originate from outside North Sumatera therefore need to learn, comprehend and be aware of these codes in order to communicate effectively. The purpose of this research is to discover how these students interpret the speech code while adapting themselves to Medan. The research uses a qualitative method with the study of ethnography and acculturation communication. The subjects of the research are graduate students of communication science at Universitas Sumatera Utara who originate from outside North Sumatera and are adapting themselves to Medan. Data were collected through interviews, observation and documentation. The conclusion of the research shows that the interpretation of the speech code by students from outside North Sumatera while adapting themselves to Medan leads to an acculturation process of assimilation and integration.

  17. A scored human protein-protein interaction network to catalyze genomic interpretation

    DEFF Research Database (Denmark)

    Li, Taibo; Wernersson, Rasmus; Hansen, Rasmus B

    2017-01-01

    Genome-scale human protein-protein interaction networks are critical to understanding cell biology and interpreting genomic data, but challenging to produce experimentally. Through data integration and quality control, we provide a scored human protein-protein interaction network (InWeb_InBioMap)...

  18. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code) or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which permits one to distinguish which scales of a signal are dominated by a complex temporal organization and which by a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect the temporal scales at which a time-pattern coding scheme or, alternatively, a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna of a human patient with Parkinson's disease, and show that a time-pattern coding scheme and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
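
    A generic first-order structure function makes the idea tangible: applied to an interspike-interval series, a flat curve suggests a memoryless (rate-like) process, while growth over a range of lags reveals temporal patterning. The estimator below is a simplified stand-in, not the authors' exact formulation.

      import numpy as np

      def structure_function(x, taus):
          """S(tau) = mean |x[i+tau] - x[i]| over an interspike-interval series."""
          return np.array([np.mean(np.abs(x[tau:] - x[:-tau])) for tau in taus])

      rng = np.random.default_rng(3)
      isi_random = rng.exponential(0.05, 5000)                    # Poisson-like train
      isi_patterned = 0.05 + 0.02 * np.sin(np.arange(5000) / 50)  # slow modulation
      taus = np.array([1, 5, 25, 125])
      print(structure_function(isi_random, taus))     # roughly flat across lags
      print(structure_function(isi_patterned, taus))  # grows with the lag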

  19. VOLTAIRE’S PHILOSOPHY: HUMAN NATURE AND INTERPRETATION OF RELIGION

    Directory of Open Access Journals (Sweden)

    Yu. Zimaryova

    2013-12-01

    The purpose of the article is to determine and reconsider Voltaire's ideas concerning religion and human nature. In order to achieve this purpose it is necessary to complete the following tasks: to analyse the academic literature on Voltaire's interpretation of the phenomenon of religion; to expose Voltaire's basic ideas about human nature; and to substantiate the importance of an anthropological approach to the phenomenon of religion with the ideas of Voltaire's philosophical works. Methodology. The achievements of the anthropocentric philosophical thought of the XIX century possess great potential for the constructive comprehension and theoretical reconstruction of the anthropological intention that accompanies the process of philosophising. The research extensively applies the hermeneutical method in interpreting Voltaire's philosophy. Scientific novelty. In the academic literature on Voltaire's works we have ascertained the basic anthropological component of his philosophy and reconsidered Voltaire's ideas about religion as something that is rooted in human nature. Conclusions. In the academic literature the interpretation of the phenomenon of religion in Voltaire's heritage is rather controversial. On the one hand, Voltaire criticizes religion for its superstitions and fanaticism. On the other hand, he recognises the existence of God. In our opinion, the phenomenon of religion should be examined in the context of human nature and the basic problems related to it, such as the problem of the soul and the problem of free will. The anthropological approach to the phenomenon of religion makes it possible to avoid the extremes of the atheistic and metaphysical approaches and enables its anthropological interpretation.

  20. SHEAN (Simplified Human Error Analysis code) and automated THERP

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1993-01-01

    One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN
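
    The look-up flavour of ASEP/SHEAN is easy to picture in code. The questions and human error probabilities below are invented placeholders, not values from the ASEP tables.

      # Answer a few scenario questions, then look up a screening human error
      # probability (HEP). Keys and values are illustrative only.
      HEP_TABLE = {
          ("post-accident", "step-by-step", True): 0.02,
          ("post-accident", "step-by-step", False): 0.05,
          ("post-accident", "dynamic", True): 0.05,
          ("post-accident", "dynamic", False): 0.25,
      }

      def screening_hep(phase: str, task_type: str, written_procedures: bool) -> float:
          return HEP_TABLE[(phase, task_type, written_procedures)]

      print(screening_hep("post-accident", "dynamic", False))  # -> 0.25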

  1. Interpretation of Ersec tests on the backup cooling of pressurized water reactors, by using the FLIRA code

    International Nuclear Information System (INIS)

    Reviglio, Christiane

    1977-01-01

    This research thesis addresses the most severe accident, or reference accident, which might occur in nuclear reactors: a clean break of a cold branch of the primary circuit, which may compromise the integrity of the barriers preventing the dispersion of radioactive products outside the reactor. More particularly, the thesis studies the backup cooling system, for which the fluid flow during re-flooding must be predicted and the heat exchange coefficients must be known in order to assess the evolution of sheath temperatures. The research comprised an experimental part which aimed at reproducing the re-flooding sequence as faithfully as possible on a tube with internal flow, or on a cluster for a better core simulation; these are the ERSEC tests to be interpreted. It also comprised a theoretical part based on the use of computational codes which simulate the different phases of the accident and of backup fluid injection. These codes are based on physical models which describe two-phase flows and heat exchanges, and are adjusted to experimental results. The FLIRA code is used, which simulates the re-flooding of a reactor duct and determines the evolution of the different quantities (pressure, temperatures, flow rate, and so on) during the re-flooding process. The author thus presents the reference accident, reports studies performed in the USA and in France (ERSEC tests), indicates the various flow regimes and describes heat exchange mechanisms during re-flooding, presents ERSEC test results, presents the FLIRA code, reports the elaboration of the governing equations, indicates the various models introduced in the FLIRA code, and describes the numerical processing of the equations. He finally gives a first interpretation of the ERSEC tests based on the use of the FLIRA code.

  2. De novo origin of human protein-coding genes.

    Directory of Open Access Journals (Sweden)

    Dong-Dong Wu

    2011-11-01

    The de novo origin of a new protein-coding gene from non-coding DNA is considered to be a very rare occurrence in genomes. Here we identify 60 new protein-coding genes that originated de novo on the human lineage since divergence from the chimpanzee. The functionality of these genes is supported by both transcriptional and proteomic evidence. RNA-seq data indicate that these genes have their highest expression levels in the cerebral cortex and testes, which might suggest that these genes contribute to phenotypic traits that are unique to humans, such as improved cognitive ability. Our results are inconsistent with the traditional view that the de novo origin of new genes is very rare, thus there should be greater appreciation of the importance of the de novo origination of genes.

  3. De Novo Origin of Human Protein-Coding Genes

    Science.gov (United States)

    Wu, Dong-Dong; Irwin, David M.; Zhang, Ya-Ping

    2011-01-01

    The de novo origin of a new protein-coding gene from non-coding DNA is considered to be a very rare occurrence in genomes. Here we identify 60 new protein-coding genes that originated de novo on the human lineage since divergence from the chimpanzee. The functionality of these genes is supported by both transcriptional and proteomic evidence. RNA–seq data indicate that these genes have their highest expression levels in the cerebral cortex and testes, which might suggest that these genes contribute to phenotypic traits that are unique to humans, such as improved cognitive ability. Our results are inconsistent with the traditional view that the de novo origin of new genes is very rare, thus there should be greater appreciation of the importance of the de novo origination of genes. PMID:22102831

  4. A human-specific de novo protein-coding gene associated with human brain functions.

    Directory of Open Access Journals (Sweden)

    Chuan-Yun Li

    2010-03-01

    To understand whether any human-specific new genes may be associated with human brain functions, we computationally screened the genetic vulnerability factors identified through Genome-Wide Association Studies and linkage analyses of nicotine addiction and found one human-specific de novo protein-coding gene, FLJ33706 (alternative gene symbol C20orf203). Cross-species analysis revealed an interesting evolutionary path by which this gene originated from noncoding DNA sequences: insertion of repeat elements, especially Alu, contributed to the formation of the first coding exon and six standard splice junctions on the branch leading to humans and chimpanzees, and two subsequent substitutions in the human lineage escaped two stop codons and created an open reading frame of 194 amino acids. We experimentally verified FLJ33706's mRNA and protein expression in the brain. Real-Time PCR in multiple tissues demonstrated that FLJ33706 was most abundantly expressed in brain. Human polymorphism data suggested that FLJ33706 encodes a protein under purifying selection. A specifically designed antibody detected its protein expression across human cortex, cerebellum and midbrain. An immunohistochemistry study in normal human brain cortex revealed the localization of FLJ33706 protein in neurons. Elevated expression of FLJ33706 was detected in Alzheimer's brain samples, suggesting a role for this novel gene in the human-specific pathogenesis of Alzheimer's disease. FLJ33706 provides the strongest evidence so far that human-specific de novo genes can have protein-coding potential and differential protein expression, and be involved in human brain functions.
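
    The "escaped stop codons create an open reading frame" logic reduces to a simple scan, sketched below; the sequence is a toy example, not FLJ33706.

      # Find the longest open reading frame (ATG ... stop) across three frames.
      STOPS = {"TAA", "TAG", "TGA"}

      def longest_orf(seq: str) -> str:
          best = ""
          for frame in range(3):
              codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
              start = None
              for idx, codon in enumerate(codons):
                  if codon == "ATG" and start is None:
                      start = idx                       # remember the start codon
                  elif codon in STOPS and start is not None:
                      orf = "".join(codons[start:idx + 1])
                      if len(orf) > len(best):
                          best = orf
                      start = None
          return best

      print(longest_orf("CCATGAAATGGCCTTTGAACTGACC"))  # ATG...TGA found in frame 2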

  5. Human trafficking for labour exploitation: Interpreting the crime

    Directory of Open Access Journals (Sweden)

    Jill E.B. Coster van Voorhout

    2007-12-01

    The definition of human trafficking for labour exploitation, as follows from the European Council Framework Decision, proves to be unclear. Literal interpretation does not suffice, because it does not clarify all elements of what is deemed to be criminal behaviour, and hermeneutical interpretation also falls short, discouraging the aim of this legislation, namely harmonisation. Hence, another solution is required. This article provides one by firstly challenging assumptions about human trafficking for labour exploitation that are widely held, but nonetheless untrue. This accurate appraisal of the crime's nature is followed by a synopsis of national legislation and adjudication in three Member States, so as to also focus on those actualities regarding the crime that are commonly overlooked. The article examines two countries that have implemented the Framework Decision, namely Belgium and the Netherlands, and one that has not yet done so, the United Kingdom. Thereafter, the remaining unexplained elements of the Framework Decision's definition are interpreted with the use of international, pan-European and European legislation and adjudication. Based upon all this, a suggested interpretation of the Framework Decision's definition is provided so as to overcome all the identified difficulties with it.

  6. The American Nurses Association Code of Ethics: a reflection on the ethics of respect and human dignity with nurse as expert.

    Science.gov (United States)

    Milton, Constance L

    2003-10-01

    The American Nurses Association Code of Ethics for Nurses calls for the nurse to practice with compassion and respect for every individual. What are the ethics and challenges of practicing professional nursing with expertise and educating a new generation of nurses while incorporating the interpretive statements into practice? This column differentiates the traditional biomedical views on human dignity and respect while exploring the embedded ethics of respect and self-determination and what it truly means to be an expert of nursing from the theoretical perspective of the human becoming school of thought.

  7. Code system to compute radiation dose in human phantoms

    International Nuclear Information System (INIS)

    Ryman, J.C.; Cristy, M.; Eckerman, K.F.; Davis, J.L.; Tang, J.S.; Kerr, G.D.

    1986-01-01

    A Monte Carlo photon transport code and a code using Monte Carlo integration of a point kernel have been revised to incorporate human phantom models for an adult female, juveniles of various ages, and a pregnant female at the end of the first trimester of pregnancy, in addition to the adult male used earlier. An analysis code has been developed for deriving recommended values of specific absorbed fractions of photon energy. The computer code system and calculational method are described, emphasizing recent improvements in methods.

  8. Genome-wide identification of coding and non-coding conserved sequence tags in human and mouse genomes

    Directory of Open Access Journals (Sweden)

    Maggi Giorgio P

    2008-06-01

    Background: The accurate detection of genes and the identification of functional regions is still an open issue in the annotation of genomic sequences. This problem affects new genomes but also those of very well studied organisms such as human and mouse where, despite great efforts, the inventory of genes and regulatory regions is far from complete. Comparative genomics is an effective approach to address this problem. Unfortunately it is limited by the computational requirements needed to perform genome-wide comparisons and by the problem of discriminating between conserved coding and non-coding sequences. This discrimination is often based on (and thus dependent on) the availability of annotated proteins. Results: In this paper we present the results of a comprehensive comparison of the human and mouse genomes performed with a new high-throughput grid-based system which allows the rapid detection of conserved sequences and an accurate assessment of their coding potential. By detecting clusters of coding conserved sequences, the system is also suitable for accurately identifying potential gene loci. Following this analysis we created a collection of human-mouse conserved sequence tags and carefully compared our results to reliable annotations in order to benchmark the reliability of our classifications. Strikingly, we were able to detect several potential gene loci supported by EST sequences but not corresponding to as yet annotated genes. Conclusion: Here we present a new system which allows the comprehensive comparison of genomes to detect conserved coding and non-coding sequences and to identify potential gene loci. Our system does not require the availability of any annotated sequence and is thus suitable for the analysis of new or poorly annotated genomes.

  9. Imperative-program transformation by instrumented-interpreter specialization

    DEFF Research Database (Denmark)

    Debois, Søren

    2008-01-01

    We describe how to implement strength reduction, loop-invariant code motion and loop quasi-invariant code motion by specializing instrumented interpreters. To curb the code duplication intrinsic to such specialization, we introduce a new program transformation, rewinding, which uses Moore-automata minimization...

  10. Transitioning from interpretive to predictive in thermal hydraulic codes

    International Nuclear Information System (INIS)

    Mousseau, V.A.

    2004-01-01

    ... simulation tools. These different computer codes have then been loosely coupled to represent the transient. It is important to note that the accuracy of the transient is determined by the largest error in the system. For example, if neutron diffusion and thermal conduction are each solved in a second-order-in-time accurate manner, but they only exchange information every five time steps, then the system is first order in time, since the coupling is first order in time. Therefore, it is important to ensure that the coupling of two different computer codes is as accurate as the two codes being coupled. Focusing on the reactor cooling system (the two-phase flow and the heat conduction), it is important to solve the coupling between the phases and the wall as accurately as possible. In RELAP these two pieces of nonlinearly coupled physics are solved in separate linear systems. The RELAP solution procedure is to take a nonlinear system of equations, split it into two separate pieces, linearize and solve the fluid flow and heat conduction separately, and then couple them together in an explicit fashion. It should be noted that different versions of RELAP allow for linearly implicit coupling of the fluid flow and wall heat transfer under special conditions. This manuscript examines the reactor cooling system and presents an analysis of the error caused by linearizing and splitting nonlinearly coupled physics. Modern computers and numerical methods allow this system of nonlinear equations to be solved without linearizing and splitting. This coupled approach is referred to as an implicitly balanced solution method. Because future thermal hydraulic codes will be required to move from the low accuracy requirements of data interpretation to the high accuracy requirements of data prediction, the accuracy of current thermal hydraulic codes needs to be increased. This manuscript provides some of the first steps required to analyze the temporal...

  11. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.
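
    The extended foveation filter can be approximated by a saliency-weighted blend of a sharp and a blurred copy of each frame. A minimal sketch, assuming a saliency map normalised to [0, 1]; the actual method also enhances signals and avoids artifacts in ways not captured here.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def foveation_filter(frame, saliency, max_sigma=4.0):
          """Keep salient regions sharp; smooth low-saliency regions pre-coding."""
          s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)
          blurred = gaussian_filter(frame, sigma=max_sigma)
          return s * frame + (1.0 - s) * blurred

      frame = np.random.default_rng(4).random((120, 160))
      yy, xx = np.mgrid[0:120, 0:160]
      saliency = np.exp(-((yy - 60) ** 2 + (xx - 80) ** 2) / (2 * 20.0 ** 2))
      print(foveation_filter(frame, saliency).shape)  # (120, 160)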

  12. Human biomonitoring data interpretation and ethics; obstacles or surmountable challenges?

    Directory of Open Access Journals (Sweden)

    Sepai Ovnair

    2008-01-01

    The use of human samples to assess environmental exposure and uptake of chemicals is more than an analytical exercise; it requires consideration of the utility and interpretation of data as well as due consideration of ethical issues. These aspects are inextricably linked. In 2004 the EC expressed its commitment to the development of a harmonised approach to human biomonitoring (HBM) by including an action in the EU Environment and Health Strategy to develop a Human Biomonitoring Pilot Study. This further underlined the need for interpretation strategies as well as guidance on ethical issues. A workshop held in December 2006 brought together stakeholders from academia, policy making, non-governmental organisations and chemical industry associations, building a mutual understanding of the issues in an open and frank two-day discussion forum. This paper describes the discussion and recommendations from the workshop, which developed the following key recommendations for a Pan-European HBM Study: 1. A strategy for the interpretation of human biomonitoring data should be developed. 2. The pilot study should include the development of a strategy to integrate health data and environmental monitoring with human biomonitoring data at national and international levels. 3. Communication strategies should be developed when designing the study and evolve as the study continues. 4. Early communication with stakeholders is essential to achieve maximum efficacy of policy developments and facilitate subsequent monitoring. 5. Member states will have to apply individually for project approval from their National Research Ethics Committees. 6. The study population needs to have sufficient information on the way data will be gathered, interpreted and disseminated and how samples will be stored and used in the future (if biobanking) before they can give informed consent. 7. The participants must be given the option of anonymity. This has an impact...

  13. Informal interpreting in general practice: Are interpreters' roles related to perceived control, trust, and satisfaction?

    Science.gov (United States)

    Zendedel, Rena; Schouten, Barbara C; van Weert, Julia C M; van den Putte, Bas

    2018-06-01

    The aim of this observational study was twofold. First, we examined how often and which roles informal interpreters performed during consultations between Turkish-Dutch migrant patients and general practitioners (GPs). Second, relations between these roles and patients' and GPs' perceived control, trust in informal interpreters and satisfaction with the consultation were assessed. A coding instrument was developed to quantitatively code informal interpreters' roles from transcripts of 84 audio-recorded interpreter-mediated consultations in general practice. Patients' and GPs' perceived control, trust and satisfaction were assessed in a post consultation questionnaire. Informal interpreters most often performed the conduit role (almost 25% of all coded utterances), and also frequently acted as replacers and excluders of patients and GPs by asking and answering questions on their own behalf, and by ignoring and omitting patients' and GPs' utterances. The role of information source was negatively related to patients' trust and the role of GP excluder was negatively related to patients' perceived control. Patients and GPs are possibly insufficiently aware of the performed roles of informal interpreters, as these were barely related to patients' and GPs' perceived trust, control and satisfaction. Patients and GPs should be educated about the possible negative consequences of informal interpreting. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Development and Application of a Code for Internal Exposure (CINEX) based on the CINDY code

    International Nuclear Information System (INIS)

    Kravchik, T.; Duchan, N.; Sarah, R.; Gabay, Y.; Kol, R.

    2004-01-01

    Internal exposure to radioactive materials at the NRCN is evaluated using the CINDY (Code for Internal Dosimetry) Package. The code was developed by the Pacific Northwest Laboratory to assist the interpretation of bioassay data, provide bioassay projections and evaluate committed and calendar-year doses from intake or bioassay measurement data. It provides capabilities to calculate organ dose and effective dose equivalents using the International Commission on Radiological Protection (ICRP) 30 approach. The CINDY code operates under DOS operating system and consequently its operation needs a relatively long procedure which also includes a lot of manual typing that can lead to personal human mistakes. A new code has been developed at the NRCN, the CINEX (Code for Internal Exposure), which is an Excel application and leads to a significant reduction in calculation time (in the order of 5-10 times) and in the risk of personal human mistakes. The code uses a database containing tables which were constructed by the CINDY and contain the bioassay values predicted by the ICRP30 model after an intake of an activity unit of each isotope. Using the database, the code than calculates the appropriate intake and consequently the committed effective dose and organ dose. Calculations with the CINEX code were compared to similar calculations with the CINDY code. The discrepancies were less than 5%, which is the rounding error of the CINDY code. Attached is a table which compares parameters calculated with the CINEX and the CINDY codes (for a class Y uranium). The CINEX is now used at the NRCN to calculate occupational intakes and doses to workers with radioactive materials
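
    The bioassay-to-dose chain the abstract describes boils down to dividing a measurement by a tabulated intake retention fraction and multiplying by a dose coefficient. The numbers below are placeholders, not ICRP 30 table values.

      def committed_dose(measured_bq, irf_at_t, dose_coeff_sv_per_bq):
          """intake = measurement / IRF(t); dose = intake * dose coefficient."""
          intake_bq = measured_bq / irf_at_t
          return intake_bq * dose_coeff_sv_per_bq

      # 10 Bq in a day-7 sample, IRF(7 d) = 1e-3, 5e-8 Sv/Bq (all placeholders)
      print(committed_dose(10.0, 1e-3, 5e-8), "Sv")  # -> 0.0005 Sv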

  15. Neuro-Impressions: Interpreting the Nature of Human Creativity

    Directory of Open Access Journals (Sweden)

    Todd Lael Siler

    2012-10-01

    Understanding the creative process is essential for realizing human potential. Over the past four decades, the author has explored this subject through his brain-inspired drawings, paintings, symbolic sculptures, and experimental art installations that present myriad impressions of human creativity. These impressionistic artworks interpret rather than illustrate the complexities of the creative process. They draw insights from empirical studies that correlate how human beings create, learn, remember, innovate, and communicate. In addition to offering fresh aesthetic experiences, this metaphorical art raises fundamental questions concerning the deep connections between the brain and its creations. The author describes his artworks as embodiments of everyday observations about the neuropsychology of creativity, and its all-purpose applications for stimulating and accelerating innovation.

  16. Promoter Analysis Reveals Globally Differential Regulation of Human Long Non-Coding RNA and Protein-Coding Genes

    KAUST Repository

    Alam, Tanvir

    2014-10-02

    Transcriptional regulation of protein-coding genes is increasingly well-understood on a global scale, yet no comparable information exists for long non-coding RNA (lncRNA) genes, which were recently recognized to be as numerous as protein-coding genes in mammalian genomes. We performed a genome-wide comparative analysis of the promoters of human lncRNA and protein-coding genes, finding global differences in specific genetic and epigenetic features relevant to transcriptional regulation. These two groups of genes are hence subject to separate transcriptional regulatory programs, including distinct transcription factor (TF) proteins that significantly favor lncRNA, rather than coding-gene, promoters. We report a specific signature of promoter-proximal transcriptional regulation of lncRNA genes, including several distinct transcription factor binding sites (TFBS). Experimental DNase I hypersensitive site profiles are consistent with active configurations of these lncRNA TFBS sets in diverse human cell types. TFBS ChIP-seq datasets confirm the binding events that we predicted using computational approaches for a subset of factors. For several TFs known to be directly regulated by lncRNAs, we find that their putative TFBSs are enriched at lncRNA promoters, suggesting that the TFs and the lncRNAs may participate in a bidirectional feedback loop regulatory network. Accordingly, cells may be able to modulate lncRNA expression levels independently of mRNA levels via distinct regulatory pathways. Our results also raise the possibility that, given the historical reliance on protein-coding gene catalogs to define the chromatin states of active promoters, a revision of these chromatin signature profiles to incorporate expressed lncRNA genes is warranted in the future.

  17. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced which modifies the slope of the rate-distortion curve. The novelty is that the method of visual weighting does not lift the coefficients in the wavelet domain, but is implemented through code stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against error bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and offers the ability of VIsual Progressive (VIP) coding.

  18. Human Rights in Natural Science and Technology Professions’ Codes of Ethics?

    OpenAIRE

    Haugen, Hans Morten

    2013-01-01

    Abstract: No global professional codes for the natural science and technology professions exist. In light of how the application of new technology can affect individuals and communities, this discrepancy warrants greater scrutiny. This article analyzes the most relevant processes and seeks to explain why these processes have not resulted in global codes. Moreover, based on a human rights approach, the article gives recommendations on the future process and content of codes for...

  19. A decision support system and rule-based algorithm to augment the human interpretation of the 12-lead electrocardiogram.

    Science.gov (United States)

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J

    The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, a significant cognitive workload is required from the interpreter. This complexity often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process, a computerised decision support system named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies, where diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed in which subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value = 0.1852); the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although the results were not found to be statistically significant, we found that: 1) our decision support tool increased the number of correct interpretations; 2) the DDA algorithm suggested the correct...
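
    The DDA idea (criteria in an open JSON format, queried by a rule-based matcher) can be sketched as follows. The criterion names and structure here are invented for illustration, not the IPI+DDA criteria set.

      import json

      # Diagnostic criteria in an open JSON format, as the abstract describes.
      criteria = json.loads("""
      [
        {"diagnosis": "Anterior STEMI",
         "requires": ["st_elevation_v2", "st_elevation_v3"]},
        {"diagnosis": "First-degree AV block",
         "requires": ["pr_interval_gt_200ms"]}
      ]
      """)

      def suggest(annotations):
          """Return each diagnosis whose required findings were all annotated."""
          found = set(annotations)
          return [c["diagnosis"] for c in criteria if found >= set(c["requires"])]

      print(suggest(["st_elevation_v2", "st_elevation_v3", "sinus_rhythm"]))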

  20. Human Motion Capture Data Tailored Transform Coding.

    Science.gov (United States)

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
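
    The pipeline (clips as 2D matrices, data-dependent orthogonal bases, quantised coefficients) can be mimicked with an SVD per clip. A minimal sketch under that reading; the paper's actual basis construction, quantiser and entropy coder are not reproduced here.

      import numpy as np

      def encode_clip(clip, k=8, step=0.02):
          """clip: frames x channels. Project onto top-k SVD bases, quantise."""
          mean = clip.mean(axis=0)
          _, _, vt = np.linalg.svd(clip - mean, full_matrices=False)
          coeff = (clip - mean) @ vt[:k].T
          return np.round(coeff / step).astype(np.int32), vt[:k], mean, step

      def decode_clip(q, basis, mean, step):
          return (q * step) @ basis + mean

      # Toy "mocap" clip: 60 channels driven by 3 smooth latent motions.
      t = np.linspace(0, 4 * np.pi, 240)
      modes = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)])
      clip = modes.T @ np.random.default_rng(5).normal(size=(3, 60))

      q, basis, mean, step = encode_clip(clip)
      print(np.abs(decode_clip(q, basis, mean, step) - clip).max())  # ~quantisation error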

  1. Visual gravity cues in the interpretation of biological movements: neural correlates in humans.

    Science.gov (United States)

    Maffei, Vincenzo; Indovina, Iole; Macaluso, Emiliano; Ivanenko, Yuri P; A Orban, Guy; Lacquaniti, Francesco

    2015-01-01

    Our visual system takes into account the effects of Earth gravity to interpret biological motion (BM), but the neural substrates of this process remain unclear. Here we measured functional magnetic resonance imaging (fMRI) signals while participants viewed intact or scrambled stick-figure animations of walking, running, hopping, and skipping recorded at normal or reduced gravity. We found that regions sensitive to BM configuration in the occipito-temporal cortex (OTC) were more active for reduced than normal gravity, but with intact stimuli only. Effective connectivity analysis suggests that predictive coding of gravity effects underlies BM interpretation. This process might be implemented by a family of snapshot neurons involved in action monitoring. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant...

  3. Annotating non-coding regions of the genome.

    Science.gov (United States)

    Alexander, Roger P; Fang, Gang; Rozowsky, Joel; Snyder, Michael; Gerstein, Mark B

    2010-08-01

    Most of the human genome consists of non-protein-coding DNA. Recently, progress has been made in annotating these non-coding regions through the interpretation of functional genomics experiments and comparative sequence analysis. One can conceptualize functional genomics analysis as involving a sequence of steps: turning the output of an experiment into a 'signal' at each base pair of the genome; smoothing this signal and segmenting it into small blocks of initial annotation; and then clustering these small blocks into larger derived annotations and networks. Finally, one can relate functional genomics annotations to conserved units and measures of conservation derived from comparative sequence analysis.
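
    The "signal → smoothing → segmentation" steps can be illustrated with a one-dimensional sketch: smooth a per-base signal and cut it into blocks where it exceeds a threshold. The window size and threshold are arbitrary choices for the toy data.

      import numpy as np

      def segment(signal, window=50, threshold=1.0):
          """Smooth, then return (start, end) blocks where the signal stays high."""
          smooth = np.convolve(signal, np.ones(window) / window, mode="same")
          above = np.concatenate(([False], smooth > threshold, [False]))
          edges = np.flatnonzero(np.diff(above.astype(int)))
          return list(zip(edges[::2], edges[1::2]))

      rng = np.random.default_rng(6)
      signal = rng.normal(0.5, 0.3, 10_000)   # background "functional genomics" signal
      signal[3_000:3_400] += 2.0              # an enriched region
      print(segment(signal))                  # ~[(3000, 3400)], edges shift slightly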

  4. Cracking the code: residents' interpretations of written assessment comments

    NARCIS (Netherlands)

    Ginsburg, S.; Vleuten, C.P.M. van der; Eva, K.W.; Lingard, L.

    2017-01-01

    CONTEXT: Interest is growing in the use of qualitative data for assessment. Written comments on residents' in-training evaluation reports (ITERs) can be reliably rank-ordered by faculty attendings, who are adept at interpreting these narratives. However, if residents do not interpret assessment

  5. Development of a ECOREA-II code for human exposures from radionuclides through food chain

    International Nuclear Information System (INIS)

    Yoo, D. H.; Choi, Y. H.

    2001-01-01

    The release of radionuclides from nuclear facilities into the air following an accident results in human exposures through two pathways. One is direct human exposure by inhalation or dermal absorption of these radionuclides. The other is indirect human exposure through the food chain, which includes intakes of plant products such as rice and vegetables from contaminated soil, and animal products such as meat, milk and eggs from animals fed contaminated grasses or plants on the terrestrial surface. This study presents efforts in the development of a computer code for the assessment of indirect human exposure through such food chains. The purpose of the ECOREA-II code is to provide models suitable for the specific soil conditions in Korea, based on previous experimental efforts, and to offer a more user-friendly environment, such as a GUI, for the use of the code. The current code, when more fully developed, is therefore expected to improve the understanding of environmental safety assessment of nuclear facilities following an accident and to provide a reasonable regulatory guideline with respect to food safety issues.

  6. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
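
    The write-input/run/read-output pattern the DLL implements is easy to picture. The Python sketch below mimics that control flow with a hypothetical executable and file format; it is not the DLLExternalCode API itself, which is a DLL driven by an instructions file as described above.

        import subprocess

        def run_external(inputs, exe="./external_code",
                         infile="run.inp", outfile="run.out"):
            """Mimic the DLL's control flow; the executable name and file
            formats here are hypothetical placeholders."""
            with open(infile, "w") as f:               # 1. create the input file
                f.write("\n".join(f"{k} = {v}" for k, v in inputs.items()))
            subprocess.run([exe, infile], check=True)  # 2. run the external code
            outputs = {}
            with open(outfile) as f:                   # 3. read outputs back
                for line in f:
                    key, val = line.split("=")
                    outputs[key.strip()] = float(val)
            return outputs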

  7. Interpretation of the CABRI LT1 test with SAS4A-code analysis

    International Nuclear Information System (INIS)

    Sato, Ikken; Onoda, Yu-uichi

    2001-03-01

    In the CABRI-FAST LT1 test, simulating a ULOF (Unprotected Loss of Flow) accident of an LMFBR, pin failure took place rather early during the transient. No fuel melting is expected at this failure because the energy injection was too low, and a rapid gas-release-like response leading to coolant-channel voiding was observed. This channel voiding was followed by gradual fuel breakup and axial relocation. With the aid of SAS4A analysis, an interpretation of this test was performed. Although the original SAS4A model was not well suited to this type of early pin failure, the global behavior after the pin failure was reasonably simulated with temporary modifications. Through this study, the gas release behavior from the failed fuel pin and its effect on the subsequent transient were well understood. It was also demonstrated that the SAS4A code has the potential to simulate post-failure behavior initiated by a very early pin failure, provided that the necessary model modifications are made. (author)

  8. Proton absorbed dose distribution in human eye simulated by SRNA-2KG code

    International Nuclear Information System (INIS)

    Ilic, R. D.; Pavlovic, R.

    2004-01-01

    The model of the Monte Carlo SRNA code is described, together with some numerical experiments that demonstrate the feasibility of using this code in proton therapy, especially for three-dimensional proton absorbed dose calculation in the human eye. (author)

  9. Cling - The LLVM-based C++ Interpreter

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cling (http://cern.ch/cling) is a C++ interpreter, built on top of clang (http://clang.llvm.org) and LLVM (http://llvm.org). Like its predecessor CINT, cling offers an interactive, terminal-like prompt. It enables exploratory programming with rapid edit / run cycles. The ROOT team has more than 15 years of experience with C++ interpreters, and this has been fully exploited in the design of cling. However, matching the concepts of an interpreter to a compiler library is a non-trivial task; we will explain how this is done for cling, and how we managed to implement cling as a small (10,000 lines of code) extension to the clang and llvm libraries. The resulting features clearly show the advantages of basing an interpreter on a compiler. Cling uses clang's praised concise and easy to understand diagnostics. Building an interpreter on top of a compiler library makes the transition between interpreted and compiled code much easier and smoother. We will present the design, e.g. how cling treats the C++ extensions ...

  10. The Prominent Role of National Judges in Interpreting the International Definition of Human Trafficking

    Directory of Open Access Journals (Sweden)

    Luuk B Esser

    2016-05-01

    Full Text Available Although there has been much discussion of the scope of the concept of human trafficking in international literature, the part played by national courts in interpreting definitions based on the international definition of human trafficking in the UN Trafficking Protocol has received little attention. When a judge interprets an offence, he or she clarifies or adds new meaning to it. The space for this is even greater when the underlying definition is broadly formulated, as in the case of the international definition of human trafficking. This article demonstrates that, although this international definition establishes the outer parameters within which conduct must be made a criminal offence, domestic courts still have room to flesh out the definition in national contexts. The role of national judges needs more consideration in today’s discourse on the legal definition of human trafficking.

  11. Writing Compilers and Interpreters A Software Engineering Approach

    CERN Document Server

    Mak, Ronald

    2011-01-01

    Long-awaited revision to a unique guide that covers both compilers and interpreters. Revised, updated, and now focusing on Java instead of C++, this long-awaited, latest edition of this popular book teaches programmers and software engineering students how to write compilers and interpreters using Java. You'll write compilers and interpreters as case studies, generating general assembly code for a Java Virtual Machine that takes advantage of the Java Collections Framework to shorten and simplify the code. In addition, coverage includes the Java Collections Framework, UML modeling, and object-oriented programming

  12. A symbiotic liaison between the genetic and epigenetic code

    Directory of Open Access Journals (Sweden)

    Holger Heyn

    2014-05-01

    Full Text Available With rapid advances in sequencing technologies, we are undergoing a paradigm shift from hypothesis- to data-driven research. Genome-wide profiling efforts gave informative insights into biological processes; however, considering the wealth of variation, the major challenge remains their meaningful interpretation. In particular, sequence variation in non-coding contexts is often challenging to interpret. Here, data integration approaches for the identification of functional genetic variability represent a likely solution. For example, functional linkage analysis integrating genotype and expression data determined regulatory quantitative trait loci (QTL) and proposed causal relationships. In addition to gene expression, epigenetic regulation, and specifically DNA methylation, was established as a highly valuable surrogate mark for functional variance of the genetic code. Epigenetic modification served as a powerful mediator trait to elucidate mechanisms forming phenotypes in health and disease. In particular, integrative studies of genetic and DNA methylation data have not only guided interpretation strategies for risk genotypes but also proved their value for physiological traits, such as natural human variation and aging. This Perspective seeks to illustrate the power of data integration in the genomic era, exemplified by DNA methylation quantitative trait loci (meQTLs). However, the model is further extendable to virtually all traceable molecular traits.

  13. Human resources managers as custodians of the King III code

    Directory of Open Access Journals (Sweden)

    Frank de Beer

    2015-05-01

    Full Text Available The objective of this research was to perform an exploratory study on the knowledge and understanding of the King III code among Human Resources (HR) managers in South African companies. The King III code is a comprehensive international corporate governance regime which addresses the financial, social, ethical and environmental practices of organisations. HR management plays a role in managing corporate governance by using the King III code as a guideline. The main research questions were: Does HR management know, understand, apply, and have the ability to use the King III code in terms of ethical decision-making? What role does HR management play in corporate governance? A random sample of available HR managers, senior HR consultants and HR directors was taken and semi-structured interviews were conducted. The results indicated that the respondents had no in-depth knowledge of the King III code. They did not fully understand the King III code and its implications, nor did they use it to ensure ethical management. The themes most emphasised by the participants were: culture, reward and remuneration, policies and procedures, and performance management. The participants emphasised the importance of these items and HR's role in managing them.

  14. The neural code for face orientation in the human fusiform face area.

    Science.gov (United States)

    Ramírez, Fernando M; Cichy, Radoslaw M; Allefeld, Carsten; Haynes, John-Dylan

    2014-09-03

    Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA. Copyright © 2014 the authors 0270-6474/14/3412155-13$15.00/0.

  15. DEEP code to calculate dose equivalents in human phantom for external photon exposure by Monte Carlo method

    International Nuclear Information System (INIS)

    Yamaguchi, Yasuhiro

    1991-01-01

    The present report describes a computer code, DEEP, which calculates the organ dose equivalents and the effective dose equivalent for external photon exposure by the Monte Carlo method. MORSE-CG, a Monte Carlo radiation transport code, is incorporated into the DEEP code to simulate photon transport phenomena in and around a human body. The code treats an anthropomorphic phantom represented by mathematical formulae, and the user has a choice of phantom sex: male, female or unisex. The phantom can wear personal dosimeters, and the user can specify their location and dimensions. This document includes instructions and a sample problem for the code, as well as a general description of the dose calculation, the human phantom and the computer code. (author)
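
    As a flavor of what Monte Carlo photon transport involves at its simplest, the sketch below samples exponential free paths in a one-dimensional absorbing slab and tallies where energy is deposited. It is a generic textbook illustration with assumed parameter values, not the MORSE-CG physics or the DEEP phantom geometry.

        import numpy as np

        rng = np.random.default_rng(1)
        MU = 0.02                      # attenuation coefficient in 1/mm (assumed)
        DEPTH, NBINS, N = 300.0, 30, 100_000

        # Sample exponential free paths s = -ln(U)/mu for N photons entering
        # a 1-D slab; each deposits its energy at its first interaction site
        # (a crude absorption-only model, nothing like a full transport code).
        s = -np.log(rng.random(N)) / MU
        inside = s < DEPTH
        tally = np.zeros(NBINS)
        np.add.at(tally, (s[inside] / DEPTH * NBINS).astype(int), 1.0)
        print(tally / N)               # fraction of histories absorbed per depth bin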

  16. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Full Text Available Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  17. Sign Language Interpreting in Theatre: Using the Human Body to Create Pictures of the Human Soul

    Directory of Open Access Journals (Sweden)

    Michael Richardson

    2017-06-01

    Full Text Available This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting, and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Moliere in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient in describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium

  18. Architectural design of an Algol interpreter

    Science.gov (United States)

    Jackson, C. K.

    1971-01-01

    The design of a syntax-directed interpreter for a subset of Algol is described. It is a conceptual design with sufficient details and completeness but as much independence of implementation as possible. The design includes a detailed description of a scanner, an analyzer described in the Floyd-Evans productions, a hash-coded symbol table, and an executor. Interpretation of sample programs is also provided to show how the interpreter functions.

  19. Quick response codes in Orthodontics

    Directory of Open Access Journals (Sweden)

    Moidin Shakil

    2015-01-01

    Full Text Available Quick response (QR) codes are two-dimensional barcodes that encode a large amount of information. QR codes in Orthodontics are an innovative approach in which patient details, radiographic interpretation, and treatment plan can be encoded. Implementing QR codes in Orthodontics will save time, reduce paperwork, and minimize manual effort in the storage and retrieval of patient information during subsequent stages of treatment.
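
    Generating such a code is straightforward with off-the-shelf tools, for instance the third-party Python qrcode package. The payload layout below is a made-up example of the patient details, interpretation and plan the article proposes to encode.

        import qrcode  # third-party package: pip install qrcode[pil]

        # Hypothetical payload layout -- the fields are illustrative only.
        payload = "PT:1042|DX:Class II div 1|PLAN:MBT 0.022, extract 14/24"
        img = qrcode.make(payload)     # returns a PIL image of the QR code
        img.save("patient_1042_qr.png")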

  20. Interpretation of basic concepts in theories of human motor abilities

    Directory of Open Access Journals (Sweden)

    Petrović Adam

    2014-01-01

    Full Text Available The basic aim of this research is to point out possible language, logical and knowledge problems in the interpretation and understanding of basic concepts in theories of motor abilities (TMA). Such a review is not directed only at 'mere understanding'; it can lead to new growth of scientific knowledge. Accordingly, the research question is set: Is there language, logical and knowledge agreement between the basic concepts in theories of human motor abilities? The answer to this question indicates that a more complete agreement between the basic concepts in theories of human motor abilities should be sought through scientific dialogue between researchers of various persuasions.

  1. Comprehensive reconstruction and visualization of non-coding regulatory networks in human

    Directory of Open Access Journals (Sweden)

    Vincenzo Bonnici

    2014-12-01

    Full Text Available Research attention has been powered to understand the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them a great potential as non-invasive biomarkers. However, non-coding RNAs have been relatively recently discovered and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNAs interactions are important steps to understand their regulatory mechanism in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNAs data interactions from a large number of well established online repositories. The interactions involve RNA, DNA, proteins and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command line and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape.

  2. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  3. NODAL interpreter for CP/M

    International Nuclear Information System (INIS)

    Oide, Katsunobu.

    1982-11-01

    A NODAL interpreter which works under the CP/M operating system has been written for microcomputers. This interpreter language, named NODAL-80, has a structure similar to the NODAL of the SPS, but its commands, variables, and expressions are modified to increase the flexibility of programming. NODAL-80 also uses a simple intermediate code to make execution fast without imposing any restriction on the dynamic features of the NODAL language. (author)
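
    The speed trick mentioned above, interpreting a simple intermediate code rather than re-parsing source text, can be illustrated generically: compile an expression once to postfix operations, then run them on a tiny stack machine. This sketch is not NODAL-80's actual intermediate format.

        # Generic illustration of interpreting via a simple intermediate code,
        # not NODAL-80's actual format: compile once, execute many times.
        def compile_rpn(tokens):
            """Shunting-yard for +, * and numbers; returns a postfix op list."""
            prec, out, ops = {"+": 1, "*": 2}, [], []
            for tok in tokens:
                if tok in prec:
                    while ops and prec[ops[-1]] >= prec[tok]:
                        out.append(ops.pop())
                    ops.append(tok)
                else:
                    out.append(float(tok))
            return out + ops[::-1]

        def execute(code):
            stack = []
            for op in code:
                if op == "+":
                    b, a = stack.pop(), stack.pop(); stack.append(a + b)
                elif op == "*":
                    b, a = stack.pop(), stack.pop(); stack.append(a * b)
                else:
                    stack.append(op)
            return stack[0]

        code = compile_rpn("2 + 3 * 4".split())   # compiled once...
        print(execute(code))                      # ...executed fast: 14.0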

  4. Interrelations of codes in human semiotic systems.

    OpenAIRE

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., through which semiosis is actualized. The combinations of these relations produce new relations as new codes are built over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. Building of codes over other codes fosters t...

  5. Glucose modulates food-related salience coding of midbrain neurons in humans.

    Science.gov (United States)

    Ulrich, Martin; Endres, Felix; Kölle, Markus; Adolph, Oliver; Widenhorn-Müller, Katharina; Grön, Georg

    2016-12-01

    Although early rat studies demonstrated that administration of glucose diminishes dopaminergic midbrain activity, evidence in humans has been lacking so far. In the present functional magnetic resonance imaging study, glucose was intravenously infused in healthy human male participants while seeing images depicting low-caloric food (LC), high-caloric food (HC), and non-food (NF) during a food/NF discrimination task. Analysis of brain activation focused on the ventral tegmental area (VTA) as the origin of the mesolimbic system involved in salience coding. Under unmodulated fasting baseline conditions, VTA activation was greater during HC compared with LC food cues. Subsequent to infusion of glucose, this difference in VTA activation as a function of caloric load leveled off and even reversed. In a control group not receiving glucose, VTA activation during HC relative to LC cues remained stable throughout the course of the experiment. Similar treatment-specific patterns of brain activation were observed for the hypothalamus. The present findings show for the first time in humans that glucose infusion modulates salience coding mediated by the VTA. Hum Brain Mapp 37:4376-4384, 2016. © 2016 Wiley Periodicals, Inc.

  6. Indirect assessment of an interpretation bias in humans: Neurophysiological and behavioral correlates

    Directory of Open Access Journals (Sweden)

    Anita Schick

    2013-06-01

    Full Text Available Affective state can influence cognition, leading to biased information processing, interpretation, attention, and memory. Such bias has been reported to be essential for the onset and maintenance of different psychopathologies, particularly affective disorders. However, empirical evidence has been very heterogeneous and little is known about the neurophysiological mechanisms underlying cognitive bias and its time-course. We therefore investigated the interpretation of ambiguous stimuli as indicators of biased information processing with an ambiguous cue-conditioning paradigm. In an acquisition phase, participants learned to discriminate two tones of different frequency, which acquired emotional and motivational value due to subsequent feedback (monetary gain or avoidance of monetary loss). In the test phase, three additional tones of intermediate frequencies were presented, whose interpretation as positive (approach of reward) or negative (avoidance of punishment), indicated by a button press, was used as an indicator of the bias. Twenty healthy volunteers participated in this paradigm while a 64-channel electroencephalogram was recorded. Participants also completed questionnaires assessing individual differences in depression and rumination. Overall, we found a small positive bias, which correlated negatively with reflective pondering, a type of rumination. As expected, reaction times were increased for intermediate tones. ERP amplitudes between 300-700 ms post-stimulus differed depending on the interpretation of the intermediate tones. A negative compared to a positive interpretation led to an amplitude increase over frontal electrodes. Our study provides evidence that in humans, as in animal research, the ambiguous cue-conditioning paradigm is a valid procedure for indirectly assessing ambiguous cue interpretation and a potential interpretation bias, which is sensitive to individual differences in affect-related traits.

  7. The spirituality of human consciousness: a Catholic evaluation of some current neuro-scientific interpretations.

    Science.gov (United States)

    McGoldrick, Terence A

    2012-09-01

    Catholic theology's traditional understanding of the spiritual nature of the human person begins with the idea of a rational soul and human mind that is made manifest in free will--the spiritual experience of the act of consciousness and cause of all human arts. The rationale for this religion-based idea of personhood is key to understanding ethical dilemmas posed by modern research that applies a more empirical methodology in its interpretations about the cause of human consciousness. Applications of these beliefs about the body/soul composite to the theory of evolution and to discoveries in neuroscience, paleoanthropology, as well as to recent animal intelligence studies, can be interpreted from this religious and philosophical perspective, which argues for the human soul as the unifying cause of the person's unique abilities. Free will and consciousness are at the nexus of the mutual influence of body and soul upon one another in the traditional Catholic view, that argues for a spiritual dimension to personality that is on a par with the physical metabolic processes at play. Therapies that affect consciousness are ethically problematic, because of their implications for free will and human dignity. Studies of resilience, as an example, argue for the greater, albeit limited, role of the soul's conscious choices in healing as opposed to metabolic or physical changes to the brain alone.

  8. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    Directory of Open Access Journals (Sweden)

    Jacquelyn A Shelton

    Full Text Available Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions), and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
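
    The generative side of the model can be sketched compactly: spike variables give exact zeros, slab variables give non-zero intensities, and a pointwise max (rather than a sum) combines components so that one can occlude another. The dimensions and parameter values below are arbitrary assumptions, and the paper's Gibbs sampler is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        H, D = 5, 64                       # components and pixels (assumed sizes)
        W = rng.random((H, D))             # dictionary of image components

        # Spike-and-slab prior: a binary spike s_h switches a component on or
        # off (exact zeros), a Gaussian slab z_h sets its intensity.
        s = rng.random(H) < 0.3
        z = rng.normal(1.0, 0.25, H)

        # Nonlinear max combination: each pixel shows only the strongest
        # component, so components can occlude one another instead of summing.
        clean = np.max((s * z)[:, None] * W, axis=0)
        y = clean + rng.normal(0.0, 0.05, D)    # observed noisy image
        print(int(s.sum()), "active components")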

  9. Differential DNA methylation profiles of coding and non-coding genes define hippocampal sclerosis in human temporal lobe epilepsy

    Science.gov (United States)

    Miller-Delaney, Suzanne F.C.; Bryan, Kenneth; Das, Sudipto; McKiernan, Ross C.; Bray, Isabella M.; Reynolds, James P.; Gwinn, Ryder; Stallings, Raymond L.

    2015-01-01

    Temporal lobe epilepsy is associated with large-scale, wide-ranging changes in gene expression in the hippocampus. Epigenetic changes to DNA are attractive mechanisms to explain the sustained hyperexcitability of chronic epilepsy. Here, through methylation analysis of all annotated C-phosphate-G islands and promoter regions in the human genome, we report a pilot study of the methylation profiles of temporal lobe epilepsy with or without hippocampal sclerosis. Furthermore, by comparative analysis of expression and promoter methylation, we identify methylation sensitive non-coding RNA in human temporal lobe epilepsy. A total of 146 protein-coding genes exhibited altered DNA methylation in temporal lobe epilepsy hippocampus (n = 9) when compared to control (n = 5), with 81.5% of the promoters of these genes displaying hypermethylation. Unique methylation profiles were evident in temporal lobe epilepsy with or without hippocampal sclerosis, in addition to a common methylation profile regardless of pathology grade. Gene ontology terms associated with development, neuron remodelling and neuron maturation were over-represented in the methylation profile of Watson Grade 1 samples (mild hippocampal sclerosis). In addition to genes associated with neuronal, neurotransmitter/synaptic transmission and cell death functions, differential hypermethylation of genes associated with transcriptional regulation was evident in temporal lobe epilepsy, but overall few genes previously associated with epilepsy were among the differentially methylated. Finally, a panel of 13, methylation-sensitive microRNA were identified in temporal lobe epilepsy including MIR27A, miR-193a-5p (MIR193A) and miR-876-3p (MIR876), and the differential methylation of long non-coding RNA documented for the first time. The present study therefore reports select, genome-wide DNA methylation changes in human temporal lobe epilepsy that may contribute to the molecular architecture of the epileptic brain.

  10. The nuclear codes and guidelines

    International Nuclear Information System (INIS)

    Sonter, M.

    1984-01-01

    This paper considers problems faced by the mining industry when implementing the nuclear codes of practice. Errors of interpretation are likely. A major criticism is that the guidelines to the codes must be seen as recommendations only. They are not regulations. Specific clauses in the guidelines are criticised

  11. A code of ethics for evidence-based research with ancient human remains.

    Science.gov (United States)

    Kreissl Lonfat, Bettina M; Kaufmann, Ina Maria; Rühli, Frank

    2015-06-01

    As clinical research constantly advances and the concept of evolution becomes a strong and influential part of basic medical research, the absence of a discourse that deals with the use of ancient human remains in evidence-based research is becoming unbearable. While topics such as the exhibition and excavation of human remains are established ethical fields of discourse, when faced with the instrumentalization of ancient human remains for research (i.e., ancient DNA extractions for disease marker analyses), answers from traditional ethics, or even from the more practical fields of bioethics or more specific biomedical ethics, are rare to non-existent. The Centre for Evolutionary Medicine at the University of Zurich addressed its need for discursive action by writing a self-given code of ethics, which was drafted in dialogue with the researchers at the Institute and published online in September 2011: http://evolutionäremedizin.ch/coe/. The philosophico-ethical basis for this code of conduct and ethics and the methods are published in this article. © 2015 Wiley Periodicals, Inc.

  12. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  13. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  14. The Problems of Interpretation of the European Convention for the Protection of Human Rights and Fundamental Freedoms in the European Court of Human Rights

    Directory of Open Access Journals (Sweden)

    Ivanets Ivan Petrovich

    2014-06-01

    Full Text Available According to clause 1 of Article 32 of the European Convention for the Protection of Human Rights and Fundamental Freedoms of 1950 (hereinafter referred to as the European Convention or the Convention), the competence of the European Court of Human Rights (hereinafter referred to as the Court) extends to all issues of interpretation and application of the Convention and its protocols. The European Convention thus makes the Court the sole authority on how the rights and freedoms it protects are to be understood. Interpretation of the provisions of the Convention lies at the basis of the Court's activity as the immovable rock that stands guard over the protection of human rights, the point at which the State is directly answerable to the individual.

  15. Qualification of the AUTOBUS Mod. 2 Code

    International Nuclear Information System (INIS)

    Ciarniello, U.; Peroni, P.

    1988-01-01

    The paper presents the qualification of the AUTOBUS Mod. 2 code. After a brief description of the code itself, all the critical experiments simulated by the code are presented to demonstrate the accuracy of the criticality and power-distribution calculations. An interpretation of the results and a conclusion close the presentation.

  16. 2-D skin-current toroidal-MHD-equilibrium code

    International Nuclear Information System (INIS)

    Feinberg, B.; Niland, R.A.; Coonrod, J.; Levine, M.A.

    1982-09-01

    A two-dimensional, toroidal, ideal MHD skin-current equilibrium computer code is described. The code is suitable for interactive implementation on a minicomputer. Some examples of the use of the code for the design and interpretation of toroidal cusp experiments are presented.

  17. Distribution of absorbed dose in human eye simulated by SRNA-2KG computer code

    International Nuclear Information System (INIS)

    Ilic, R.; Pesic, M.; Pavlovic, R.; Mostacci, D.

    2003-01-01

    The rapidly increasing performance of personal computers and the development of codes for proton transport based on Monte Carlo methods will soon allow computer-planned proton therapy to be introduced as a normal activity in regular hospital procedures. A description of the SRNA code used for such applications and results of calculated distributions of proton absorbed dose in the human eye are given in this paper. (author)

  18. Codon usage and expression level of human mitochondrial 13 protein coding genes across six continents.

    Science.gov (United States)

    Chakraborty, Supriyo; Uddin, Arif; Mazumder, Tarikul Huda; Choudhury, Monisha Nath; Malakar, Arup Kumar; Paul, Prosenjit; Halder, Binata; Deka, Himangshu; Mazumder, Gulshana Akthar; Barbhuiya, Riazul Ahmed; Barbhuiya, Masuk Ahmed; Devi, Warepam Jesmi

    2017-12-02

    The study of codon usage coupled with phylogenetic analysis is an important tool for understanding the genetic and evolutionary relationships of a gene. The 13 protein-coding genes of human mitochondria are involved in the electron transport chain for the generation of energy currency (ATP). However, no work has yet been reported on the codon usage of the mitochondrial protein-coding genes across six continents. To understand the patterns of codon usage in mitochondrial genes across six different continents, we used bioinformatic methods to analyze the protein-coding genes. The codon usage bias was low, as revealed by the high ENC values. Correlation between codon usage and GC3 suggested that all the codons ending in G/C were positively correlated with GC3, and vice versa for A/T-ending codons, with the exception of the ND4L and ND5 genes. A neutrality plot revealed that for the genes ATP6, COI, COIII, CYB, ND4 and ND4L, natural selection might have played a major role, while mutation pressure might have played a dominant role in the codon usage bias of the ATP8, COII, ND1, ND2, ND3, ND5 and ND6 genes. Phylogenetic analysis indicated that the evolutionary relationships in each of the 13 protein-coding genes of human mitochondria were different across the six continents, and further suggested that geographical distance was an important factor in the origin and evolution of the 13 protein-coding genes of human mitochondria. Copyright © 2017 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
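
    Of the quantities used above, GC3 is the easiest to make concrete: the GC content at third codon positions of an in-frame coding sequence. A minimal sketch, with a toy sequence rather than a real mitochondrial gene:

        from collections import Counter

        def codons(cds):
            cds = cds.upper()
            return [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]

        def gc3(cds):
            """GC content at third codon positions of an in-frame CDS."""
            thirds = [c[2] for c in codons(cds)]
            return sum(b in "GC" for b in thirds) / len(thirds)

        seq = "ATGGCCGCTAAGCTGTAA"      # toy sequence, not a real gene
        print(Counter(codons(seq)))     # raw codon usage counts
        print(f"GC3 = {gc3(seq):.2f}")  # -> GC3 = 0.67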

  19. Identification of evolutionarily conserved non-AUG-initiated N-terminal extensions in human coding sequences.

    LENUS (Irish Health Repository)

    Ivanov, Ivaylo P

    2011-05-01

    In eukaryotes, it is generally assumed that translation initiation occurs at the AUG codon closest to the messenger RNA 5' cap. However, in certain cases, initiation can occur at codons differing from AUG by a single nucleotide, especially the codons CUG, UUG, GUG, ACG, AUA and AUU. While non-AUG initiation has been experimentally verified for a handful of human genes, the full extent to which this phenomenon is utilized--both for increased coding capacity and potentially also for novel regulatory mechanisms--remains unclear. To address this issue, and hence to improve the quality of existing coding sequence annotations, we developed a methodology based on phylogenetic analysis of predicted 5' untranslated regions from orthologous genes. We use evolutionary signatures of protein-coding sequences as an indicator of translation initiation upstream of annotated coding sequences. Our search identified novel conserved potential non-AUG-initiated N-terminal extensions in 42 human genes including VANGL2, FGFR1, KCNN4, TRPV6, HDGF, CITED2, EIF4G3 and NTF3, and also affirmed the conservation of known non-AUG-initiated extensions in 17 other genes. In several instances, we have been able to obtain independent experimental evidence of the expression of non-AUG-initiated products from the previously published literature and ribosome profiling data.

  20. Long Non-Coding RNAs Associated with Metabolic Traits in Human White Adipose Tissue

    Directory of Open Access Journals (Sweden)

    Hui Gao

    2018-04-01

    Full Text Available Long non-coding RNAs (lncRNAs) belong to a recently discovered class of molecules proposed to regulate various cellular processes. Here, we systematically analyzed their expression in human subcutaneous white adipose tissue (WAT) and found that a limited set was differentially expressed in obesity and/or the insulin resistant state. Two lncRNAs, herein termed adipocyte-specific metabolic related lncRNAs ASMER-1 and ASMER-2, were enriched in adipocytes and regulated by both obesity and insulin resistance. Knockdown of either ASMER-1 or ASMER-2 by antisense oligonucleotides in in vitro differentiated human adipocytes revealed that both genes regulated adipogenesis, lipid mobilization and adiponectin secretion. The observed effects could be attributed to crosstalk between ASMERs and genes within the master regulatory pathways for adipocyte function, including PPARG and INSR. Altogether, our data demonstrate that lncRNAs are modulators of the metabolic and secretory functions in human fat cells and provide an emerging link between WAT and common metabolic conditions. Keywords: White adipose tissue, Adipocytes, Long non-coding RNAs, Metabolic traits, Lipolysis, Adiponectin

  1. Non-coding RNA sequence variations in human chronic lymphocytic leukemia and colorectal cancer.

    Science.gov (United States)

    Wojcik, Sylwia E; Rossi, Simona; Shimizu, Masayoshi; Nicoloso, Milena S; Cimmino, Amelia; Alder, Hansjuerg; Herlea, Vlad; Rassenti, Laura Z; Rai, Kanti R; Kipps, Thomas J; Keating, Michael J; Croce, Carlo M; Calin, George A

    2010-02-01

    Cancer is a genetic disease in which the interplay between alterations in protein-coding genes and non-coding RNAs (ncRNAs) plays a fundamental role. In recent years, the full coding component of the human genome was sequenced in various cancers, whereas such attempts related to ncRNAs are still fragmentary. We screened genomic DNAs for sequence variations in 148 microRNAs (miRNAs) and ultraconserved regions (UCRs) loci in patients with chronic lymphocytic leukemia (CLL) or colorectal cancer (CRC) by Sanger technique and further tried to elucidate the functional consequences of some of these variations. We found sequence variations in miRNAs in both sporadic and familial CLL cases, mutations of UCRs in CLLs and CRCs and, in certain instances, detected functional effects of these variations. Furthermore, by integrating our data with previously published data on miRNA sequence variations, we have created a catalog of DNA sequence variations in miRNAs/ultraconserved genes in human cancers. These findings argue that ncRNAs are targeted by both germ line and somatic mutations as well as by single-nucleotide polymorphisms with functional significance for human tumorigenesis. Sequence variations in ncRNA loci are frequent and some have functional and biological significance. Such information can be exploited to further investigate on a genome-wide scale the frequency of genetic variations in ncRNAs and their functional meaning, as well as for the development of new diagnostic and prognostic markers for leukemias and carcinomas.

  2. The origins and evolutionary history of human non-coding RNA regulatory networks.

    Science.gov (United States)

    Sherafatian, Masih; Mowla, Seyed Javad

    2017-04-01

    The evolutionary history and origin of the regulatory function of animal non-coding RNAs are not well understood. The lack of conservation of long non-coding RNAs and the small sizes of microRNAs have been major obstacles in their phylogenetic analysis. In this study, we tried to shed more light on the evolution of ncRNA regulatory networks by changing our phylogenetic strategy to focus on the evolutionary pattern of their protein-coding targets. We used available target databases of miRNAs and lncRNAs to find their protein-coding targets in human. We were able to recognize evolutionary hallmarks of ncRNA targets by phylostratigraphic analysis. We found the conventional 3'-UTR and lesser-known 5'-UTR targets of miRNAs to be enriched at three consecutive phylostrata. Firstly, in the eukaryota phylostratum, corresponding to the emergence of miRNAs, our study revealed that miRNA targets function primarily in cell cycle processes. Moreover, the same overrepresentation of the targets observed in the next two consecutive phylostrata, opisthokonta and eumetazoa, corresponded to the expansion periods of miRNAs in animal evolution. Coding sequence targets of miRNAs showed a delayed rise at the opisthokonta phylostratum, compared to the 3' and 5' UTR targets of miRNAs. The lncRNA regulatory network was the latest to evolve, at the eumetazoa phylostratum.

  3. Interpreting locomotor biomechanics from the morphology of human footprints.

    Science.gov (United States)

    Hatala, Kevin G; Wunderlich, Roshna E; Dingwall, Heather L; Richmond, Brian G

    2016-01-01

    Fossil hominin footprints offer unique direct windows to the locomotor behaviors of our ancestors. These data could allow a clearer understanding of the evolution of human locomotion by circumventing issues associated with indirect interpretations of habitual locomotor patterns from fossil skeletal material. However, before we can use fossil hominin footprints to understand better the evolution of human locomotion, we must first develop an understanding of how locomotor biomechanics are preserved in, and can be inferred from, footprint morphologies. In this experimental study, 41 habitually barefoot modern humans created footprints under controlled conditions in which variables related to locomotor biomechanics could be quantified. Measurements of regional topography (depth) were taken from 3D models of those footprints, and principal components analysis was used to identify orthogonal axes that described the largest proportions of topographic variance within the human experimental sample. Linear mixed effects models were used to quantify the influences of biomechanical variables on the first five principal axes of footprint topographic variation, thus providing new information on the biomechanical variables most evidently expressed in the morphology of human footprints. The footprint's overall depth was considered as a confounding variable, since biomechanics may be linked to the extent to which a substrate deforms. Three of five axes showed statistically significant relationships with variables related to both locomotor biomechanics and substrate displacement; one axis was influenced only by biomechanics and another only by the overall depth of the footprint. Principal axes of footprint morphological variation were significantly related to gait type (walking or running), kinematics of the hip and ankle joints and the distribution of pressure beneath the foot. These results provide the first quantitative framework for developing hypotheses regarding the

  4. Comprehensive Reconstruction and Visualization of Non-Coding Regulatory Networks in Human

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has been powered to understand the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them a great potential as non-invasive biomarkers. However, non-coding RNAs have been relatively recently discovered and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNAs interactions are important steps to understand their regulatory mechanism in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNAs data interactions from a large number of well established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape. PMID:25540777

  5. Irradiations of human melanoma cells by 14 MeV neutrons; survival curves interpretation; physical simulation of neutron interactions in the cellular medium

    International Nuclear Information System (INIS)

    Bodez, Veronique

    2000-01-01

    14 MeV neutrons are used to irradiate human melanoma cells in order to study survival curves at low dose and low dose rate. We have simulated, with the MCNP code, the transport of neutrons through the experimental setup to evaluate the contamination of the primary beam by gammas and electrons, to establish the feasibility of our experiments. We have shown a rapid decrease of the survival curve in the first cGy, followed by a plateau for doses up to 30 cGy; after that we observed an exponential decrease. These results are observed for the first time for neutrons at a low dose rate (5 cGy/h). In parallel with this experimental work, we have developed a simulation code which permits the study of neutron interactions with the cellular medium for individual cells defined as in our experimental conditions. We show that most of the energy is deposited by protons from neutron interactions with the external medium, and by heavy ions for interactions within the cell. The code also gives a good order of magnitude for the dose rate, compared to the experimental values given by silicon diodes. The first results show that, using a theory based on induced repair of cells, we can give an interpretation of the observed experimental plateau. We can give an estimation of the radial distribution of dose for the tracks of charged ions, and we show the possibility of calculating interaction cross sections with cellular organelles. Such work offers interesting perspectives for the future in radiobiology, radiotherapy and radioprotection. (author)

  6. Some Algebraic Aspects of Morse Code Sequences

    OpenAIRE

    Johann Cigler

    2003-01-01

    Morse code sequences are very useful to give combinatorial interpretations of various properties of Fibonacci numbers. In this note we study some algebraic and combinatorial aspects of Morse code sequences and obtain several q-analogues of Fibonacci numbers and Fibonacci polynomials and their generalizations.
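
    The combinatorial fact underlying the note is that Morse code sequences of total length n, with dots of length 1 and dashes of length 2, satisfy m(n) = m(n-1) + m(n-2) and are therefore counted by Fibonacci numbers. A short verification:

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def morse(n):
            """Number of Morse code sequences of total length n, where a dot
            occupies 1 unit and a dash 2 units: m(n) = m(n-1) + m(n-2)."""
            if n < 0:
                return 0
            if n in (0, 1):
                return 1
            return morse(n - 1) + morse(n - 2)

        print([morse(n) for n in range(10)])
        # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55] -- the Fibonacci numbers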

  7. An integrative approach to predicting the functional effects of small indels in non-coding regions of the human genome.

    Science.gov (United States)

    Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2017-10-06

    Small insertions and deletions (indels) have a significant influence on human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state-of-the-art models for assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact of, and prioritise, small indels throughout the whole non-coding genome.
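
    The integrative idea, combining many annotation-derived features into a supervised pathogenic-versus-neutral classifier, can be sketched generically with scikit-learn. The random stand-in features below only illustrate the shape of the approach; they are not FATHMM-indel's actual features or model.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Stand-in features: one row per indel, one column per annotation-
        # derived score (e.g. conservation, chromatin state). Random data --
        # only the shape of the approach is real, not the features.
        X = rng.normal(size=(500, 10))
        y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0

        clf = GradientBoostingClassifier()
        print(cross_val_score(clf, X, y, cv=5).mean())  # pathogenic vs neutral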

  8. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    Science.gov (United States)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    simulations. The other is that the level of electric field fluctuations scales as 1/Λ_PIC ∝ p. We provide a corresponding exact expression, taking into account the finite superparticle size. We confirm both expectations with simulations. Fourth, we compare the Vlasov-Maxwell theory, often used for code benchmarking, to the PIC model. The former describes a phase-space fluid with Λ = +∞ and no correlations, while the PIC plasma features a small Λ and a high level of correlations when compared to a real plasma. These differences have to be kept in mind when interpreting and validating PIC results against the Vlasov-Maxwell theory and when modeling real physical plasmas.

  9. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  10. Radioactive action code

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    A new coding system, 'Hazrad', for buildings and transportation containers, alerting emergency services personnel to the presence of radioactive materials, has been developed in the United Kingdom. The hazards of materials in the buildings or transport container, together with the recommended emergency action, are represented by a number of codes which are marked on the building or container and interpreted from a chart carried as a pocket-size guide. Buildings would be marked with the familiar yellow 'radioactive' trefoil, the written information 'Radioactive materials' and a list of isotopes. Under this the 'Hazrad' code would be written - three symbols to denote the relative radioactive risk (low, medium or high), the biological risk (also low, medium or high) and the third showing the type of radiation emitted: alpha, beta or gamma. The response cards indicate appropriate measures to take, e.g. for a high biological risk, Bio3, the wearing of a gas-tight protection suit is advised. The code and its uses are explained. (U.K.)
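
    A marking scheme of this kind reduces to a table lookup. The decoder below is a hypothetical sketch: the symbol assignments are invented for illustration and are not the official 'Hazrad' tables.

        # Hypothetical symbol tables, not the official 'Hazrad' assignments.
        RADIOACTIVE_RISK = {"1": "low", "2": "medium", "3": "high"}
        BIOLOGICAL_RISK = {"1": "low", "2": "medium", "3": "high"}
        EMISSION_TYPE = {"A": "alpha", "B": "beta", "G": "gamma"}

        def decode_hazrad(code):
            """Decode a three-symbol marking such as '23G'."""
            r, b, e = code
            return ("radioactive risk: %s, biological risk: %s, emission: %s"
                    % (RADIOACTIVE_RISK[r], BIOLOGICAL_RISK[b], EMISSION_TYPE[e]))

        print(decode_hazrad("23G"))
        # radioactive risk: medium, biological risk: high, emission: gamma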

  13. Pervasive hitchhiking at coding and regulatory sites in humans.

    Directory of Open Access Journals (Sweden)

    James J Cai

    2009-01-01

    Full Text Available Much effort and interest have focused on assessing the importance of natural selection, particularly positive natural selection, in shaping the human genome. Although scans for positive selection have identified candidate loci that may be associated with positive selection in humans, such scans do not indicate whether adaptation is frequent in general in humans. Studies based on the reasoning of the McDonald-Kreitman test, which, in principle, can be used to evaluate the extent of positive selection, suggested that adaptation is detectable in the human genome but that it is less common than in Drosophila or Escherichia coli. Both positive and purifying natural selection at functional sites should affect levels and patterns of polymorphism at linked nonfunctional sites. Here, we search for these effects by analyzing patterns of neutral polymorphism in humans in relation to the rates of recombination, functional density, and functional divergence with chimpanzees. We find that the levels of neutral polymorphism are lower in the regions of lower recombination and in the regions of higher functional density or divergence. These correlations persist after controlling for the variation in GC content, density of simple repeats, selective constraint, mutation rate, and depth of sequencing coverage. We argue that these results are most plausibly explained by the effects of natural selection at functional sites (either recurrent selective sweeps or background selection) on the levels of linked neutral polymorphism. Natural selection at both coding and regulatory sites appears to affect linked neutral polymorphism, reducing neutral polymorphism by 6% genome-wide and by 11% in the gene-rich half of the human genome. These findings suggest that the effects of natural selection at linked sites cannot be ignored in the study of neutral human polymorphism.

  14. Natural selection on protein-coding genes in the human genome

    DEFF Research Database (Denmark)

    Bustamente, Carlos D.; Fledel-Alon, Adi; Williamson, Scott

    2005-01-01

    Comparisons of DNA polymorphism within species to divergence between species enables the discovery of molecular adaptation in evolutionarily constrained genes as well as the differentiation of weak from strong purifying selection [1-4]. The extent to which weak negative and positive darwinian selection have driven the molecular evolution of different species varies greatly [5-16], with some species, such as Drosophila melanogaster, showing strong evidence of pervasive positive selection [6-9], and others, such as the selfing weed Arabidopsis thaliana, showing an excess of deleterious variation within local populations [9, 10]. Here we contrast patterns of coding sequence polymorphism identified by direct sequencing of 39 humans for over 11,000 genes to divergence between humans and chimpanzees, and find strong evidence that natural selection has shaped...

  15. LncRNAWiki: harnessing community knowledge in collaborative curation of human long non-coding RNAs

    KAUST Repository

    Ma, L.; Li, A.; Zou, D.; Xu, X.; Xia, L.; Yu, J.; Bajic, Vladimir B.; Zhang, Z.

    2014-01-01

    Long non-coding RNAs (lncRNAs) perform a diversity of functions in numerous important biological processes and are implicated in many human diseases. In this report we present lncRNAWiki (http://lncrna.big.ac.cn), a wiki-based platform that is open ...

  16. User's manual for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the User's Manual for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to make it easier to interpret moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, also can be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code then will interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data
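
    A minimal sketch of that kind of inversion, assuming a gridded benchmark library and two detector types (all numbers invented; this is not the TMAD implementation):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        rng = np.random.default_rng(0)
        sizes = np.linspace(0.0, 10.0, 11)      # anomaly size (arbitrary units)
        moistures = np.linspace(0.0, 0.4, 9)    # moisture content (weight fraction)
        lib_det1 = rng.random((11, 9))          # stand-ins for benchmark responses
        lib_det2 = rng.random((11, 9))          # of two detector types

        f1 = RegularGridInterpolator((sizes, moistures), lib_det1)
        f2 = RegularGridInterpolator((sizes, moistures), lib_det2)

        def best_match(measured1, measured2, n=201):
            """Grid-search the (size, moisture) pair whose interpolated
            responses best match the two measured readings (least squares)."""
            s = np.linspace(sizes[0], sizes[-1], n)
            m = np.linspace(moistures[0], moistures[-1], n)
            S, M = np.meshgrid(s, m, indexing="ij")
            pts = np.column_stack([S.ravel(), M.ravel()])
            err = (f1(pts) - measured1) ** 2 + (f2(pts) - measured2) ** 2
            return pts[np.argmin(err)]

        print(best_match(0.4, 0.7))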

  17. REQUIREMENTS FOR A GENERAL INTERPRETATION THEORY

    Directory of Open Access Journals (Sweden)

    Anda Laura Lungu Petruescu

    2013-06-01

    Full Text Available Time has proved that economic analysis alone cannot meet all the needs of the economic field. The present study proposes a new method for approaching economic phenomena and processes, based on research made outside the economic space: a new general interpretation theory centered on the human being as the basic actor of the economy. A general interpretation theory must provide the interpretation of causalities among economic phenomena and processes (causal interpretation); the interpretation of correlations and dependencies among indicators (normative interpretation); the interpretation of social and communicational processes in economic organizations (social and communicational interpretation); the interpretation of the community status of companies (transsocial interpretation); the interpretation of the purposes of human activities and their coherency (teleological interpretation); and the interpretation of equilibrium/disequilibrium inside economic systems (optimality interpretation). To respond to such demands, rigor, pragmatism, praxiology and contextual connectors are required. In order to progress, economic science must improve its language, both its syntax and its semantics. Clarity of exposure requires clarity of language, and the progress of scientific theory calls for hypotheses in the building of theories. The switch from common language to symbolic language means the switch from ambiguity to rigor and rationality, that is, order in thinking. But order implies structure, which implies formalization. Our paper is a plea for these requirements, which should be fulfilled by a modern interpretation theory.

  18. Advanced human machine interaction for an image interpretation workstation

    Science.gov (United States)

    Maier, S.; Martin, M.; van de Camp, F.; Peinsipp-Byma, E.; Beyerer, J.

    2016-05-01

    In recent years, many new interaction technologies have been developed that enhance the usability of computer systems and allow for novel types of interaction. The areas of application for these technologies have mostly been gaming and entertainment. However, in professional environments there are especially demanding tasks that would greatly benefit from improved human-machine interfaces as well as an overall improved user experience. We therefore envisioned and built an image-interpretation workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a complex software product, such as a geo-information system to provide geographic context, an image annotation tool, software to generate standardized reports, and a tool to aid in the identification of objects. Using self-developed systems for hand tracking, pointing gestures and head pose estimation, in addition to touchscreens, face identification and speech recognition, we created a novel approach to this complex task. For example, head pose information is used to save the position of the mouse cursor on the currently focused screen and to restore it as soon as the same screen is focused again, while hand gestures allow for intuitive manipulation of 3D objects in mid-air. While the primary focus is on the task of image interpretation, all of the technologies involved provide generic ways of efficiently interacting with a multi-screen setup and could be utilized in other fields as well. In preliminary experiments we received promising feedback from users in the military and have started to tailor the functionality to their needs.

  19. Synthesis of clad motion experiments interpretation: codes and validation

    International Nuclear Information System (INIS)

    Papin, J.; Fortunato, M.; Seiler, J.M.

    1983-04-01

    This communication deals with clad melting and relocation phenomena related to LMFBR safety analysis of loss-of-flow accidents. We present: (i) the physical models developed at DSN/CEN Cadarache in single-channel and bundle geometry; (ii) the interpretation, with these models, of experiments performed by the STT (CEN Grenoble). It turns out that we have now obtained a good understanding of the phenomena involved in single-channel geometry. On the other hand, further studies are necessary for a better knowledge of clad motion phenomena in bundle cases under conditions close to reactor ones.

  20. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    … a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other is a mix of computer programming syntax and human language. In this sense queer code can be understood as both an object and subject of study that intervenes in the world's 'becoming' and in how material bodies are produced via human and nonhuman practices. Through mixing natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  1. Quantum computation with Turaev-Viro codes

    International Nuclear Information System (INIS)

    Koenig, Robert; Kuperberg, Greg; Reichardt, Ben W.

    2010-01-01

    For a 3-manifold with triangulated boundary, the Turaev-Viro topological invariant can be interpreted as a quantum error-correcting code. The code has local stabilizers, identified by Levin and Wen, on a qudit lattice. Kitaev's toric code arises as a special case. The toric code corresponds to an abelian anyon model, and therefore requires out-of-code operations to obtain universal quantum computation. In contrast, for many categories, such as the Fibonacci category, the Turaev-Viro code realizes a non-abelian anyon model. A universal set of fault-tolerant operations can be implemented by deforming the code with local gates, in order to implement anyon braiding. We identify the anyons in the code space, and present schemes for initialization, computation and measurement. This provides a family of constructions for fault-tolerant quantum computation that are closely related to topological quantum computation, but for which the fault tolerance is implemented in software rather than coming from a physical medium.
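
    For orientation, the abelian special case mentioned above can be written down compactly. In Kitaev's toric code on a square lattice with one qubit per edge, the stabilizers are the vertex and plaquette operators (standard material, independent of the Turaev-Viro construction):

        A_v = \prod_{e \in \mathrm{star}(v)} X_e, \qquad B_p = \prod_{e \in \partial p} Z_e,

    and the code space is their joint +1 eigenspace,

        \mathcal{C} = \{\, |\psi\rangle : A_v|\psi\rangle = B_p|\psi\rangle = |\psi\rangle \ \forall v, p \,\}.

    Because all A_v and B_p commute, errors can be diagnosed by measuring the stabilizers without disturbing the encoded information; the non-abelian Turaev-Viro codes generalize this structure.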

  2. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have in recent years begun to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
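
    A compressed sketch of the three-step pipeline follows. It simplifies the paper's design, which learns one dictionary per activity, to a single global ICA dictionary, and it uses random arrays in place of real space-time volumes of joint movements; all shapes and parameters are invented.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 90))    # 200 flattened volumes (placeholder)
        y_train = rng.integers(0, 3, size=200)  # 3 activity classes (placeholder)

        # Step 1: learn a dictionary of sparse codes via ICA.
        ica = FastICA(n_components=20, random_state=0).fit(X_train)

        # Step 2: project volumes onto the dictionary and histogram the
        # projection coefficients to obtain one feature vector per volume.
        def histogram_features(X, bins=10):
            coeffs = ica.transform(X)
            return np.stack([np.histogram(c, bins=bins, range=(-1.0, 1.0))[0]
                             for c in coeffs])

        # Step 3: classify the sparse histograms with a support vector machine.
        clf = SVC(kernel="rbf").fit(histogram_features(X_train), y_train)
        print(clf.predict(histogram_features(X_train[:5])))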

  3. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Directory of Open Access Journals (Sweden)

    Jin Qi

    Full Text Available Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have in recent years begun to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  4. Code-Mixing and Code Switchingin The Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors influencing those forms of code switching and code mixing. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School, Ponorogo. The analysis shows that code mixing and code switching in learning activities at Al Mawaddah Boarding School occur between Javanese, Arabic, English and Indonesian, in the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, sourcing from the original language and its variations, and sourcing from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah Boarding School, regarding the rules and characteristics of language variation in teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  5. Epigenomic annotation-based interpretation of genomic data: from enrichment analysis to machine learning.

    Science.gov (United States)

    Dozmorov, Mikhail G

    2017-10-15

    One of the goals of functional genomics is to understand the regulatory implications of experimentally obtained genomic regions of interest (ROIs). Most sequencing technologies now generate ROIs distributed across the whole genome. The interpretation of these genome-wide ROIs represents a challenge as the majority of them lie outside of functionally well-defined protein coding regions. Recent efforts by the members of the International Human Epigenome Consortium have generated volumes of functional/regulatory data (reference epigenomic datasets), effectively annotating the genome with epigenomic properties. Consequently, a wide variety of computational tools has been developed utilizing these epigenomic datasets for the interpretation of genomic data. The purpose of this review is to provide a structured overview of practical solutions for the interpretation of ROIs with the help of epigenomic data. Starting with epigenomic enrichment analysis, we discuss leading tools and machine learning methods utilizing epigenomic and 3D genome structure data. The hierarchy of tools and methods reviewed here presents a practical guide for the interpretation of genome-wide ROIs within an epigenomic context.
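
    At its simplest, an epigenomic enrichment analysis is a contingency-table test: how often do the ROIs overlap a given annotation compared with background regions? The counts below are invented; real tools add careful background matching and multiple-testing correction.

        from scipy.stats import fisher_exact

        roi_overlap, roi_total = 120, 500    # ROIs overlapping the annotation
        bg_overlap, bg_total = 800, 10000    # background regions overlapping

        table = [[roi_overlap, roi_total - roi_overlap],
                 [bg_overlap, bg_total - bg_overlap]]
        odds_ratio, p_value = fisher_exact(table, alternative="greater")
        print("odds ratio %.2f, p = %.2e" % (odds_ratio, p_value))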

  6. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale-model experiments, and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code.

  7. A Theoretical Foundation for Tilden's Interpretive Principles.

    Science.gov (United States)

    Hammitt, William E.

    1981-01-01

    Draws from perceptual and cognitive psychology to present a theoretical basis for the principles of interpretation developed by Freeman Tilden. Emphasized is cognitive map theory which holds that information units people receive, code and store are structured into cognitive models intended to represent the environment. (Author/WB)

  8. Consensus coding sequence (CCDS) database: a standardized set of human and mouse protein-coding regions supported by expert curation.

    Science.gov (United States)

    Pujar, Shashikant; O'Leary, Nuala A; Farrell, Catherine M; Loveland, Jane E; Mudge, Jonathan M; Wallin, Craig; Girón, Carlos G; Diekhans, Mark; Barnes, If; Bennett, Ruth; Berry, Andrew E; Cox, Eric; Davidson, Claire; Goldfarb, Tamara; Gonzalez, Jose M; Hunt, Toby; Jackson, John; Joardar, Vinita; Kay, Mike P; Kodali, Vamsi K; Martin, Fergal J; McAndrews, Monica; McGarvey, Kelly M; Murphy, Michael; Rajput, Bhanu; Rangwala, Sanjida H; Riddick, Lillian D; Seal, Ruth L; Suner, Marie-Marthe; Webb, David; Zhu, Sophia; Aken, Bronwen L; Bruford, Elspeth A; Bult, Carol J; Frankish, Adam; Murphy, Terence; Pruitt, Kim D

    2018-01-04

    The Consensus Coding Sequence (CCDS) project provides a dataset of protein-coding regions that are identically annotated on the human and mouse reference genome assembly in genome annotations produced independently by NCBI and the Ensembl group at EMBL-EBI. This dataset is the product of an international collaboration that includes NCBI, Ensembl, HUGO Gene Nomenclature Committee, Mouse Genome Informatics and University of California, Santa Cruz. Identically annotated coding regions, which are generated using an automated pipeline and pass multiple quality assurance checks, are assigned a stable and tracked identifier (CCDS ID). Additionally, coordinated manual review by expert curators from the CCDS collaboration helps in maintaining the integrity and high quality of the dataset. The CCDS data are available through an interactive web page (https://www.ncbi.nlm.nih.gov/CCDS/CcdsBrowse.cgi) and an FTP site (ftp://ftp.ncbi.nlm.nih.gov/pub/CCDS/). In this paper, we outline the ongoing work, growth and stability of the CCDS dataset and provide updates on new collaboration members and new features added to the CCDS user interface. We also present expert curation scenarios, with specific examples highlighting the importance of an accurate reference genome assembly and the crucial role played by input from the research community.

  9. Global Intersection of Long Non-Coding RNAs with Processed and Unprocessed Pseudogenes in the Human Genome

    Directory of Open Access Journals (Sweden)

    Michael John Milligan

    2016-03-01

    Full Text Available Pseudogenes are abundant in the human genome and had long been thought of purely as nonfunctional gene fossils. Recent observations point to a role for pseudogenes in regulating genes transcriptionally and post-transcriptionally in human cells. To computationally interrogate the network space of integrated pseudogene and long non-coding RNA regulation in the human transcriptome, we developed and implemented an algorithm to identify all long non-coding RNA (lncRNA) transcripts that overlap the genomic spans, and specifically the exons, of any human pseudogenes in either sense or antisense orientation. As inputs to our algorithm, we imported three public repositories of pseudogenes: GENCODE v17 (processed and unprocessed, Ensembl 72), Retroposed Pseudogenes V5 (processed only), and Yale Pseudo60 (processed and unprocessed, Ensembl 60); two public lncRNA catalogs: Broad Institute and GENCODE v17; NCBI annotated piRNAs; and NHGRI clinical variants. The data sets were retrieved from the UCSC Genome Database using the UCSC Table Browser. We identified 2277 loci containing exon-to-exon overlaps between pseudogenes, both processed and unprocessed, and long non-coding RNA genes. Of these loci we identified 1167 with GenBank EST and full-length cDNA support, providing direct evidence of transcription on one or both strands with exon-to-exon overlaps. The analysis converged on 313 pseudogene-lncRNA exon-to-exon overlaps that were bidirectionally supported by both full-length cDNAs and ESTs. In the process of identifying transcribed pseudogenes, we generated a comprehensive, positionally non-redundant encyclopedia of human pseudogenes, drawing upon multiple, and formerly disparate, public pseudogene repositories. Collectively, these observations suggest that pseudogenes are pervasively transcribed on both strands and are common drivers of gene regulation.
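
    At its core, the overlap step is interval intersection with strand bookkeeping. The sketch below is a naive illustration (half-open coordinates assumed, data invented), not the authors' algorithm; production tools index the intervals for speed.

        from collections import defaultdict

        def exon_overlaps(pseudogene_exons, lncrna_exons):
            """Each exon is (chrom, start, end, strand), half-open coordinates."""
            by_chrom = defaultdict(list)
            for ex in lncrna_exons:
                by_chrom[ex[0]].append(ex)
            hits = []
            for chrom, start, end, strand in pseudogene_exons:
                for c, s, e, st in by_chrom[chrom]:
                    if s < end and start < e:  # the intervals intersect
                        orientation = "sense" if st == strand else "antisense"
                        hits.append(((chrom, start, end), (c, s, e), orientation))
            return hits

        print(exon_overlaps([("chr1", 100, 200, "+")],
                            [("chr1", 150, 250, "-")]))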

  10. International filiation in the new Civil and Commercial Code of Argentina

    Directory of Open Access Journals (Sweden)

    Luciana BEATRIZ SCOTTI

    2015-07-01

    Full Text Available Argentine private international law has long omitted to legislate on international filiation, both as regards jurisdiction and the applicable law in matters of contesting paternity and of the establishment and recognition of parentage, and as regards the extraterritorial recognition of parental filiation. Indeed, given the particularities of this issue, responses from our discipline were slow to come. Argentina had no regulation of domestic source, and the treaty rules are scarce and inadequate today. In comparative law, and in a wide sector of national and foreign doctrine, the tendency is to adopt a connecting factor centred on the child: his or her domicile or habitual residence, with some nuances of interpretation and a potential accumulation of laws. On jurisdiction, we likewise note a clear orientation towards opening the available forums. In the present work we try to provide some guidelines for codifying this sensitive issue, which primarily involves the human rights of children, with special consideration of the provisions of the recently adopted Civil and Commercial Code of Argentina and of the current context, in which the techniques of assisted human reproduction claim a starring role, with serious and concrete effects on private international law.

  11. Theoretical interpretation of the D-COM blind problem using the URANUS code

    International Nuclear Information System (INIS)

    Lassmann, K.; Preusser, T.

    1984-01-01

    A description of the URANUS code, with emphasis on the sub-models of gas release, swelling and grain growth, is given. The recently developed steady-state and transient gas release model URGAS is briefly outlined. The URANUS code was used to analyse a blind problem on fission gas release which was defined within the framework of an IAEA-sponsored coordinated research programme for 'The Development of Computer Models for Fuel Element Behaviour in Water Reactors (D-COM)'. The blind predictions and the predictions made after 15 April 1983 are compared. Sensitivity studies are presented which include the results obtained with different gas release models as well as the effects of varying the input data in some selected cases. (author)

  12. Annotating pathogenic non-coding variants in genic regions.

    Science.gov (United States)

    Gelfman, Sahar; Wang, Quanli; McSweeney, K Melodi; Ren, Zhong; La Carpia, Francesca; Halvorsen, Matt; Schoch, Kelly; Ratzon, Fanni; Heinzen, Erin L; Boland, Michael J; Petrovski, Slavé; Goldstein, David B

    2017-08-09

    Identifying the underlying causes of disease requires accurate interpretation of genetic variants. Current methods ineffectively capture pathogenic non-coding variants in genic regions, resulting in overlooking synonymous and intronic variants when searching for disease risk. Here we present the Transcript-inferred Pathogenicity (TraP) score, which uses sequence context alterations to reliably identify non-coding variation that causes disease. High TraP scores single out extremely rare variants with lower minor allele frequencies than missense variants. TraP accurately distinguishes known pathogenic and benign variants in synonymous (AUC = 0.88) and intronic (AUC = 0.83) public datasets, dismissing benign variants with exceptionally high specificity. TraP analysis of 843 exomes from epilepsy family trios identifies synonymous variants in known epilepsy genes, thus pinpointing risk factors of disease from non-coding sequence data. TraP outperforms leading methods in identifying non-coding variants that are pathogenic and is therefore a valuable tool for use in gene discovery and the interpretation of personal genomes. While non-coding synonymous and intronic variants are often not under strong selective constraint, they can be pathogenic through affecting splicing or transcription. Here, the authors develop a score that uses sequence context alterations to predict pathogenicity of synonymous and non-coding genetic variants, and provide a web server of pre-computed scores.

  13. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term 'code package' is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a 'code package' consists of written material (reports, instructions, flow charts, listings of data, and other useful material) and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  14. Two human cDNA molecules coding for the Duchenne muscular dystrophy (DMD) locus are highly homologous

    Energy Technology Data Exchange (ETDEWEB)

    Rosenthal, A.; Speer, A.; Billwitz, H. (Zentralinstitut fuer Molekularbiologie, Berlin-Buch (Germany Democratic Republic)); Cross, G.S.; Forrest, S.M.; Davies, K.E. (Univ. of Oxford (England))

    1989-07-11

    Recently the complete sequence of the human fetal cDNA coding for the Duchenne muscular dystrophy (DMD) locus was reported, and a 3,685 amino acid long, rod-shaped cytoskeletal protein (dystrophin) was predicted as the protein product. Independently, the authors have isolated and sequenced different DMD cDNA molecules from human adult and fetal muscle. The complete 12.5 kb long sequence of all their cDNA clones has now been determined, and they report here the nucleotide (nt) and amino acid (aa) differences between the sequences of both groups. The cDNA sequence comprises the whole coding region but lacks the first 110 nt of the 5'-untranslated region and the last 1,417 nt of the 3'-untranslated region. They have found 11 nt differences (approximately 99.9% homology), of which 7 occur at the aa level.

  15. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
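
    A one-dimensional toy model illustrates the principle: the detector records the scene convolved with the mask's open/closed pattern, and correlating the recording with a decoding pattern concentrates each flaw back into a peak. The mask, scene and the use of the mask itself as the decoder are simplifications for illustration; practical systems use masks such as uniformly redundant arrays, whose autocorrelation is nearly a delta function.

        import numpy as np

        mask = np.array([1, 0, 1, 1, 0, 1, 0], dtype=float)  # open/closed holes
        scene = np.zeros(50)
        scene[[12, 30]] = 1.0        # two point-like "flaws" (invented positions)

        recorded = np.convolve(scene, mask, mode="same")     # detector signal
        decoded = np.correlate(recorded, mask, mode="same")  # crude decoding step

        # The decoded signal peaks near the true flaw positions (with sidelobes).
        print(sorted(np.argsort(decoded)[-2:]))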

  16. Hermeneutics in the interpretation in accordance of human rights in the Mexican legal order

    Directory of Open Access Journals (Sweden)

    Carlos E. Massé Narváez

    2014-05-01

    Full Text Available We first introduce the reader to the expression 'interpretation in accordance with'; we then conceptualize the notions to be used in analysing this new content in the field of human rights. Subsequently we deal with the legal materials and with interpretation in accordance grounded in hermeneutics, turning to the competition over the correct interpretation. Finally, the need for hermeneutic training is established, thereby concluding the paper.

  17. The prognostic potential and carcinogenesis of long non-coding RNA TUG1 in human cholangiocarcinoma

    OpenAIRE

    Xu, Yi; Leng, Kaiming; Li, Zhenglong; Zhang, Fumin; Zhong, Xiangyu; Kang, Pengcheng; Jiang, Xingming; Cui, Yunfu

    2017-01-01

    Cholangiocarcinoma (CCA) is a fatal disease with increasing worldwide incidence and is characterized by poor prognosis due to its poor response to conventional chemotherapy or radiotherapy. Long non-coding RNAs (lncRNAs) play key roles in multiple human cancers, including CCA. Cancer progression related lncRNA taurine-up-regulated gene 1 (TUG1) was reported to be involved in human carcinomas. However, the impact of TUG1 in CCA is unclear. The aim of this study was to explore the expression pa...

  18. User Effect on Code Application and Qualification Needs

    International Nuclear Information System (INIS)

    D'Auria, F.; Salah, A.B.

    2008-01-01

    Experience with some code assessment case studies, as well as additional ISPs, has shown the dominant effect of the code user on the predicted system behavior. The general findings of the user-effect investigations on some of the case studies indicate that, in addition to user effects proper, there are other reasons which affect the results of the calculations and are hidden under the general title of user effects: the specific characteristics of experimental facilities, i.e. limitations as far as code assessment is concerned; limitations of the thermal-hydraulic codes used to simulate certain system behavior or phenomena; and limitations due to interpretation of experimental data by the code user, i.e. interpretation of the experimental data base. On the basis of the discussions in this paper, the following conclusions and recommendations can be made: More dialogue appears to be necessary with the experimenters in the planning of code assessment calculations, e.g. ISPs. User guidelines are not complete for the codes, and a lack of sufficient and detailed user guidelines is observed in some of the case studies. More extensive user instruction and training, improved user guidelines, or quality assurance procedures may partially reduce some of the subjective user influence on the calculated results. The discrepancies between experimental data and code predictions are due both to the intrinsic limits of the code and to the so-called user effects; there is a real need to quantify the share of the disagreement due to poor utilization of the code and that due to the code itself. This need arises especially for uncertainty evaluation studies (e.g. [18]) which do not take the mentioned user effects into account. A focused investigation, based on the results of comparison calculations (e.g. ISPs), analyzing the experimental data and the results of the specific code in order to evaluate the user effects and the related experimental aspects, should be an integral part of the ...

  19. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    International Nuclear Information System (INIS)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-01-01

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE

  20. Hominoid-specific de novo protein-coding genes originating from long non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Chen Xie

    2012-09-01

    Full Text Available Tinkering with pre-existing genes has long been known as a major way to create new genes. Recently, however, motherless protein-coding genes have been found to have emerged de novo from ancestral non-coding DNAs. How these genes originated is not well addressed to date. Here we identified 24 hominoid-specific de novo protein-coding genes with precise origination timing in vertebrate phylogeny. Strand-specific RNA-Seq analyses were performed in five rhesus macaque tissues (liver, prefrontal cortex, skeletal muscle, adipose, and testis), which were then integrated with public transcriptome data from human, chimpanzee, and rhesus macaque. On the basis of comparing the RNA expression profiles in the three species, we found that most of the hominoid-specific de novo protein-coding genes encoded polyadenylated non-coding RNAs in rhesus macaque or chimpanzee with a similar transcript structure and correlated tissue expression profile. According to the rule of parsimony, the majority of these hominoid-specific de novo protein-coding genes appear to have acquired a regulated transcript structure and expression profile before acquiring coding potential. Interestingly, although the expression profile was largely correlated, the coding genes in human often showed higher transcriptional abundance than their non-coding counterparts in rhesus macaque. The major findings we report in this manuscript are robust and insensitive to the parameters used in the identification and analysis of de novo genes. Our results suggest that at least a portion of long non-coding RNAs, especially those with active and regulated transcription, may serve as a birth pool for protein-coding genes, which are then further optimized at the transcriptional level.

  1. Code on the safety of civilian nuclear fuel cycle installations

    International Nuclear Information System (INIS)

    1996-01-01

    The 'Code' was promulgated by the National Nuclear Safety Administration (NNSA) on June 17, 1993. It is applicable to civilian nuclear fuel fabrication, processing, storage and reprocessing installations, not including the safety requirements for the use of nuclear fuel in reactors. The contents of the 'Code' cover siting, design, construction, commissioning, operation and decommissioning of fuel cycle installations. The NNSA shall be responsible for the interpretation of this 'Code'.

  2. wKinMut-2: Identification and Interpretation of Pathogenic Variants in Human Protein Kinases

    DEFF Research Database (Denmark)

    Vazquez, Miguel; Pons, Tirso; Brunak, Søren

    2016-01-01

    … is often scattered across different sources, which makes integrative analysis complex and laborious. wKinMut-2 constitutes a solution to facilitate the interpretation of the consequences of human protein kinase variation. Nine methods predict their pathogenicity, including a kinase-specific random forest approach. To understand the biological mechanisms causative of human diseases and cancer, information from pertinent reference knowledgebases and the literature is automatically mined, digested and homogenized. Variants are visualized in their structural contexts, and residues affecting catalytic activity and drug binding are identified. Known protein-protein interactions are reported. Altogether, this information is intended to assist the generation of new working hypotheses to be corroborated with ulterior experimental work. The wKinMut-2 system, along with a user manual and examples, is freely accessible...

  3. The missing evaluation codes from order domain theory

    DEFF Research Database (Denmark)

    Andersen, Henning Ejnar; Geil, Olav

    The Feng-Rao bound gives a lower bound on the minimum distance of codes defined by means of their parity check matrices. From the Feng-Rao bound it is clear how to improve a large family of codes by leaving out certain rows in their parity check matrices. In this paper we derive a simple lower bound on the generalized Hamming weight. We interpret our methods in the setting of order domain theory. In this way we fill in an obvious gap in the theory of order domains. The improved codes from the present paper are not in general equal to the Feng-Rao improved codes, but the constructions are very much related.

  4. Analysis of 6,515 exomes reveals the recent origin of most human protein-coding variants.

    Science.gov (United States)

    Fu, Wenqing; O'Connor, Timothy D; Jun, Goo; Kang, Hyun Min; Abecasis, Goncalo; Leal, Suzanne M; Gabriel, Stacey; Rieder, Mark J; Altshuler, David; Shendure, Jay; Nickerson, Deborah A; Bamshad, Michael J; Akey, Joshua M

    2013-01-10

    Establishing the age of each mutation segregating in contemporary human populations is important to fully understand our evolutionary history and will help to facilitate the development of new approaches for disease-gene discovery. Large-scale surveys of human genetic variation have reported signatures of recent explosive population growth, notable for an excess of rare genetic variants, suggesting that many mutations arose recently. To more quantitatively assess the distribution of mutation ages, we resequenced 15,336 genes in 6,515 individuals of European American and African American ancestry and inferred the age of 1,146,401 autosomal single nucleotide variants (SNVs). We estimate that approximately 73% of all protein-coding SNVs and approximately 86% of SNVs predicted to be deleterious arose in the past 5,000-10,000 years. The average age of deleterious SNVs varied significantly across molecular pathways, and disease genes contained a significantly higher proportion of recently arisen deleterious SNVs than other genes. Furthermore, European Americans had an excess of deleterious variants in essential and Mendelian disease genes compared to African Americans, consistent with weaker purifying selection due to the Out-of-Africa dispersal. Our results better delimit the historical details of human protein-coding variation, show the profound effect of recent human history on the burden of deleterious SNVs segregating in contemporary populations, and provide important practical information that can be used to prioritize variants in disease-gene discovery.

  5. The personification of animals: coding of human and nonhuman body parts based on posture and function.

    Science.gov (United States)

    Welsh, Timothy N; McDougall, Laura; Paulson, Stephanie

    2014-09-01

    The purpose of the present research was to determine how humans represent the bodies and limbs of nonhuman mammals based on anatomical and functional properties. To this end, participants completed a series of body-part compatibility tasks in which they responded with a thumb or foot response to the color of a stimulus (red or blue, respectively) presented on different limbs of several animals. Across the studies, this compatibility task was conducted with images of human and nonhuman animals (bears, cows, and monkeys) in bipedal or quadrupedal postures. The results revealed that the coding of the limbs of nonhuman animals is strongly influenced by the posture of the body, but not the functional capacity of the limb. Specifically, body-part compatibility effects were present for both human and nonhuman animals when the figures were in a bipedal posture, but were not present when the animals were in a quadrupedal stance (Experiments 1a-c). Experiments 2a and 2b revealed that the posture-based body-part compatibility effects were not simply a vertical spatial compatibility effect or due to a mismatch between the posture of the body in the image and the participant. These data indicate that nonhuman animals in a bipedal posture are coded with respect to the "human" body representation, whereas nonhuman animals in a quadrupedal posture are not mapped to the human body representation. Overall, these studies provide new insight into the processes through which humans understand, mimic, and learn from the actions of nonhuman animals.

  6. Minimizing human error in radiopharmaceutical preparation and administration via a bar code-enhanced nuclear pharmacy management system.

    Science.gov (United States)

    Hakala, John L; Hung, Joseph C; Mosman, Elton A

    2012-09-01

    The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.

  7. Complete coding sequence of the human raf oncogene and the corresponding structure of the c-raf-1 gene

    Energy Technology Data Exchange (ETDEWEB)

    Bonner, T I; Oppermann, H; Seeburg, P; Kerby, S B; Gunnell, M A; Young, A C; Rapp, U R

    1986-01-24

    The complete 648 amino acid sequence of the human raf oncogene was deduced from the 2977 nucleotide sequence of a fetal liver cDNA. The cDNA has been used to obtain clones which extend the human c-raf-1 locus by an additional 18.9 kb at the 5' end and contain all the remaining coding exons.

  8. A high-resolution map of human evolutionary constraint using 29 mammals.

    Science.gov (United States)

    Lindblad-Toh, Kerstin; Garber, Manuel; Zuk, Or; Lin, Michael F; Parker, Brian J; Washietl, Stefan; Kheradpour, Pouya; Ernst, Jason; Jordan, Gregory; Mauceli, Evan; Ward, Lucas D; Lowe, Craig B; Holloway, Alisha K; Clamp, Michele; Gnerre, Sante; Alföldi, Jessica; Beal, Kathryn; Chang, Jean; Clawson, Hiram; Cuff, James; Di Palma, Federica; Fitzgerald, Stephen; Flicek, Paul; Guttman, Mitchell; Hubisz, Melissa J; Jaffe, David B; Jungreis, Irwin; Kent, W James; Kostka, Dennis; Lara, Marcia; Martins, Andre L; Massingham, Tim; Moltke, Ida; Raney, Brian J; Rasmussen, Matthew D; Robinson, Jim; Stark, Alexander; Vilella, Albert J; Wen, Jiayu; Xie, Xiaohui; Zody, Michael C; Baldwin, Jen; Bloom, Toby; Chin, Chee Whye; Heiman, Dave; Nicol, Robert; Nusbaum, Chad; Young, Sarah; Wilkinson, Jane; Worley, Kim C; Kovar, Christie L; Muzny, Donna M; Gibbs, Richard A; Cree, Andrew; Dihn, Huyen H; Fowler, Gerald; Jhangiani, Shalili; Joshi, Vandita; Lee, Sandra; Lewis, Lora R; Nazareth, Lynne V; Okwuonu, Geoffrey; Santibanez, Jireh; Warren, Wesley C; Mardis, Elaine R; Weinstock, George M; Wilson, Richard K; Delehaunty, Kim; Dooling, David; Fronik, Catrina; Fulton, Lucinda; Fulton, Bob; Graves, Tina; Minx, Patrick; Sodergren, Erica; Birney, Ewan; Margulies, Elliott H; Herrero, Javier; Green, Eric D; Haussler, David; Siepel, Adam; Goldman, Nick; Pollard, Katherine S; Pedersen, Jakob S; Lander, Eric S; Kellis, Manolis

    2011-10-12

    The comparison of related genomes has emerged as a powerful lens for genome interpretation. Here we report the sequencing and comparative analysis of 29 eutherian genomes. We confirm that at least 5.5% of the human genome has undergone purifying selection, and locate constrained elements covering ∼4.2% of the genome. We use evolutionary signatures and comparisons with experimental data sets to suggest candidate functions for ∼60% of constrained bases. These elements reveal a small number of new coding exons, candidate stop codon readthrough events and over 10,000 regions of overlapping synonymous constraint within protein-coding exons. We find 220 candidate RNA structural families, and nearly a million elements overlapping potential promoter, enhancer and insulator regions. We report specific amino acid residues that have undergone positive selection, 280,000 non-coding elements exapted from mobile elements and more than 1,000 primate- and human-accelerated elements. Overlap with disease-associated variants indicates that our findings will be relevant for studies of human biology, health and disease.

  9. Coding sequence of human rho cDNAs clone 6 and clone 9

    Energy Technology Data Exchange (ETDEWEB)

    Chardin, P; Madaule, P; Tavitian, A

    1988-03-25

    The authors have isolated human cDNAs including the complete coding sequence for two rho proteins corresponding to the incomplete isolates previously described as clone 6 and clone 9. The deduced a.a. sequences, when compared to the a.a. sequence deduced from clone 12 cDNA, show that there are in human at least three highly homologous rho genes. They suggest that clone 12 be named rhoA, clone 6 rhoB, and clone 9 rhoC. RhoA, B and C proteins display approx. 30% a.a. identity with ras proteins, mainly clustered in four highly homologous internal regions corresponding to the GTP binding site; however, at least one significant difference is found: the 3 rho proteins have an Alanine in the position corresponding to ras Glycine 13, suggesting that rho and ras proteins might have slightly different biochemical properties.

  10. Identification of Novel Long Non-coding and Circular RNAs in Human Papillomavirus-Mediated Cervical Cancer

    Directory of Open Access Journals (Sweden)

    Hongbo Wang

    2017-09-01

    Full Text Available Cervical cancer is the third most common cancer worldwide and the fourth leading cause of cancer-associated mortality in women. Accumulating evidence indicates that long non-coding RNAs (lncRNAs) and circular RNAs (circRNAs) may play key roles in the carcinogenesis of different cancers; however, little is known about the mechanisms of lncRNAs and circRNAs in the progression and metastasis of cervical cancer. In this study, we explored the expression profiles of lncRNAs, circRNAs, miRNAs, and mRNAs in HPV16 (human papillomavirus genotype 16)-mediated cervical squamous cell carcinoma and matched adjacent non-tumor (ATN) tissues from three patients with high-throughput RNA sequencing (RNA-seq). In total, we identified 19 lncRNAs, 99 circRNAs, 28 miRNAs, and 304 mRNAs that were commonly differentially expressed (DE) in different patients. Among the non-coding RNAs, 3 lncRNAs and 44 circRNAs are novel to our knowledge. Functional enrichment analysis showed that DE lncRNAs, miRNAs, and mRNAs were enriched in pathways crucial to cancer as well as other gene ontology (GO) terms. Furthermore, the co-expression network and function prediction suggested that all 19 DE lncRNAs could play different roles in the carcinogenesis and development of cervical cancer. The competing endogenous RNA (ceRNA) network based on DE coding and non-coding RNAs showed that each miRNA targeted a number of lncRNAs and circRNAs. The link between part of the miRNAs in the network and cervical cancer has been validated in previous studies, and these miRNAs targeted the majority of the novel non-coding RNAs, thus suggesting that these novel non-coding RNAs may be involved in cervical cancer. Taken together, our study shows that DE non-coding RNAs could be further developed as diagnostic and therapeutic biomarkers of cervical cancer. The complex ceRNA network also lays the foundation for future research of the roles of coding and non-coding RNAs in cervical cancer.

  11. Identification of Novel Long Non-coding and Circular RNAs in Human Papillomavirus-Mediated Cervical Cancer

    Science.gov (United States)

    Wang, Hongbo; Zhao, Yingchao; Chen, Mingyue; Cui, Jie

    2017-01-01

    Cervical cancer is the third most common cancer worldwide and the fourth leading cause of cancer-associated mortality in women. Accumulating evidence indicates that long non-coding RNAs (lncRNAs) and circular RNAs (circRNAs) may play key roles in the carcinogenesis of different cancers; however, little is known about the mechanisms of lncRNAs and circRNAs in the progression and metastasis of cervical cancer. In this study, we explored the expression profiles of lncRNAs, circRNAs, miRNAs, and mRNAs in HPV16 (human papillomavirus genotype 16) mediated cervical squamous cell carcinoma and matched adjacent non-tumor (ATN) tissues from three patients with high-throughput RNA sequencing (RNA-seq). In total, we identified 19 lncRNAs, 99 circRNAs, 28 miRNAs, and 304 mRNAs that were commonly differentially expressed (DE) in different patients. Among the non-coding RNAs, 3 lncRNAs and 44 circRNAs are novel to our knowledge. Functional enrichment analysis showed that DE lncRNAs, miRNAs, and mRNAs were enriched in pathways crucial to cancer as well as other gene ontology (GO) terms. Furthermore, the co-expression network and function prediction suggested that all 19 DE lncRNAs could play different roles in the carcinogenesis and development of cervical cancer. The competing endogenous RNA (ceRNA) network based on DE coding and non-coding RNAs showed that each miRNA targeted a number of lncRNAs and circRNAs. The link between part of the miRNAs in the network and cervical cancer has been validated in previous studies, and these miRNAs targeted the majority of the novel non-coding RNAs, thus suggesting that these novel non-coding RNAs may be involved in cervical cancer. Taken together, our study shows that DE non-coding RNAs could be further developed as diagnostic and therapeutic biomarkers of cervical cancer. The complex ceRNA network also lays the foundation for future research of the roles of coding and non-coding RNAs in cervical cancer. PMID:28970820

  12. Isolation and characterization of full-length cDNA clones coding for cholinesterase from fetal human tissues

    International Nuclear Information System (INIS)

    Prody, C.A.; Zevin-Sonkin, D.; Gnatt, A.; Goldberg, O.; Soreq, H.

    1987-01-01

    To study the primary structure and regulation of human cholinesterases, oligodeoxynucleotide probes were prepared according to a consensus peptide sequence present in the active site of both human serum pseudocholinesterase and Torpedo electric organ true acetylcholinesterase. Using these probes, the authors isolated several cDNA clones from λgt10 libraries of fetal brain and liver origins. These include 2.4-kilobase cDNA clones that code for a polypeptide containing a putative signal peptide and the N-terminal, active site, and C-terminal peptides of human BtChoEase, suggesting that they code either for BtChoEase itself or for a very similar but distinct fetal form of cholinesterase. In RNA blots of poly(A)+ RNA from the cholinesterase-producing fetal brain and liver, these cDNAs hybridized with a single 2.5-kilobase band. Blot hybridization to human genomic DNA revealed that these fetal BtChoEase cDNA clones hybridize with DNA fragments of a total length of 17.5 kilobases, and signal intensities indicated that these sequences are not present in many copies. Both the cDNA-encoded protein and its nucleotide sequence display striking homology to parallel sequences published for Torpedo AcChoEase. These findings demonstrate extensive homologies between the fetal BtChoEase encoded by these clones and other cholinesterases of various forms and species.

  13. How People Interpret Healthy Eating: Contributions of Qualitative Research

    Science.gov (United States)

    Bisogni, Carole A.; Jastran, Margaret; Seligson, Marc; Thompson, Alyssa

    2012-01-01

    Objective: To identify how qualitative research has contributed to understanding the ways people in developed countries interpret healthy eating. Design: Bibliographic database searches identified reports of qualitative, empirical studies published in English, peer-reviewed journals since 1995. Data Analysis: Authors coded, discussed, recoded, and…

  14. Operational interpretations of quantum discord

    International Nuclear Information System (INIS)

    Cavalcanti, D.; Modi, K.; Aolita, L.; Boixo, S.; Piani, M.; Winter, A.

    2011-01-01

    Quantum discord quantifies nonclassical correlations beyond the standard classification of quantum states into entangled and unentangled. Although it has received considerable attention, it still lacks any precise interpretation in terms of some protocol in which quantum features are relevant. Here we give quantum discord its first information-theoretic operational meaning in terms of entanglement consumption in an extended quantum-state-merging protocol. We further relate the asymmetry of quantum discord with the performance imbalance in quantum state merging and dense coding.

  15. Coding of visual object features and feature conjunctions in the human brain.

    Science.gov (United States)

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2008-01-01

    Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  16. Interpretive Reporting of Protein Electrophoresis Data by Microcomputer

    Science.gov (United States)

    Talamo, Thomas S.; Losos, Frank J.; Kessler, G. Frederick

    1982-01-01

    A microcomputer-based system for interpretive reporting of protein electrophoretic data has been developed. Data for serum, urine and cerebrospinal fluid protein electrophoreses as well as immunoelectrophoresis can be entered. Patient demographic information is entered through the keyboard, followed by manual entry of total and fractionated protein levels obtained after densitometer scanning of the electrophoretic strip. The patterns are then coded, interpreted, and final reports generated. In most cases interpretation time is less than one second. Misinterpretation by computer is uncommon and can be corrected by edit functions within the system. These discrepancies between computer and pathologist interpretation are automatically stored in a data file for later review and possible program modification. Any or all previous tests on a patient may be reviewed with graphic display of the electrophoretic pattern. The system has been in use for several months and is presently well accepted by both laboratory and clinical staff. It also allows rapid storage, retrieval and analysis of protein electrophoretic data.

  17. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by the HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  18. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of equation of state (EOS) and ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they may have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  19. Gestalt descriptions embodiments and medical image interpretation

    DEFF Research Database (Denmark)

    Friis, Jan Kyrre Berg Olsen

    2017-01-01

    In this paper I will argue that medical specialists interpret and diagnose through technological mediations like X-ray and fMRI images, and by actualizing embodied skills tacitly they are determining the identity of objects in the perceptual field. The initial phase of human interpretation of vis...

  20. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  1. Probabilistic interpretation of data: a physicist's approach

    CERN Document Server

    Miller, Guthrie

    2013-01-01

    This book is a physicist's approach to interpretation of data using Markov Chain Monte Carlo (MCMC). The concepts are derived from first principles using a style of mathematics that quickly elucidates the basic ideas, sometimes with the aid of examples. Probabilistic data interpretation is a straightforward problem involving conditional probability. A prior probability distribution is essential, and examples are given. In this small book (200 pages) the reader is led from the most basic concepts of mathematical probability all the way to parallel processing algorithms for Markov Chain Monte Carlo. Fortran source code (for eigenvalue analysis of finite discrete Markov Chains, for MCMC, and for nonlinear least squares) is included with the supplementary material for this book (available online).
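    For readers new to the technique, the core of MCMC is compact enough to sketch. The following minimal random-walk Metropolis sampler is an illustration of the idea only; the book's supplementary code is Fortran, this sketch is Python, and the toy posterior is our own example, not one from the book:

        import numpy as np

        def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
            """Minimal random-walk Metropolis sampler (illustrative only)."""
            rng = np.random.default_rng(seed)
            x, lp = x0, log_post(x0)
            chain = []
            for _ in range(n_steps):
                prop = x + step * rng.standard_normal()
                lp_prop = log_post(prop)
                # accept with probability min(1, posterior ratio)
                if np.log(rng.random()) < lp_prop - lp:
                    x, lp = prop, lp_prop
                chain.append(x)
            return np.array(chain)

        # usage: posterior for a mean under a flat prior and Gaussian likelihood
        data = np.array([1.2, 0.8, 1.1, 0.9])
        log_post = lambda mu: -0.5 * ((data - mu) ** 2).sum()
        samples = metropolis(log_post, x0=0.0, n_steps=5000)

    The histogram of the chain approximates the posterior; the step size trades acceptance rate against mixing speed.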

  2. Bayesian maximum posterior probability method for interpreting plutonium urinalysis data

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.

    1996-01-01

    A new internal dosimetry code for interpreting urinalysis data in terms of radionuclide intakes is described for the case of plutonium. The mathematical method is to maximise the Bayesian posterior probability using an entropy function as the prior probability distribution. A software package (MEMSYS) developed for image reconstruction is used. Some advantages of the new code are that it ensures positive calculated dose, it smooths out fluctuating data, and it provides an estimate of the propagated uncertainty in the calculated doses. (author)
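    As an illustration of the general approach (not the MEMSYS algorithm itself), one can maximize a posterior that combines a Gaussian likelihood with an entropy prior; the response matrix, noise level, and weight alpha below are hypothetical:

        import numpy as np
        from scipy.optimize import minimize

        def neg_posterior(I, A, m, sigma, alpha):
            # negative log-posterior: Gaussian misfit minus entropy prior
            resid = (A @ I - m) / sigma
            p = I / I.sum()
            entropy = -(p * np.log(p + 1e-12)).sum()
            return 0.5 * (resid ** 2).sum() - alpha * entropy

        # A[i, j]: hypothetical excretion response of measurement i to intake j
        rng = np.random.default_rng(0)
        A = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(8), np.arange(4) * 2.0)))
        I_true = np.array([0.0, 3.0, 0.0, 1.0])
        m = A @ I_true + 0.05 * rng.standard_normal(8)

        res = minimize(neg_posterior, x0=np.ones(4), args=(A, m, 0.05, 0.1),
                       bounds=[(1e-9, None)] * 4)   # positivity is enforced
        print(res.x)                                 # smoothed, positive intakes

    The positivity bounds and the entropy term mirror the two advantages quoted above: calculated doses stay positive and fluctuating data are smoothed.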

  3. Interpretation and method: Empirical research methods and the interpretive turn, 2nd ed.

    NARCIS (Netherlands)

    Yanow, D.; Schwartz-Shea, P.

    2014-01-01

    This book demonstrates the relevance, rigor, and creativity of interpretive research methodologies for the social and human sciences. The book situates methods questions within the context of broader methodological questions--specifically, the character of social realities and their "know-ability."

  4. On the classification of long non-coding RNAs

    KAUST Repository

    Ma, Lina; Bajic, Vladimir B.; Zhang, Zhang

    2013-01-01

    Long non-coding RNAs (lncRNAs) have been found to perform various functions in a wide variety of important biological processes. To ease the interpretation of lncRNA functionality and to enable deep mining of these transcribed sequences…

  5. A first interpretation of the MOL 7C/1 and MOL 7C/2 experiments

    International Nuclear Information System (INIS)

    Berthier, J.; Carluec, B.; Fortunato, M.; Lemmet, L.

    1983-01-01

    The interpretation of the two experiments MOL 7C/1 and MOL 7C/2 performed at the CEA Cadarache is presented here: first a review of the experimental conditions of the MOL experiments, then a short description of the codes used for the interpretation, and finally the results of this interpretation compared with the experimental results.

  6. Testing functional and morphological interpretations of enamel thickness along the deciduous tooth row in human children.

    OpenAIRE

    Mahoney, Patrick

    2013-01-01

    The significance of a gradient in enamel thickness along the human permanent molar row has been debated in the literature. Some attribute increased enamel thickness from first to third molars to greater bite force during chewing. Others argue that thicker third molar enamel relates to a smaller crown size facilitated by a reduced dentin component; thus differences in morphology, not function, explain enamel thickness. This study draws on these different interpretive models to assess enamel ...

  7. A methodology for interpretation of overcoring stress measurements in anisotropic rock

    International Nuclear Information System (INIS)

    Hakala, M.; Sjoeberg, J.

    2006-11-01

    The in situ state of stress is an important parameter for the design of a repository for final disposal of spent nuclear fuel. This report presents work conducted to improve the quality of overcoring stress measurements, focused on the interpretation of overcoring rock stress measurements when accounting for possible anisotropic behavior of the rock. The work comprised: (i) development/upgrading of a computer code for calculating stresses from overcoring strains for anisotropic materials and for a general overcoring probe configuration (up to six strain rosettes with six gauges each), (ii) development of a computer code for determining elastic constants for transversely isotropic rocks from biaxial testing, and (iii) analysis of case studies of selected overcoring measurements in both isotropic and anisotropic rocks from the Posiva and SKB sites in Finland and Sweden, respectively. The work was principally limited to transversely isotropic materials, although the stress calculation code is applicable also to orthotropic materials. The developed computer codes have been geared to work primarily with the Borre and CSIRO HI three-dimensional overcoring measurement probes. Application of the codes to selected case studies, showed that the developed tools were practical and useful for interpreting overcoring stress measurements conducted in anisotropic rock. A quantitative assessment of the effects of anisotropy may thus be obtained, which provides increased reliability in the stress data. Potential gaps in existing data and/or understanding can also be identified. (orig.)
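    For orientation, the transversely isotropic material model referred to above is the standard five-constant description of elasticity (textbook relations, not equations taken from the report). With the symmetry axis along x3, the in-plane constants E, ν and out-of-plane constants E', ν', G' enter Hooke's law in compliance form as:

        \varepsilon_{11} = \frac{\sigma_{11}}{E} - \frac{\nu}{E}\,\sigma_{22} - \frac{\nu'}{E'}\,\sigma_{33}, \qquad
        \varepsilon_{33} = -\frac{\nu'}{E'}\,(\sigma_{11} + \sigma_{22}) + \frac{\sigma_{33}}{E'},

        \gamma_{13} = \frac{\tau_{13}}{G'}, \qquad
        \gamma_{12} = \frac{2(1+\nu)}{E}\,\tau_{12}.

    Determining these five constants, e.g. from biaxial tests on the overcored sample, is what makes the anisotropic interpretation of strain-gauge readings possible.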

  8. How American Nurses Association Code of Ethics informs genetic/genomic nursing.

    Science.gov (United States)

    Tluczek, Audrey; Twal, Marie E; Beamer, Laura Curr; Burton, Candace W; Darmofal, Leslie; Kracun, Mary; Zanni, Karen L; Turner, Martha

    2018-01-01

    Members of the Ethics and Public Policy Committee of the International Society of Nurses in Genetics prepared this article to assist nurses in interpreting the American Nurses Association (2015) Code of Ethics for Nurses with Interpretive Statements (Code) within the context of genetics/genomics. The Code explicates the nursing profession's norms and responsibilities in managing ethical issues. The nearly ubiquitous application of genetic/genomic technologies in healthcare poses unique ethical challenges for nursing. Therefore, authors conducted literature searches that drew from various professional resources to elucidate implications of the code in genetic/genomic nursing practice, education, research, and public policy. We contend that the revised Code coupled with the application of genomic technologies to healthcare creates moral obligations for nurses to continually refresh their knowledge and capacities to translate genetic/genomic research into evidence-based practice, assure the ethical conduct of scientific inquiry, and continually develop or revise national/international guidelines that protect the rights of individuals and populations within the context of genetics/genomics. Thus, nurses have an ethical responsibility to remain knowledgeable about advances in genetics/genomics and incorporate emergent evidence into their work.

  9. Ink-constrained halftoning with application to QR codes

    Science.gov (United States)

    Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary

    2014-01-01

    This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for examples given in the paper. Numerous examples of QR codes with embedded images are included.
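    A minimal sketch of the underlying idea, under the common assumption that QR scanners sample near module centres; the function below is hypothetical, not the authors' code:

        import numpy as np

        def embed_image_in_qr(qr_modules, secondary, subpix=3):
            # qr_modules: 2-D array of 0/1 module values (1 = dark module).
            # secondary:  grayscale image in [0, 1], already resampled to
            #             (h*subpix, w*subpix) pixels.
            # subpix:     pixels per module side; the centre pixel of each
            #             module keeps the QR value, assuming scanners sample
            #             near module centres (common but not universal).
            h, w = qr_modules.shape
            out = np.zeros((h * subpix, w * subpix))
            for i in range(h):
                for j in range(w):
                    block = secondary[i*subpix:(i+1)*subpix, j*subpix:(j+1)*subpix]
                    # naive threshold halftone: dark pixel where image is dark
                    out[i*subpix:(i+1)*subpix, j*subpix:(j+1)*subpix] = block < 0.5
                    c = subpix // 2
                    # overwrite the centre pixel with the true module colour
                    out[i*subpix + c, j*subpix + c] = qr_modules[i, j]
            return out

    The information loss such overwriting introduces into the halftoned image is the kind of quantity the paper measures with entropy values.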

  10. Graphical user interface development for the MARS code

    International Nuclear Information System (INIS)

    Jeong, J.-J.; Hwang, M.; Lee, Y.J.; Kim, K.D.; Chung, B.D.

    2003-01-01

    KAERI has developed the best-estimate thermal-hydraulic system code MARS using the RELAP5/MOD3 and COBRA-TF codes. To exploit the excellent features of the two codes, we consolidated the two codes. Then, to improve the readability, maintainability, and portability of the consolidated code, all the subroutines were completely restructured by employing a modular data structure. At present, a major part of the MARS code development program is underway to improve the existing capabilities. The code couplings with three-dimensional neutron kinetics, containment analysis, and transient critical heat flux calculations have also been carried out. At the same time, graphical user interface (GUI) tools have been developed for user friendliness. This paper presents the main features of the MARS GUI. The primary objective of the GUI development was to provide a valuable aid for all levels of MARS users in their output interpretation and interactive controls. Especially, an interactive control function was designed to allow operator actions during simulation so that users can utilize the MARS code like conventional nuclear plant analyzers (NPAs). (author)

  11. Geophysical outlook. Part 8. Interactive interpretation comes of age

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, H.R. Jr.

    1982-05-01

    Computer-aided analysis is the obvious solution to handling the large volumes of geophysical data being generated by today's explorationists. When coupled with new developments in display devices, computer technology is especially relevant to interactive interpretation of seismic data, particularly for mapping, three-dimensional graphics, and color-coding purposes.

  12. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  13. Evaluation Codes from Order Domain Theory

    DEFF Research Database (Denmark)

    Andersen, Henning Ejnar; Geil, Hans Olav

    2008-01-01

    The celebrated Feng-Rao bound estimates the minimum distance of codes defined by means of their parity check matrices. From the Feng-Rao bound it is clear how to improve a large family of codes by leaving out certain rows in their parity check matrices. In this paper we derive a simple lower bound on the minimum distance of codes defined by means of their generator matrices. From our bound it is clear how to improve a large family of codes by adding certain rows to their generator matrices. The new bound is very much related to the Feng-Rao bound as well as to Shibuya and Sakaniwa's bound in [28]. Our bound is easily extended to deal with any generalized Hamming weights. We interpret our methods into the setting of order domain theory. In this way we fill in an obvious gap in the theory of order domains. [28] T. Shibuya and K. Sakaniwa, A Dual of Well-Behaving Type Designed Minimum Distance, IEICE…
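    For readers outside coding theory, the quantities being bounded are standard (textbook definitions, not taken from the paper). For a linear code C over a finite field:

        d(C) = \min\{\, \operatorname{wt}(c) \;:\; c \in C,\; c \neq 0 \,\}, \qquad
        d_r(C) = \min\{\, \lvert \operatorname{supp}(D) \rvert \;:\; D \text{ a subspace of } C,\; \dim D = r \,\},

    so that d_1(C) = d(C). The paper's bound estimates d(C) directly from a generator matrix, which is why adding suitable rows suggests itself as the improvement step.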

  14. Cracking the code of oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Philippe G Schyns

    2011-05-01

    Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for behavioral response--that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
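    The information-theoretic quantity at the heart of such analyses can be sketched in a few lines. This plug-in histogram estimator of mutual information is generic; it is not the authors' pipeline, and the variable names are invented:

        import numpy as np

        def mutual_information(x_bins, y_bins):
            """Plug-in (histogram) estimate of I(X;Y) in bits from paired
            discrete observations, e.g. stimulus category vs. binned phase."""
            joint = np.histogram2d(x_bins, y_bins,
                                   bins=(np.unique(x_bins).size,
                                         np.unique(y_bins).size))[0]
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

        # usage sketch: face categories vs. oscillation phase quantized to 8 bins
        # mi = mutual_information(stimulus_labels, np.digitize(phase, phase_bins))

    Comparing such estimates for phase bins versus power bins is, in spirit, how one can quantify that phase carries more task information than power.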

  15. Cognitive Themes Emerging from Air Photo Interpretation Texts Published to 1960

    Directory of Open Access Journals (Sweden)

    Raechel A. Bianchetti

    2015-04-01

    Remotely sensed images are important sources of information for a range of spatial problems. Air photo interpretation emerged as a discipline in response to the need to develop a systematic method for analysis of reconnaissance photographs during World War I. Remote sensing research has since focused on the development of automated methods of image analysis, shifting attention away from human interpretation processes. However, automated methods are far from perfect, and human interpretation remains an important component of image analysis. One important source of information concerning the human image interpretation process is the textual guides written within the discipline. These early texts put more emphasis than more recent texts on the details of the interpretation process, the role of the human in the process, and the cognitive skills involved. In the research reported here, we use content analysis to evaluate the discussion of air photo interpretation in historical texts published between 1922 and 1960. Results indicate that texts from this period emphasized the documentation of relationships between perceptual cues and image features of common interest, while reasoning skills and knowledge were discussed less. The results of this analysis provide a framework of the expert skills needed to perform image interpretation tasks. The framework is useful for informing the design of semi-automated tools for image analysis.

  16. Linking the plasma code EDGE2D to the neutral code NIMBUS for a self consistent transport model of the boundary

    International Nuclear Information System (INIS)

    De Matteis, A.

    1987-01-01

    This report describes the fully automatic linkage between the finite difference, two-dimensional code EDGE2D, based on the classical Braginskii partial differential equations of ion transport, and the Monte Carlo code NIMBUS, which solves the integral form of the stationary, linear Boltzmann equation for neutral transport in a plasma. The coupling has been performed for the real poloidal geometry of JET with two belt-limiters and real magnetic configurations with or without a single-null point. The new integrated system starts from the magnetic geometry computed by predictive or interpretative equilibrium codes and yields the plasma and neutral characteristics in the edge.
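    For orientation, the stationary linear Boltzmann equation that a neutral-transport Monte Carlo code such as NIMBUS solves can be written in its generic textbook form (not quoted from the report):

        \Omega \cdot \nabla \psi(\mathbf{r}, \Omega, E) + \Sigma_t(\mathbf{r}, E)\,\psi(\mathbf{r}, \Omega, E)
          = \int\!\!\int \Sigma_s(\mathbf{r};\, \Omega' \to \Omega,\ E' \to E)\,\psi(\mathbf{r}, \Omega', E')\, d\Omega'\, dE' + S(\mathbf{r}, \Omega, E),

    where ψ is the angular flux of neutrals, Σ_t and Σ_s are total and scattering cross-sections (loosely standing in for the relevant atomic processes), and S is the source; the plasma background supplied by EDGE2D enters through the cross-sections and the source term.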

  17. Effect of user interpretation on uncertainty estimates: examples from the air-to-milk transfer of radiocesium

    International Nuclear Information System (INIS)

    Kirchner, G.; Ring Peterson, S.; Bergstroem, U.; Bushell, S.; Davis, P.; Filistovic, V.; Hinton, T.G.; Krajewski, P.; Riesen, T.; Uijt de Haag, P.

    1998-01-01

    An important source of uncertainty in predictions of numerical simulation codes of environmental transport processes arises from the assumptions made by the user when interpreting the model and the scenario to be assessed. This type of uncertainty was examined systematically in this study and was compared with uncertainty due to varying parameter values in a code. Three terrestrial food chain codes that are driven by deposition of radionuclides from the atmosphere were used by up to ten participants to predict total deposition of 137Cs and concentrations on pasture and in milk for two release scenarios. Collective uncertainty among the predictions of the ten users for concentrations in milk calculated for one scenario by one code was a factor of 2000, while the largest individual uncertainty was 20 times lower. Choice of parameter values contributed most to user-induced uncertainty, followed by scenario interpretation. Due to the significant disparity in predictions, it is recommended that assessments should not be carried out alone by a single code user. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)

  18. An atlas of human long non-coding RNAs with accurate 5′ ends

    KAUST Repository

    Hon, Chung-Chau

    2017-02-28

    Long non-coding RNAs (lncRNAs) are largely heterogeneous and functionally uncharacterized. Here, using FANTOM5 cap analysis of gene expression (CAGE) data, we integrate multiple transcript collections to generate a comprehensive atlas of 27,919 human lncRNA genes with high-confidence 5′ ends and expression profiles across 1,829 samples from the major human primary cell types and tissues. Genomic and epigenomic classification of these lncRNAs reveals that most intergenic lncRNAs originate from enhancers rather than from promoters. Incorporating genetic and expression data, we show that lncRNAs overlapping trait-associated single nucleotide polymorphisms are specifically expressed in cell types relevant to the traits, implicating these lncRNAs in multiple diseases. We further demonstrate that lncRNAs overlapping expression quantitative trait loci (eQTL)-associated single nucleotide polymorphisms of messenger RNAs are co-expressed with the corresponding messenger RNAs, suggesting their potential roles in transcriptional regulation. Combining these findings with conservation data, we identify 19,175 potentially functional lncRNAs in the human genome.

  19. The small RNA content of human sperm reveals pseudogene-derived piRNAs complementary to protein-coding genes

    DEFF Research Database (Denmark)

    Pantano, Lorena; Jodar, Meritxell; Bak, Mads

    2015-01-01

    …-specific genes. The most abundant class of small noncoding RNAs in sperm are PIWI-interacting RNAs (piRNAs). Surprisingly, we found that human sperm cells contain piRNAs processed from pseudogenes. Clusters of piRNAs from human testes contain pseudogenes transcribed in the antisense strand and processed into small RNAs. Several human protein-coding genes contain antisense predicted targets of pseudogene-derived piRNAs in the male germline and these piRNAs are still found in mature sperm. Our study provides the most extensive data set and annotation of human sperm small RNAs to date and is a resource for further functional studies on the roles of sperm small RNAs. In addition, we propose that some of the pseudogene-derived human piRNAs may regulate expression of their parent gene in the male germline.

  20. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...

  1. LncRNAWiki: harnessing community knowledge in collaborative curation of human long non-coding RNAs

    KAUST Repository

    Ma, L.

    2014-11-15

    Long non-coding RNAs (lncRNAs) perform a diversity of functions in numerous important biological processes and are implicated in many human diseases. In this report we present lncRNAWiki (http://lncrna.big.ac.cn), a wiki-based platform that is open-content and publicly editable and aimed at community-based curation and collection of information on human lncRNAs. Current related databases are dependent primarily on curation by experts, making it laborious to annotate the exponentially accumulated information on lncRNAs, which inevitably requires collective efforts in community-based curation of lncRNAs. Unlike existing databases, lncRNAWiki features comprehensive integration of information on human lncRNAs obtained from multiple different resources and allows not only existing lncRNAs to be edited, updated and curated by different users but also the addition of newly identified lncRNAs by any user. It harnesses community collective knowledge in collecting, editing and annotating human lncRNAs and rewards community-curated efforts by providing explicit authorship based on quantified contributions. LncRNAWiki relies on the underlying knowledge of the scientific community for collective and collaborative curation of human lncRNAs and thus has the potential to serve as an up-to-date and comprehensive knowledgebase for human lncRNAs.

  2. HiView: an integrative genome browser to leverage Hi-C results for the interpretation of GWAS variants.

    Science.gov (United States)

    Xu, Zheng; Zhang, Guosheng; Duan, Qing; Chai, Shengjie; Zhang, Baqun; Wu, Cong; Jin, Fulai; Yue, Feng; Li, Yun; Hu, Ming

    2016-03-11

    Genome-wide association studies (GWAS) have identified thousands of genetic variants associated with complex traits and diseases. However, most of them are located in non-protein-coding regions, and it is therefore challenging to hypothesize the functions of these non-coding GWAS variants. Recent large efforts such as the ENCODE and Roadmap Epigenomics projects have predicted a large number of regulatory elements. However, the target genes of these regulatory elements remain largely unknown. Chromatin conformation capture-based technologies such as Hi-C can directly measure chromatin interactions and have generated an increasingly comprehensive catalog of the interactome between distal regulatory elements and their potential target genes. Leveraging such information revealed by Hi-C holds the promise of elucidating the functions of genetic variants in human diseases. In this work, we present HiView, the first integrative genome browser to leverage Hi-C results for the interpretation of GWAS variants. HiView is able to display Hi-C data and statistical evidence for chromatin interactions in genomic regions surrounding any given GWAS variant, enabling straightforward visualization and interpretation. We believe that as the first GWAS variant-centered Hi-C genome browser, HiView is a useful tool guiding post-GWAS functional genomics studies. HiView is freely accessible at: http://www.unc.edu/~yunmli/HiView.

  3. International Accreditation of ASME Codes and Standards

    International Nuclear Information System (INIS)

    Green, Mervin R.

    1989-01-01

    ASME established a Boiler Code Committee to develop rules for the design, fabrication and inspection of boilers. This year we recognize 75 years of that Code and will publish a history of those 75 years. The first Code and subsequent editions provided for a Code Symbol Stamp or mark which could be affixed by a manufacturer to a newly constructed product to certify that the manufacturer had designed, fabricated and inspected it in accordance with Code requirements. The purpose of the ASME Mark is to identify those boilers that meet ASME Boiler and Pressure Vessel Code requirements. Through thousands of updates over the years, the Code has been revised to reflect technological advances and changing safety needs. Its scope has been broadened from boilers to include pressure vessels, nuclear components and systems. Proposed revisions to the Code are published for public review and comment four times per year, and revisions and interpretations are published annually; it's a living and constantly evolving Code. You and your organizations are a vital part of the feedback system that keeps the Code alive. Because of this dynamic Code, we no longer have columns in newspapers listing boiler explosions. Nevertheless, it has been argued recently that ASME should go further in internationalizing its Code. Specifically, representatives of several countries have suggested that ASME delegate to them responsibility for Code implementation within their national boundaries. The question is thus posed: Has the time come to franchise responsibility for administration of ASME's Code accreditation programs to foreign entities or, perhaps, 'institutes'? And if so, how should this be accomplished?

  4. Interpretation of the CABRI-RAFT LTX test up to pin failure based on detailed data evaluation and PAPAS-2S code analysis

    International Nuclear Information System (INIS)

    Fukano, Yoshitaka; Sato, Ikken

    2001-09-01

    The CABRI-RAFT LTX test aims at a study of the fuel-pin-failure mechanism, in-pin fuel motion and post-failure fuel relocation with an annular fuel pin which was pre-irradiated up to a peak burn-up of 6.4 at%. Transient test conditions similar to those of the LT4 test were selected for the LTX test using the same type of fuel pin, allowing an effective direct comparison between the two tests. In contrast to the LT4 test, which showed a large PCMI-mitigation potential of the annular fuel-pin design, early pin failure occurred in the LTX test at a point when the fuel does not appear to have melted. In order to clarify the fuel-pin failure mechanism, an interpretation of the LTX test up to pin failure is performed in this study through an experimental data evaluation and a PAPAS-2S code analysis. The PAPAS-2S code reasonably simulates the fuel thermal conditions such as transient fuel-pin heat-up and fuel melting. The present detailed data evaluation shows that the earlier cladding failure compared with the LT4 test is mainly attributed to local cladding heat-up. Under the high-temperature condition, plenum gas pressure has a certain potential to explain the observed failure. Fuel swelling-induced PCMI does not seem significant in the LTX test and may have contributed to the early pin failure only to a limited extent, if any. (author)

  5. Establishing and evaluating bar-code technology in blood sampling system: a model based on human-centered design method.

    Science.gov (United States)

    Chou, Shin-Shang; Yan, Hsiu-Fang; Huang, Hsiu-Ya; Tseng, Kuan-Jui; Kuo, Shu-Chen

    2012-01-01

    This study used a human-centered design method to develop bar-code technology for the blood sampling process. Information gathered through multilevel analysis was used to construct the bar-code technology to verify patient identification, simplify the work process, and prevent medical errors. A Technology Acceptance Model questionnaire was developed to assess the effectiveness of the system, and data on patient identification and sample errors were collected daily. The average score of the 8-item perceived ease of use scale was 25.21 (3.72), of the 9-item perceived usefulness scale 28.53 (5.00), and of the 14-item task-technology fit scale 52.24 (7.09). The rates of patient identification errors and samples with cancelled orders dropped to zero; however, a new type of error emerged after the system was deployed, concerning the position of the barcode stickers on the sample tubes. Overall, more than half of the nurses (62.5%) were willing to use the new system.

  6. The Challenges of Qualitatively Coding Ancient Texts

    Science.gov (United States)

    Slingerland, Edward; Chudek, Maciej

    2012-01-01

    We respond to several important and valid concerns about our study ("The Prevalence of Folk Dualism in Early China," "Cognitive Science" 35: 997-1007) by Klein and Klein, defending our interpretation of our data. We also argue that, despite the undeniable challenges involved in qualitatively coding texts from ancient cultures,…

  7. Interpreting the Customary Rules on Interpretation

    NARCIS (Netherlands)

    Merkouris, Panos

    2017-01-01

    International courts have at times interpreted the customary rules on interpretation. This is interesting because what is being interpreted is: i) rules of interpretation, which sounds dangerously tautological, and ii) customary law, the interpretation of which has not been the object of critical

  8. Expansion of the CHR bone code system

    International Nuclear Information System (INIS)

    Farnham, J.E.; Schlenker, R.A.

    1976-01-01

    This report describes the coding system used in the Center for Human Radiobiology (CHR) to identify individual bones and portions of bones of a complete skeletal system. It includes illustrations of various bones and bone segments with their respective code numbers. Codes are also presented for bone groups and for nonbone materials

  9. Design of a Handheld Pseudo Random Coded UWB Radar for Human Sensing

    Directory of Open Access Journals (Sweden)

    Xia Zheng-huan

    2015-10-01

    This paper presents the design of a handheld pseudo random coded Ultra-WideBand (UWB) radar for human sensing. The main tasks of the radar are to track a moving human object and extract the human respiratory frequency. In order to achieve good penetrability and range resolution, an m-sequence with a carrier of 800 MHz is chosen as the transmitting signal. The modulated m-sequence can be generated directly by the high-speed DAC and FPGA to reduce the size of the radar system, and the mean power of the transmitting signal is 5 dBm. The receiver has two receiving channels based on hybrid sampling: the first receiving channel samples the reference signal and the second receiving channel acquires the radar echo. The real-time pulse compression is computed in parallel with a group of on-chip DSP48E slices in the FPGA to improve the scanning rate of the radar system. Additionally, the algorithms for moving-target tracking and life detection are implemented on an Intel microprocessor, and the detection results are sent to the micro-display fixed on the helmet. The experimental results show that a moving target located less than 16 m from the wall can be tracked, and the respiratory frequency of a static human less than 14 m from the wall can be extracted.
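    A minimal sketch of the two signal-processing ingredients named above, pseudo random (m-sequence) coding and correlation-based pulse compression; the taps, lengths, and noise level below are illustrative, not the radar's actual parameters:

        import numpy as np

        def m_sequence(taps, length):
            """Generate a +/-1 m-sequence from a Fibonacci LFSR whose feedback
            XORs the register cells at the given 0-based tap indices."""
            state = [1] * (max(taps) + 1)
            out = []
            for _ in range(length):
                out.append(state[-1])
                fb = 0
                for t in taps:
                    fb ^= state[t]
                state = [fb] + state[:-1]
            return np.array(out) * 2 - 1   # map {0, 1} -> {-1, +1}

        code = m_sequence([2, 4], 31)                # degree 5, period 2**5 - 1
        echo = np.zeros(64)
        echo[7:7 + 31] = code                        # return delayed by 7 chips
        echo += 0.1 * np.random.default_rng(1).standard_normal(64)

        # pulse compression: correlate the echo against the reference code
        compressed = np.correlate(echo, code, mode="valid")
        delay = int(compressed.argmax())             # recovers the 7-chip delay

    The sharp correlation peak is what gives the coded waveform its range resolution despite the low transmitted power.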

  10. A population genetic interpretation of GWAS findings for human quantitative traits

    Science.gov (United States)

    Bullaughey, Kevin; Hudson, Richard R.; Sella, Guy

    2018-01-01

    Human genome-wide association studies (GWASs) are revealing the genetic architecture of anthropometric and biomedical traits, i.e., the frequencies and effect sizes of variants that contribute to heritable variation in a trait. To interpret these findings, we need to understand how genetic architecture is shaped by basic population genetics processes—notably, by mutation, natural selection, and genetic drift. Because many quantitative traits are subject to stabilizing selection and because genetic variation that affects one trait often affects many others, we model the genetic architecture of a focal trait that arises under stabilizing selection in a multidimensional trait space. We solve the model for the phenotypic distribution and allelic dynamics at steady state and derive robust, closed-form solutions for summary statistics of the genetic architecture. Our results provide a simple interpretation for missing heritability and why it varies among traits. They predict that the distribution of variances contributed by loci identified in GWASs is well approximated by a simple functional form that depends on a single parameter: the expected contribution to genetic variance of a strongly selected site affecting the trait. We test this prediction against the results of GWASs for height and body mass index (BMI) and find that it fits the data well, allowing us to make inferences about the degree of pleiotropy and mutational target size for these traits. Our findings help to explain why the GWAS for height explains more of the heritable variance than the similarly sized GWAS for BMI and to predict the increase in explained heritability with study sample size. Considering the demographic history of European populations, in which these GWASs were performed, we further find that most of the associations they identified likely involve mutations that arose shortly before or during the Out-of-Africa bottleneck at sites with selection coefficients around s = 10⁻³.

  11. The moral code in Islam and organ donation in Western countries: reinterpreting religious scriptures to meet utilitarian medical objectives.

    Science.gov (United States)

    Rady, Mohamed Y; Verheijde, Joseph L

    2014-06-02

    End-of-life organ donation is controversial in Islam. The controversy stems from: (1) scientifically flawed medical criteria of death determination; (2) invasive perimortem procedures for preserving transplantable organs; and (3) incomplete disclosure of information to consenting donors and families. Data from a survey of Muslims residing in Western countries have shown that the interpretation of religious scriptures and advice of faith leaders were major barriers to willingness for organ donation. Transplant advocates have proposed corrective interventions: (1) reinterpreting religious scriptures, (2) reeducating faith leaders, and (3) utilizing media campaigns to overcome religious barriers in Muslim communities. This proposal disregards the intensifying scientific, legal, and ethical controversies in Western societies about the medical criteria of death determination in donors. It would also violate the dignity and inviolability of human life which are pertinent values incorporated in the Islamic moral code. Reinterpreting religious scriptures to serve the utilitarian objectives of a controversial end-of-life practice, perceived to be socially desirable, transgresses the Islamic moral code. It may also have deleterious practical consequences, as donors can suffer harm before death. The negative normative consequences of utilitarian secular moral reasoning reset the Islamic moral code upholding the sanctity and dignity of human life.

  12. The Nursing Code of Ethics: Its Value, Its History.

    Science.gov (United States)

    Epstein, Beth; Turner, Martha

    2015-05-31

    To practice competently and with integrity, today's nurses must have in place several key elements that guide the profession, such as an accreditation process for education, a rigorous system for licensure and certification, and a relevant code of ethics. The American Nurses Association has guided and supported nursing practice through creation and implementation of a nationally accepted Code of Ethics for Nurses with Interpretive Statements. This article will discuss ethics in society, professions, and nursing and illustrate how a professional code of ethics can guide nursing practice in a variety of settings. We also offer a brief history of the Code of Ethics, discuss the modern Code of Ethics, and describe the importance of periodic revision, including the inclusive and thorough process used to develop the 2015 Code and a summary of recent changes. Finally, the article provides implications for practicing nurses to assure that this document is a dynamic, useful resource in a variety of healthcare settings.

  13. The new Italian code of medical ethics.

    Science.gov (United States)

    Fineschi, V; Turillazzi, E; Cateni, C

    1997-01-01

    In June 1995, the Italian code of medical ethics was revised in order that its principles should reflect the ever-changing relationship between the medical profession and society and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often shows up a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline for the study of norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates. PMID:9279746

  14. RCS modeling with the TSAR FDTD code

    Energy Technology Data Exchange (ETDEWEB)

    Pennock, S.T.; Ray, S.L.

    1992-03-01

    The TSAR electromagnetic modeling system consists of a family of related codes that have been designed to work together to provide users with a practical way to set up, run, and interpret the results from complex 3-D finite-difference time-domain (FDTD) electromagnetic simulations. The software has been in development at the Lawrence Livermore National Laboratory (LLNL) and at other sites since 1987. Active internal use of the codes began in 1988, with limited external distribution and use beginning in 1991. TSAR was originally developed to analyze high-power microwave and EMP coupling problems. However, the general-purpose nature of the tools has enabled us to use the codes to solve a broader class of electromagnetic applications and has motivated the addition of new features. In particular, a family of near-to-far-field transformation routines has been added to the codes, enabling TSAR to be used for radar cross-section and antenna analysis problems.

  15. When Is Coding Scholarship And When Is It Not?

    NARCIS (Netherlands)

    van Zundert, Joris J.; Haentjens Dekker, R.

    2015-01-01

    We argue that the humanities and digital humanities need to consider certain forms of code as scholarly object and certain types of code authorship as a scholarly activity. It will be essential, therefore, to develop a scholarly mode of evaluating and criticizing these scholarly contributions in

  16. Interpretation, compilation and field verification procedures in the CARETS project

    Science.gov (United States)

    Alexander, Robert H.; De Forth, Peter W.; Fitzpatrick, Katherine A.; Lins, Harry F.; McGinty, Herbert K.

    1975-01-01

    The production of the CARETS map data base involved the development of a series of procedures for interpreting, compiling, and verifying data obtained from remote sensor sources. Level II land use mapping from high-altitude aircraft photography at a scale of 1:100,000 required production of a photomosaic mapping base for each of the 48 sheets (each 50 x 50 km), and the interpretation and coding of land use polygons on drafting film overlays. CARETS researchers also produced a series of 1970 to 1972 land use change overlays, using the 1970 land use maps and 1972 high-altitude aircraft photography. To enhance the value of the land use sheets, researchers compiled a series of overlays showing cultural features, county boundaries and census tracts, surface geology, and drainage basins. In producing Level I land use maps from Landsat imagery, at a scale of 1:250,000, interpreters overlaid drafting film directly on Landsat color composite transparencies and interpreted on the film. They found that such interpretation involves pattern and spectral signature recognition. In studies using Landsat imagery, interpreters identified numerous areas of change but also identified extensive areas of "false change," where Landsat spectral signatures but not land use had changed.

  17. Perceived differences between chimpanzee (Pan troglodytes) and human (Homo sapiens) facial expressions are related to emotional interpretation.

    Science.gov (United States)

    Waller, Bridget M; Bard, Kim A; Vick, Sarah-Jane; Smith Pasqualini, Marcia C

    2007-11-01

    Human face perception is a finely tuned, specialized process. When comparing faces between species, therefore, it is essential to consider how people make these observational judgments. Comparing facial expressions may be particularly problematic, given that people tend to consider them categorically as emotional signals, which may affect how accurately specific details are processed. The bared-teeth display (BT), observed in most primates, has been proposed as a homologue of the human smile (J. A. R. A. M. van Hooff, 1972). In this study, judgments of similarity between BT displays of chimpanzees (Pan troglodytes) and human smiles varied in relation to perceived emotional valence. When a chimpanzee BT was interpreted as fearful, observers tended to underestimate the magnitude of the relationship between certain features (the extent of lip corner raise) and human smiles. These judgments may reflect the combined effects of categorical emotional perception, configural face processing, and perceptual organization in mental imagery and may demonstrate the advantages of using standardized observational methods in comparative facial expression research. Copyright 2007 APA.

  18. Verification of the CONPAS (CONtainment Performance Analysis System) code package

    International Nuclear Information System (INIS)

    Kim, See Darl; Ahn, Kwang Il; Song, Yong Man; Choi, Young; Park, Soo Yong; Kim, Dong Ha; Jin, Young Ho.

    1997-09-01

    CONPAS is a computer code package that automatically integrates the numerical, graphical, and results-oriented aspects of Level 2 probabilistic safety assessment (PSA) for nuclear power plants under a PC window environment. For the integrated analysis of Level 2 PSA, the code utilizes four distinct but closely related modules: (1) ET Editor, (2) Computer, (3) Text Editor, and (4) Mechanistic Code Plotter. Compared with other existing computer codes for Level 2 PSA, CONPAS provides several advanced features: computational aspects including systematic uncertainty analysis, importance analysis, sensitivity analysis and data interpretation; reporting aspects including tabling and graphics; and a user-friendly interface. The computational performance of CONPAS has been verified through a Level 2 PSA for a reference plant. The results of the CONPAS code were compared with an existing Level 2 PSA code (NUCAP+), and the comparison shows that CONPAS is appropriate for Level 2 PSA. (author). 9 refs., 8 tabs., 14 figs

  19. Purposes of double taxation treaties and interpretation of beneficial owner concept in Ukraine

    OpenAIRE

    Pavlo Selezen

    2017-01-01

    The term "beneficial owner" has been interpreted by Ukrainian courts in connection with the application of double taxation treaties' provisions since the adoption of the Tax Code of Ukraine in 2010. The changing nature of the beneficial owner concept, its importance as an instrument for counteracting treaty shopping, and the necessity of its proper interpretation in the Ukrainian reality are the main factors that have a strong impact on the development of court practice concerning beneficial ownership…

  20. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  1. A fast and compact Fuel Rod Performance Simulator code for predictive, interpretive and educational purpose

    International Nuclear Information System (INIS)

    Lorenzen, J.

    1990-01-01

    A new Fuel Rod Performance Simulator (FRPS) code has been developed, tested and benchmarked and is now available in different versions. The user may choose between the batch version INTERPIN, which produces results in the form of listings or predefined plots, and the interactive simulator code SIMSIM, which steps through a power history under the control of the user. Both versions are presently running on minicomputers and PCs using EGA graphics. A third version is the implementation in a Studsvik Compact Simulator, with FRPS being one of its various modules, receiving the dynamic inputs from the simulator.

  2. Interpretations of linguistic identity in contemporary social and humanitarian knowledge

    Directory of Open Access Journals (Sweden)

    G. V. Liakhovich

    2015-03-01

    Despite the existence of a plurality of interpretations of linguistic identity, projecting the categories of the symbolic, the real, and the imagined onto language problems can be an innovative approach to the study of linguistic phenomena, as it allows the emphasis to shift from standardized methods to the reflective identification of the meanings and codes of a phenomenon or process.

  3. Multidomain analyses of a longitudinal human microbiome intestinal cleanout perturbation experiment.

    Science.gov (United States)

    Fukuyama, Julia; Rumker, Laurie; Sankaran, Kris; Jeganathan, Pratheepa; Dethlefsen, Les; Relman, David A; Holmes, Susan P

    2017-08-01

    Our work focuses on the stability, resilience, and response to perturbation of the bacterial communities in the human gut. Informative flash flood-like disturbances that eliminate most gastrointestinal biomass can be induced using a clinically-relevant iso-osmotic agent. We designed and executed such a disturbance in human volunteers using a dense longitudinal sampling scheme extending before and after induced diarrhea. This experiment has enabled a careful multidomain analysis of a controlled perturbation of the human gut microbiota with a new level of resolution. These new longitudinal multidomain data were analyzed using recently developed statistical methods that demonstrate improvements over current practices. By imposing sparsity constraints we have enhanced the interpretability of the analyses and by employing a new adaptive generalized principal components analysis, incorporated modulated phylogenetic information and enhanced interpretation through scoring of the portions of the tree most influenced by the perturbation. Our analyses leverage the taxa-sample duality in the data to show how the gut microbiota recovers following this perturbation. Through a holistic approach that integrates phylogenetic, metagenomic and abundance information, we elucidate patterns of taxonomic and functional change that characterize the community recovery process across individuals. We provide complete code and illustrations of new sparse statistical methods for high-dimensional, longitudinal multidomain data that provide greater interpretability than existing methods.

  4. Coded Shack-Hartmann Wavefront Sensor

    KAUST Repository

    Wang, Congli

    2016-12-01

    Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors are limited by time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution. In this thesis, we introduce an optically encoded and computationally decodable approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, accurate, and offers high resolution. Most importantly, using simple optical flow tracking combined with a phase smoothness prior and modern optimization techniques, the computational part is split, efficient, and parallelized; hence real-time performance has been achieved on a Graphics Processing Unit (GPU) with high accuracy. This is validated by experimental results. We also show how the optical flow intensity consistency term can be derived using rigorous scalar diffraction theory with proper approximation; this is the true physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor. This offers a new theoretical approach for wavefront sensor design.
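
    The reconstruction step can be pictured as regularized least squares: recover the phase surface whose finite-difference gradients best match the measured slopes, with a small Tikhonov term standing in for the smoothness prior and fixing the undetermined piston. The sketch below is a generic slope-to-wavefront solver under those assumptions, not the thesis' GPU optical-flow pipeline.

      import numpy as np

      def grad_ops(n: int):
          # Forward-difference operators on an n x n grid (row-major).
          D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
          return np.kron(np.eye(n), D), np.kron(D, np.eye(n))  # d/dx, d/dy

      def reconstruct(sx, sy, n: int, lam: float = 1e-6):
          Gx, Gy = grad_ops(n)
          A = np.vstack([Gx, Gy])
          s = np.concatenate([sx, sy])
          # Small Tikhonov term makes the piston-ambiguous system solvable.
          w = np.linalg.solve(A.T @ A + lam * np.eye(n * n), A.T @ s)
          return w.reshape(n, n)

      n = 8
      y, x = np.mgrid[0:n, 0:n]
      true = 0.05 * ((x - 3.5) ** 2 + (y - 3.5) ** 2)       # toy wavefront
      Gx, Gy = grad_ops(n)
      est = reconstruct(Gx @ true.ravel(), Gy @ true.ravel(), n)
      # Compare up to piston (the mean), which slope data cannot fix.
      print(np.max(np.abs((est - est.mean()) - (true - true.mean()))))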

  5. The human PINK1 locus is regulated in vivo by a non-coding natural antisense RNA during modulation of mitochondrial function

    Directory of Open Access Journals (Sweden)

    Wahlestedt Claes

    2007-03-01

    Full Text Available Abstract Background Mutations in the PTEN induced putative kinase 1 (PINK1 are implicated in early-onset Parkinson's disease. PINK1 is expressed abundantly in mitochondria rich tissues, such as skeletal muscle, where it plays a critical role determining mitochondrial structural integrity in Drosophila. Results Herein we characterize a novel splice variant of PINK1 (svPINK1 that is homologous to the C-terminus regulatory domain of the protein kinase. Naturally occurring non-coding antisense provides sophisticated mechanisms for diversifying genomes and we describe a human specific non-coding antisense expressed at the PINK1 locus (naPINK1. We further demonstrate that PINK1 varies in vivo when human skeletal muscle mitochondrial content is enhanced, supporting the idea that PINK1 has a physiological role in mitochondrion. The observation of concordant regulation of svPINK1 and naPINK1 during in vivo mitochondrial biogenesis was confirmed using RNAi, where selective targeting of naPINK1 results in loss of the PINK1 splice variant in neuronal cell lines. Conclusion Our data presents the first direct observation that a mammalian non-coding antisense molecule can positively influence the abundance of a cis-transcribed mRNA under physiological abundance conditions. While our analysis implies a possible human specific and dsRNA-mediated mechanism for stabilizing the expression of svPINK1, it also points to a broader genomic strategy for regulating a human disease locus and increases the complexity through which alterations in the regulation of the PINK1 locus could occur.

  6. Coding potential of the products of alternative splicing in human.

    KAUST Repository

    Leoni, Guido

    2011-01-20

    BACKGROUND: Analysis of the human genome has revealed that as much as an order of magnitude more of the genomic sequence is transcribed than accounted for by the predicted and characterized genes. A number of these transcripts are alternatively spliced forms of known protein coding genes; however, it is becoming clear that many of them do not necessarily correspond to a functional protein. RESULTS: In this study we analyze alternative splicing isoforms of human gene products that are unambiguously identified by mass spectrometry and compare their properties with those of isoforms of the same genes for which no peptide was found in publicly available mass spectrometry datasets. We analyze them in detail for the presence of uninterrupted functional domains, active sites as well as the plausibility of their predicted structure. We report how well each of these strategies and their combination can correctly identify translated isoforms and derive a lower limit for their specificity, that is, their ability to correctly identify non-translated products. CONCLUSIONS: The most effective strategy for correctly identifying translated products relies on the conservation of active sites, but it can only be applied to a small fraction of isoforms, while a reasonably high coverage, sensitivity and specificity can be achieved by analyzing the presence of non-truncated functional domains. Combining the latter with an assessment of the plausibility of the modeled structure of the isoform increases both coverage and specificity with a moderate cost in terms of sensitivity.
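
    Read operationally, the strategies compared above amount to a small decision rule over per-isoform features. The following combiner is hypothetical - the feature names and ordering of checks are illustrative, not the authors' pipeline - but it captures the reported trade-off between specificity and coverage.

      from typing import Optional

      def classify_isoform(active_site_conserved: Optional[bool],
                           domains_intact: bool,
                           structure_plausible: bool) -> str:
          # Active-site conservation is the most specific signal but is
          # only defined for the few isoforms overlapping a known site.
          if active_site_conserved is not None:
              return "translated" if active_site_conserved else "non-translated"
          # Broader-coverage fallback: intact domains plus a plausible model.
          if domains_intact and structure_plausible:
              return "translated"
          return "non-translated"

      print(classify_isoform(None, True, True))    # -> translated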

  7. Coding potential of the products of alternative splicing in human.

    KAUST Repository

    Leoni, Guido; Le Pera, Loredana; Ferrè, Fabrizio; Raimondo, Domenico; Tramontano, Anna

    2011-01-01

    BACKGROUND: Analysis of the human genome has revealed that as much as an order of magnitude more of the genomic sequence is transcribed than accounted for by the predicted and characterized genes. A number of these transcripts are alternatively spliced forms of known protein coding genes; however, it is becoming clear that many of them do not necessarily correspond to a functional protein. RESULTS: In this study we analyze alternative splicing isoforms of human gene products that are unambiguously identified by mass spectrometry and compare their properties with those of isoforms of the same genes for which no peptide was found in publicly available mass spectrometry datasets. We analyze them in detail for the presence of uninterrupted functional domains, active sites as well as the plausibility of their predicted structure. We report how well each of these strategies and their combination can correctly identify translated isoforms and derive a lower limit for their specificity, that is, their ability to correctly identify non-translated products. CONCLUSIONS: The most effective strategy for correctly identifying translated products relies on the conservation of active sites, but it can only be applied to a small fraction of isoforms, while a reasonably high coverage, sensitivity and specificity can be achieved by analyzing the presence of non-truncated functional domains. Combining the latter with an assessment of the plausibility of the modeled structure of the isoform increases both coverage and specificity with a moderate cost in terms of sensitivity.

  8. The neural dynamics of reward value and risk coding in the human orbitofrontal cortex.

    Science.gov (United States)

    Li, Yansong; Vanni-Mercier, Giovanna; Isnard, Jean; Mauguière, François; Dreher, Jean-Claude

    2016-04-01

    The orbitofrontal cortex is known to carry information regarding expected reward, risk and experienced outcome. Yet, due to inherent limitations in lesion and neuroimaging methods, the neural dynamics of these computations have remained elusive in humans. Here, taking advantage of the high temporal resolution of intracranial recordings, we characterize the neurophysiological signatures of the intact orbitofrontal cortex in processing information relevant for risky decisions. Local field potentials were recorded from the intact orbitofrontal cortex of patients suffering from drug-refractory partial epilepsy with implanted depth electrodes as they performed a probabilistic reward learning task that required them to associate visual cues with distinct reward probabilities. We observed three successive signals: (i) around 400 ms after cue presentation, the amplitudes of the local field potentials increased with reward probability; (ii) a risk signal emerged during the late phase of reward anticipation and during the outcome phase; and (iii) an experienced-value signal appeared at the time of reward delivery. Both the medial and lateral orbitofrontal cortex encoded risk and reward probability, while the lateral orbitofrontal cortex played a dominant role in coding experienced value. The present study provides the first evidence from intracranial recordings that the human orbitofrontal cortex codes reward risk both during late reward anticipation and during the outcome phase at a time scale of milliseconds. Our findings offer insights into the rapid mechanisms underlying the ability to learn structural relationships from the environment.

  9. HOW TO REPRESENT THE GENETIC CODE?

    Directory of Open Access Journals (Sweden)

    N.S. Santos-Magalhães

    2004-05-01

    Full Text Available The advent of molecular genetics comprises a true revolution of far-reaching consequences for humankind, which has evolved into a specialized branch of modern-day biochemistry. The analysis of specific genomic information is gaining wide-ranging interest because of its significance to the early diagnosis of disease and the discovery of modern drugs. In order to take advantage of a wide assortment of signal processing (SP) algorithms, the primary step of modern genomic SP involves converting symbolic DNA sequences into complex-valued signals. How to represent the genetic code? Despite being extensively known, the DNA mapping into proteins is one of the most relevant discoveries of genetics. The genetic code (GC) is revisited in this work, addressing other descriptions for it which can be worthy for genomic SP. Three original representations are discussed. The inner-to-outer map builds on the unbalanced role of the nucleotides of a codon. A two-dimensional Gray genetic representation is offered as a structured map that can help in interpreting DNA spectrograms or scalograms, which are among the powerful visual tools for genome analysis and depend on the choice of the genetic mapping. Finally, the world-chart for the GC is investigated. Evoking the cyclic structure of the genetic mapping, it can be folded by joining the left-right borders and the top-bottom frontiers. As a result, the GC can be drawn on the surface of a sphere resembling a world map. Eight parallels of latitude are required (four in each hemisphere) as well as four meridians of longitude associated with four corresponding anti-meridians. The tropic circles lie at 11.25°, 33.75°, 56.25°, and 78.75° (North and South). Starting from an arbitrary Greenwich meridian, the meridians of longitude can be plotted at 22.5°, 67.5°, 112.5°, and 157.5° (East and West). Each triplet is assigned to a single point on the surface, which we named the Nirenberg-Kohama Earth. Despite being valuable, usual representations for the GC can be
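
    The "primary step" mentioned above, turning a symbolic DNA string into a complex-valued signal so that standard SP tools apply, works with any fixed nucleotide-to-complex assignment. The sketch below uses one common corner-of-the-unit-square convention; the particular assignment is an illustrative choice, not one of the paper's three representations.

      import numpy as np

      # A conventional complex-valued mapping for genomic signal processing.
      MAPPING = {"A": 1 + 1j, "C": -1 + 1j, "G": -1 - 1j, "T": 1 - 1j}

      def dna_to_signal(seq: str) -> np.ndarray:
          return np.array([MAPPING[b] for b in seq.upper()])

      signal = dna_to_signal("ATGGCCATTGTAATG")
      spectrum = np.abs(np.fft.fft(signal))   # e.g. feeds a DNA spectrogram
      print(spectrum.round(2))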

  10. Objective interpretation as conforming interpretation

    Directory of Open Access Journals (Sweden)

    Lidka Rodak

    2011-12-01

    Full Text Available The practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature, which has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in the practical discourse, focusing on the understanding offered by the judicature. The thesis of the article is that objective interpretation, as identified with the textualists' position, is not possible to uphold, and should rather be linked with conforming interpretation. What this actually implies is that it is not the virtues of certainty and predictability, which are usually associated with objectivity, but coherence that makes the foundation of the applicability of objectivity in law. What can be observed from the analyses is that both the phenomenon of conforming interpretation and objective interpretation play the role of arguments in the interpretive discourse, arguments that provide justification that an interpretation is not arbitrary or subjective. With regard to an important part of the ideology of legal application, namely the conviction that decisions should be taken on the basis of law in order to exclude arbitrariness, objective interpretation can be read as the question of what kind of authority "supports" a certain interpretation, one that is almost never free of judicial creativity and judicial activism. One can say that objective and conforming interpretation are just further arguments used in legal discourse.

  11. Input parameters to codes which analyze LMFBR wire-wrapped bundles

    International Nuclear Information System (INIS)

    Hawley, J.T.; Chan, Y.N.; Todreas, N.E.

    1980-12-01

    This report provides a current summary of recommended values of key input parameters required by ENERGY code analysis of LMFBR wire-wrapped bundles. These data are based on the interpretation of experimental results from the MIT and other available laboratory programs.

  12. Theoretical interpretation of SCARABEE single pin in-pile boiling experiments

    International Nuclear Information System (INIS)

    Struwe, D.; Bottoni, M.; Fries, W.; Elbel, H.; Angerer, G.

    1977-01-01

    In the framework of LMFBR safety analysis, a theoretical interpretation of some of the most representative single-pin experiments of the in-pile SCARABEE project has been performed from the viewpoints of both thermohydraulics and fuel behaviour, using the computer codes CAPRI-2 and SATURN-1. The analysis is aimed at investigating pin behaviour from the preirradiation history through the observed sequence of events following a coolant mass-flow reduction, from boiling inception up to pin breakdown. A comparison of theoretical results with experimentally recorded data has allowed a deeper insight into the peculiar features of the experiments and enabled a valuable code verification. (Auth.)

  13. On coding genotypes for genetic markers with multiple alleles in genetic association study of quantitative traits

    Directory of Open Access Journals (Sweden)

    Wang Tao

    2011-09-01

    Full Text Available Abstract Background In genetic association studies of quantitative traits using F∞ models, how to code the marker genotypes and interpret the model parameters appropriately is important for constructing hypothesis tests and making statistical inferences. Currently, the coding of marker genotypes in building F∞ models has mainly focused on the biallelic case. A thorough treatment of the coding of marker genotypes and the interpretation of model parameters for F∞ models is needed, especially for genetic markers with multiple alleles. Results In this study, we formulate F∞ genetic models under various regression model frameworks and introduce three genotype coding schemes for genetic markers with multiple alleles. Starting from an allele-based modeling strategy, we first describe a regression framework to model the expected genotypic values at given markers. Then, as an extension from the biallelic case, we introduce three coding schemes for constructing fully parameterized one-locus F∞ models and discuss the relationships between the model parameters and the expected genotypic values. Next, under a simplified modeling framework for the expected genotypic values, we consider several reduced one-locus F∞ models from the three coding schemes with regard to the estimability and interpretation of their model parameters. Finally, we explore some extensions of the one-locus F∞ models to two loci. Several fully parameterized as well as reduced two-locus F∞ models are addressed. Conclusions The genotype coding schemes provide different ways to construct F∞ models for association testing of multi-allele genetic markers with quantitative traits. Which coding scheme should be applied depends on how conveniently it can provide the statistical inferences on the parameters of research interest. Based on these F∞ models, standard regression model fitting tools can be used to estimate and test for various genetic effects through statistical contrasts with the
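
    Concretely, each coding scheme fixes how a multi-allele genotype becomes columns of a regression design matrix. The sketch below builds a simple allele-count (additive) coding against a reference allele - an illustrative reduced coding in the same spirit, not one of the paper's fully parameterized F∞ schemes.

      import numpy as np

      def additive_design(genotypes, alleles, reference):
          # One column per non-reference allele, holding its copy number.
          cols = [a for a in alleles if a != reference]
          X = np.zeros((len(genotypes), len(cols)))
          for i, (a1, a2) in enumerate(genotypes):
              for j, a in enumerate(cols):
                  X[i, j] = (a1 == a) + (a2 == a)
          return X

      G = [("A1", "A1"), ("A1", "A2"), ("A2", "A3"), ("A3", "A3")]
      X = additive_design(G, ["A1", "A2", "A3"], reference="A1")
      print(X)   # regress the trait on X (add dominance terms for full models)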

  14. The Art Of Interpretation – Chances And Risks On Interpretation In The Field Of Mobile Testing

    Directory of Open Access Journals (Sweden)

    Palatini Kerstin

    2014-12-01

    Full Text Available Carrying out a usability test is a demanding process per se. Mobile tests heighten this challenge because they are subject to real usage conditions and therefore to unforeseeable factors. On the one hand there are technical factors such as tools, software and laboratory equipment; on the other hand there are the human beings with their knowledge and decision-making. They make the selection of tools, methods and data, and they decide in every situation of the testing process. Using a mobile eye-tracking test, the authors explain where the sources of interpretation lie and when a misinterpretation becomes an error. Considerations on interpretation and hermeneutics from the philosophy of technology support the recognition of the potential of interpretation. As a result, misinterpretation can be minimized.

  15. Is the international safety management code an organisational tool ...

    African Journals Online (AJOL)

    The birth of the International Management Code for the Safe Operation of Ships and for Pollution Prevention (hereinafter the ISM Code) is said to be a reaction to the sinking of the Herald of Free Enterprise on 6th March 1987. The human element is said to be a generic term used to describe what makes humans behave the way ...

  16. Isolation and sequencing of a cDNA coding for the human DF3 breast carcinoma-associated antigen

    International Nuclear Information System (INIS)

    Siddiqui, J.; Abe, M.; Hayes, D.; Shani, E.; Yunis, E.; Kufe, D.

    1988-01-01

    The murine monoclonal antibody (mAb) DF3 reacts with a high molecular weight glycoprotein detectable in human breast carcinomas. DF3 antigen expression correlates with human breast tumor differentiation, and the detection of a cross-reactive species in human milk has suggested that this antigen might be useful as a marker of differentiated mammary epithelium. To further characterize DF3 antigen expression, the authors have isolated a cDNA clone from a λgt11 library by screening with mAb DF3. The results demonstrate that this 309-base-pair cDNA, designated pDF9.3, codes for the DF3 epitope. Southern blot analyses of EcoRI-digested DNAs from six human tumor cell lines with 32P-labeled pDF9.3 have revealed a restriction fragment length polymorphism. Variations in size of the alleles detected by pDF9.3 were also identified in Pst I, but not in HindIII, DNA digests. Furthermore, hybridization of 32P-labeled pDF9.3 with total cellular RNA from each of these cell lines demonstrated either one or two transcripts that varied from 4.1 to 7.1 kilobases in size. The presence of differently sized transcripts detected by pDF9.3 was also found to correspond with the polymorphic expression of DF3 glycoproteins. Nucleotide sequence analysis of pDF9.3 has revealed a highly conserved (G + C)-rich 60-base-pair tandem repeat. These findings suggest that the variation in size of alleles coding for the polymorphic DF3 glycoprotein may represent different numbers of repeats

  17. Personal literary interpretation

    Directory of Open Access Journals (Sweden)

    Michał Januszkiewicz

    2015-11-01

    Full Text Available The article titled “Personal literary interpretation” deals with problems which have usually been marginalized in literary studies, but which seem to be very important in the context of the humanities, as broadly defined. The author of this article intends to rethink the problem of literary studies not in objective, but in personal terms. This is why the author wants to talk about what he calls personal literary interpretation, which has nothing to do with subjective or irrational thinking, but which is rather grounded in the hermeneutical rule that says that one must believe in order to understand a text or the other (where ‘believe’ also means ‘to love’, ‘engage’, and ‘be open’). The article presents different determinants of this attitude, ranging from Dilthey to Heidegger and Gadamer. Finally, the author subscribes to the theory of personal interpretation, which is always dialogical.

  18. The Unity-Of-Value as a Theory of Interpretation

    Directory of Open Access Journals (Sweden)

    Allan Gomes Moreira

    2016-12-01

    Full Text Available This paper aims to address the unity of value as a general theory of interpretation. By overcoming the supposed division between law and morality and considering interpretive activity the central element of a normative theory of all human endeavor, Dworkin highlights the limitations of the legal positivism designed by Hart in providing adequate solutions to the "hard cases", and broadens the spectrum for finding a "correct answer" to specific cases within a normative theory linked to political morality, expressed through interpretive activity, whose meaning is the value assigned by the interpreter to a particular event or object.

  19. Investigating the Simulink Auto-Coding Process

    Science.gov (United States)

    Gualdoni, Matthew J.

    2016-01-01

    Model-based program design is the most clear and direct way to develop algorithms and programs for interfacing with hardware. While coding "by hand" results in a more tailored product, the ever-growing size and complexity of modern-day applications can cause the project workload to quickly become unreasonable for one programmer. This has generally been addressed by splitting the product into separate modules to allow multiple developers to work in parallel on the same project; however, this introduces new potential for errors into the process. The fluidity, reliability and robustness of the code rely on the abilities of the programmers to communicate their methods to one another; furthermore, multiple programmers invite multiple potentially differing coding styles into the same product, which can cause a loss of readability or even module incompatibility. Fortunately, MathWorks has implemented an auto-coding feature that allows programmers to design their algorithms through the use of models and diagrams in the graphical programming environment Simulink, allowing the designer to visually determine what the hardware is to do. From here, the auto-coding feature handles converting the project into another programming language. This type of approach allows the designer to clearly see how the software will be directing the hardware without the need to try to interpret large amounts of code. In addition, it speeds up the programming process, minimizing the amount of man-hours spent on a single project, thus reducing the chance of human error as well as project turnover time. One such project that has benefited from the auto-coding procedure is Ramses, a portion of the GNC flight software on board Orion that has been implemented primarily in Simulink. Currently, however, auto-coding Ramses into C++ requires 5 hours of code generation time. This causes issues if the tool ever needs to be debugged, as this code generation will need to occur with each edit to any part of

  20. 77 FR 38173 - Child Labor Regulations, Orders and Statements of Interpretation

    Science.gov (United States)

    2012-06-27

    ... DEPARTMENT OF LABOR Wage and Hour Division 29 CFR Part 570 Child Labor Regulations, Orders and Statements of Interpretation CFR Correction 0 In Title 29 of the Code of Federal Regulations, Parts 500 to 899, revised as of July 1, 2011, on page 302, the section heading for Sec. 570.65 is corrected to read...

  1. QR CODES IN EDUCATION AND COMMUNICATION

    Directory of Open Access Journals (Sweden)

    Gurhan DURAK

    2016-04-01

    Full Text Available Technological advances have brought applications of innovations to education. Conventional education increasingly flourishes with new technologies, accompanied by more learner-active environments. In this continuum, there are learners who prefer self-learning. Traditional learning materials are giving way to attractive, motivating and technologically enhanced learning materials. The QR (Quick Response) Codes are one of these innovations. The aim of this study is to redesign a lesson unit supported with QR Codes and to get learners' views about the redesigned material. For this purpose, the redesigned lesson unit was delivered to 15 learners at Balıkesir University in the 2013-2014 academic year. The learners were asked to study the material. Learners who had smart phones and Internet access were chosen for the study. To provide sectional diversity, three groups were created, with learners from the Faculty of Education, the Faculty of Science and Literature and the Faculty of Engineering. In semi-structured interviews, the learners were asked about their prior knowledge of QR Codes, QR Codes' contribution to learning, difficulties with using QR Codes, and design issues. Descriptive data analysis was used in the study. The findings were interpreted on the basis of the Theory of Diffusion of Innovations and the Theory of Uses and Gratifications. The themes found were awareness of QR Codes, types of QR Codes and applications, contributions to learning, and proliferation of QR Codes. Generally, the learners participating in the study reported that they were aware of QR Codes, that they could use them, and that using QR Codes in education was useful. They also expressed that such features as visual elements, attractiveness and direct routing had a positive impact on learning. In addition, they generally mentioned that they did not have any difficulty using QR Codes, that they liked the design, and that the content should

  2. Interpretation, with respect to ASME code Case N-318, of limit moment and fatigue tests of lugs welded to pipe

    International Nuclear Information System (INIS)

    Foster, D.C.; Van Duyne, D.A.; Budlong, L.A.; Muffett, J.W.; Wais, E.A.; Streck, G.; Rodabaugh, E.C.

    1990-01-01

    Two nonmandatory ASME Code Cases have often been used in the evaluation of lugs on nuclear-power-plant piping systems. ASME Code Case N-318 provides guidance for evaluation of the design of rectangular cross-section attachments on Class 2 or 3 piping, and ASME Code Case N-122 provides guidance for evaluation of lugs on Class 1 piping. These Code Cases have been reviewed and evaluated based on available test data. The results indicate that the Code Cases are overly conservative. Recommendations for revisions to the Cases are presented which, if adopted, will reduce the overconservatism

  3. Situational Affordance - Appreciating human Interpretations in New Product Development

    DEFF Research Database (Denmark)

    Mathiasen, John Bang; Koch, Christian

    2009-01-01

    New Product Development (NPD) takes place within a web of connected actors who do not fully master the objects of the process, but rather interpret the affordance - the enabling and constraining framing by objects such as sketches, drawings, specifications, mock-ups and prototypes. The artic...

  4. Umberto Eco: Freedom and the limits of artwork interpretation

    Directory of Open Access Journals (Sweden)

    Kršić Lamija

    2017-01-01

    Full Text Available In his study The Open Work (1962), Umberto Eco commenced his long-term exploration of the nature of interpreting an artwork. The author problematizes the question of the freedom and the limits of artwork interpretation in the context of semiology and the theory of semiotics. Although contemporary debates on interpreting art focus on interpretation and the interpreter, that is to say, on the unconditional openness of the work, Eco reminds us of the other side of the interpreting process: the controllability of interpretative activity and the limitations on reading in new meanings. The paper points to the various theoretical bases on which Eco propagates the dialectics between the form and the openness of a work of art, in other words, between intentio operis and intentio lectoris. This position enables him to differentiate art from non-art, as well as to consider the interpretation of a work of art not as a self-sufficing process, but from the point of view of human experience.

  5. Physiological conditions for the effective interpretation of radiographic images

    International Nuclear Information System (INIS)

    Overington, I.

    1989-01-01

    A wide range of factors influence the ability of the human observer to perceive detail in images. Most of these factors are of some significance in interpretation of one or more types of radiographic image. Human observer performance may be conveniently categorized in terms of multiparametric threshold surfaces, suprathreshold visibility and observer variance. The general multiparametric trends of human threshold performance are discussed, together with the implications for visibility. The importance and implications of observer variance are then explored, with particular reference to their effects on search processes. Finally, attempts are made to highlight the implications of some of the factors on typical radiographic interpretation tasks and on the adequacy of certain types of phantom image used for equipment calibration. (author)

  6. From "Forest Fires" and "Hunting" to Disturbing "Habitats" and "Food Chains": Do Young Children Come Up with Any Ecological Interpretations of Human Interventions within a Forest?

    Science.gov (United States)

    Ergazaki, Marida; Andriotou, Eirini

    2010-01-01

    This study aims at highlighting young children's reasoning about human interventions within a forest ecosystem. Our focus is particularly set on whether preschoolers are able to come up with any basic ecological interpretations of human actions upon forest plants or animals and, if so, how. Conducting individual, semi-structured interviews with 70…

  7. Brain regions involved in observing and trying to interpret dog behaviour.

    Science.gov (United States)

    Desmet, Charlotte; van der Wiel, Alko; Brass, Marcel

    2017-01-01

    Humans and dogs have interacted for millennia. As a result, humans (and especially dog owners) sometimes try to interpret dog behaviour. While there is extensive research on the brain regions that are involved in mentalizing about other people's behaviour, surprisingly little is known about whether we use these same brain regions to mentalize about animal behaviour. In this fMRI study we investigate whether brain regions involved in mentalizing about human behaviour are also engaged when observing dog behaviour. Here we show that these brain regions are more engaged when observing dog behaviour that is difficult to interpret compared to dog behaviour that is easy to interpret. Interestingly, these results were not only obtained when participants were instructed to infer reasons for the behaviour but also when they passively viewed the behaviour, indicating that these brain regions are activated by spontaneous mentalizing processes.

  8. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  9. Altered expression of long non-coding RNAs during genotoxic stress-induced cell death in human glioma cells.

    Science.gov (United States)

    Liu, Qian; Sun, Shanquan; Yu, Wei; Jiang, Jin; Zhuo, Fei; Qiu, Guoping; Xu, Shiye; Jiang, Xuli

    2015-04-01

    Long non-coding RNAs (lncRNAs), a recently discovered class of non-coding genes, are transcribed throughout the genome. Emerging evidence suggests that lncRNAs may be involved in modulating various aspects of tumor biology, including regulating gene activity in response to external stimuli or DNA damage. No data are available regarding the expression of lncRNAs during genotoxic stress-induced apoptosis and/or necrosis in human glioma cells. In this study, we detected a change in the expression of specific candidate lncRNAs (neat1, GAS5, TUG1, BC200, Malat1, MEG3, MIR155HG, PAR5, and ST7OT1) during DNA damage-induced apoptosis in human glioma cell lines (U251 and U87) using doxorubicin (DOX) and resveratrol (RES). We also examined the expression pattern of these lncRNAs in human glioma cell lines under necrosis induced by an increased dose of DOX. Our results reveal that the lncRNA expression patterns are distinct between genotoxic stress-induced apoptosis and necrosis in human glioma cells. The sets of lncRNAs expressed during genotoxic stress-induced apoptosis were DNA-damaging-agent-specific. Generally, MEG3 and ST7OT1 are up-regulated in both cell lines under apoptosis induced using both agents. The induction of GAS5 is only clearly detected during DOX-induced apoptosis, whereas the up-regulation of neat1 and MIR155HG is only found during RES-induced apoptosis in both cell lines. However, TUG1, BC200 and MIR155HG are down-regulated when necrosis is induced using a high dose of DOX in both cell lines. In conclusion, our findings suggest that the distinct regulation of lncRNAs may be involved in the process of cellular defense against genotoxic agents.

  10. What Does It Take to Produce Interpretation? Informational, Peircean, and Code-Semiotic Views on Biosemiotics

    Energy Technology Data Exchange (ETDEWEB)

    Brier, Soren; Joslyn, Cliff A.

    2013-04-01

    This paper presents a critical analysis of code-semiotics, which we see as the latest attempt to create a paradigmatic foundation for solving the question of the emergence of life and consciousness. We view code-semiotics as an attempt to revise the empirical scientific Darwinian paradigm and to go beyond the complex systems, emergence, self-organization, and informational paradigms, as well as the selfish-gene theory of Dawkins and the Peircean pragmaticist semiotic theory built on the simultaneous types of evolution. As such it is a new and bold attempt to use semiotics to solve the problems created by the evolutionary paradigm's commitment to produce a theory of how to connect the two sides of the Cartesian dualistic view of physical reality and consciousness in a consistent way.

  11. The beta equilibrium, stability, and transport codes. Applications to the design of stellarators

    International Nuclear Information System (INIS)

    Bauer, F.; Garabedian, P.; Betancourt, O.; Wakatani, M.

    1987-01-01

    This book gives a detailed exposition of the available computational methods, documents the codes, and presents many examples showing how to run them and how to interpret the results. A listing of the recently completed BETA transport code is included. Current stellarator experiments are discussed, and the book contains significant applications to the design of major new stellarator experiments that are now in the planning stage

  12. PHENOMENOLOGICAL INTERPRETATION OF BIOETHICAL REALITY (THE SOCIOLOGICAL ANALYSIS)

    OpenAIRE

    Nikulina Marina Alekseevna

    2012-01-01

    The interpretation of social reality is a classical problem of sociology, whose solution aids the perception and understanding of social phenomena. In this article a phenomenological interpretation of bioethical reality is presented. Phenomenological sociology, being one of the promising directions of development of social knowledge, is characterized by the aspiration to show the «artificial», that is, designed, nature of bioethical reality and its semantic structure, and thus to «humanize» bioethical reality...

  13. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    Science.gov (United States)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for
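
    The genre distinction above comes down to how layer cloud fractions are combined into a column cloud cover. A minimal sketch of the three overlap rules follows; these are textbook forms (the maximum/random rule in its Geleyn-Hollingsworth shape), not the implementation of any particular radiation code in the study.

      import numpy as np

      def random_overlap(c):
          return 1.0 - np.prod(1.0 - np.asarray(c, float))

      def maximum_overlap(c):
          return float(np.max(c))

      def maximum_random_overlap(c):
          # Adjacent cloudy layers overlap maximally; layers separated by
          # clear sky combine randomly.
          c = np.asarray(c, float)
          clear = 1.0 - c[0]
          for k in range(1, len(c)):
              clear *= (1.0 - max(c[k], c[k - 1])) / (1.0 - min(c[k - 1], 1.0 - 1e-9))
          return 1.0 - clear

      layers = [0.2, 0.5, 0.0, 0.3]   # cloud fraction per model layer
      print(random_overlap(layers), maximum_overlap(layers),
            maximum_random_overlap(layers))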

  14. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  15. Multidomain analyses of a longitudinal human microbiome intestinal cleanout perturbation experiment.

    Directory of Open Access Journals (Sweden)

    Julia Fukuyama

    2017-08-01

    Full Text Available Our work focuses on the stability, resilience, and response to perturbation of the bacterial communities in the human gut. Informative flash flood-like disturbances that eliminate most gastrointestinal biomass can be induced using a clinically-relevant iso-osmotic agent. We designed and executed such a disturbance in human volunteers using a dense longitudinal sampling scheme extending before and after induced diarrhea. This experiment has enabled a careful multidomain analysis of a controlled perturbation of the human gut microbiota with a new level of resolution. These new longitudinal multidomain data were analyzed using recently developed statistical methods that demonstrate improvements over current practices. By imposing sparsity constraints we have enhanced the interpretability of the analyses and by employing a new adaptive generalized principal components analysis, incorporated modulated phylogenetic information and enhanced interpretation through scoring of the portions of the tree most influenced by the perturbation. Our analyses leverage the taxa-sample duality in the data to show how the gut microbiota recovers following this perturbation. Through a holistic approach that integrates phylogenetic, metagenomic and abundance information, we elucidate patterns of taxonomic and functional change that characterize the community recovery process across individuals. We provide complete code and illustrations of new sparse statistical methods for high-dimensional, longitudinal multidomain data that provide greater interpretability than existing methods.

  16. An Interpreter's Interpretation: Sign Language Interpreters' View of Musculoskeletal Disorders

    National Research Council Canada - National Science Library

    Johnson, William L

    2003-01-01

    Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...

  17. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  18. System verification and validation report for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the Verification and Validation Report for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to facilitate the interpretation of moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, can also be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code will then interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data. The primary purpose of this document is to provide the results of the system testing and the conclusions based thereon. The results of the testing process are documented in the body of the report. Appendix A gives the test plan, including the test procedures used in conducting the tests. Appendix B lists the input data required to conduct the tests, and Appendices C and D list the numerical results of the tests
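
    The interpolate-and-match idea is simple enough to sketch. Everything below - grid values, response surfaces, measured numbers - is hypothetical illustration, not TMAD's benchmark library; only the structure (interpolation over anomaly size and moisture content, then a search for the closest match to two detector readings) mirrors the description above.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      sizes = np.array([0.0, 5.0, 10.0, 20.0])   # anomaly size (arbitrary units)
      moist = np.array([0.0, 0.1, 0.2, 0.3])     # moisture content (fraction)
      # Hypothetical benchmark detector responses on the (size, moisture) grid.
      resp_a = np.outer(1 + 0.05 * sizes, 1 + 2.0 * moist)
      resp_b = np.outer(1 + 0.02 * sizes, 1 + 4.0 * moist)

      f_a = RegularGridInterpolator((sizes, moist), resp_a)
      f_b = RegularGridInterpolator((sizes, moist), resp_b)

      measured_a, measured_b = 1.45, 1.60        # readings from two detectors
      S, M = np.meshgrid(np.linspace(0, 20, 201), np.linspace(0, 0.3, 61),
                         indexing="ij")
      pts = np.column_stack([S.ravel(), M.ravel()])
      err = (f_a(pts) - measured_a) ** 2 + (f_b(pts) - measured_b) ** 2
      print("closest size/moisture match:", pts[np.argmin(err)])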

  19. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells.

    Directory of Open Access Journals (Sweden)

    Claudia V Filomatori

    2017-03-01

    Full Text Available The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitoes and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery; rather, it was found to be dependent on the selection of specific mutations in the viral 3'UTR. Dissecting the viral population and studying phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of

  20. The Age of Interpretation

    Directory of Open Access Journals (Sweden)

    Gianni Vattimo

    2013-01-01

    Full Text Available Gianni Vattimo, who is both a Catholic and a frequent critic of the Church, explores the surprising congruence between Christianity and hermeneutics in light of the dissolution of metaphysical truth. As in hermeneutics, Vattimo claims, interpretation is central to Christianity. Influenced by hermeneutics and borrowing largely from the Nietzschean and Heideggerian heritage, the Italian philosopher, who has been instrumental in promoting a nihilistic approach to Christianity, draws here on Nietzsche's writings on nihilism, which is not to be understood in a purely negative sense. Vattimo suggests that nihilism not only expands the Christian message of charity, but also transforms it into its endless human potential. In "The Age of Interpretation," the author shows that hermeneutical radicalism "reduces all reality to message," so that the opposition between facts and norms turns out to be misguided, for both are governed by the interpretative paradigms through which someone (always a concrete, historically situated someone) makes sense of them. Vattimo rejects some of the deplorable political consequences of hermeneutics and claims that traditional hermeneutics is in collusion with various political-ideological neutralizations.

  1. Cloning and expression of a cDNA coding for a human monocyte-derived plasminogen activator inhibitor.

    OpenAIRE

    Antalis, T M; Clark, M A; Barnes, T; Lehrbach, P R; Devine, P L; Schevzov, G; Goss, N H; Stephens, R W; Tolstoshev, P

    1988-01-01

    Human monocyte-derived plasminogen activator inhibitor (mPAI-2) was purified to homogeneity from the U937 cell line and partially sequenced. Oligonucleotide probes derived from this sequence were used to screen a cDNA library prepared from U937 cells. One positive clone was sequenced and contained most of the coding sequence as well as a long incomplete 3' untranslated region (1112 base pairs). This cDNA sequence was shown to encode mPAI-2 by hybrid-select translation. A cDNA clone encoding t...

  2. Adolescents’ Interpretation of the Concept of Wellness: A Qualitative Study

    Directory of Open Access Journals (Sweden)

    Ezihe Loretta Ahanonu

    2016-12-01

    Full Text Available Introduction: This study sought to explore and describe the interpretations which adolescents ascribe to the term wellness at a selected high school in the Western Cape Province of South Africa. Methods: A qualitative research design was utilized. Nine focus-group discussions were conducted among 58 adolescents. The sample was selected purposively, and the collected data were analyzed using open coding. Results: Findings reflected adolescents' interpretations of the term wellness in the realm of holistic well-being, transcending the mere absence of illness or sickness in the body. The interpretations given include: healthy living, which embraces eating enough nutritious food, exercising regularly and being actively involved in physical activities; practicing self-care habits such as personal hygiene and grooming; well-being of the mind (psychological, emotional); having a balanced personality and interpersonal processes; being focused and goal-directed; and spiritual well-being. Conclusion: It is imperative to consider adolescents' understandings of wellness when planning, designing, implementing and evaluating adolescent wellness programs.

  3. 21 CFR 206.10 - Code imprint required.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Code imprint required. 206.10 Section 206.10 Food...: GENERAL IMPRINTING OF SOLID ORAL DOSAGE FORM DRUG PRODUCTS FOR HUMAN USE § 206.10 Code imprint required... imprint that, in conjunction with the product's size, shape, and color, permits the unique identification...

  4. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and, since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of a digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
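
    As one concrete instance of waveform-style speech coding, the sketch below implements the continuous mu-law companding curve used in G.711-style telephony, which quantizes samples non-uniformly so quiet passages keep more resolution (a sketch of the companding law with a crude uniform quantizer, not a bit-exact G.711 codec).

      import numpy as np

      MU = 255.0

      def mulaw_encode(x):
          # x in [-1, 1]; compress quiet samples into more code levels.
          return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

      def mulaw_decode(y):
          # Exact inverse of the companding curve.
          return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

      x = np.linspace(-1.0, 1.0, 9)
      q = np.round(mulaw_encode(x) * 127) / 127   # 8-bit-style quantizer
      print(mulaw_decode(q))                      # reconstructed samples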

  5. Computer codes incorporating pre-equilibrium decay

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    After establishing the need to describe the high-energy particle spectrum which is evident in the experimental data, the various models used in its interpretation are presented. These include the following: a) the Cascade Model; b) the Fermi-Gas Relaxation Model; c) the Exciton Model; d) the Hybrid and Geometry-Dependent Model. The code description and the preparation of input data for STAPRE were presented (Dr. Strohmaier). A simulated output was produced for a given input, and comparison with experimental data substantiated the rather sophisticated treatment. (author)

  6. Radical behaviorist interpretation: Generating and evaluating an account of consumer behavior.

    Science.gov (United States)

    Foxall, G R

    1998-01-01

    This article considers an approach to the radical behaviorist interpretation of complex human social behavior. The chosen context is consumer psychology, a field currently dominated by cognitive models of purchase and consumption. The nature of operant interpretation is considered, and several levels of operant analysis of complex economic behavior in affluent marketing-oriented economies are developed. Empirical evidence for the interpretation is considered, and a case is made for the qualified use of the hypothetico-deductive method in the appraisal of operant interpretations of complex behaviors.

  7. Coding hazardous tree failures for a data management system

    Science.gov (United States)

    Lee A. Paine

    1978-01-01

    Codes for automatic data processing (ADP) are provided for hazardous tree failure data submitted on Report of Tree Failure forms. Definitions of data items and suggestions for interpreting ambiguously worded reports are also included. The manual is intended to insure the production of accurate and consistent punched ADP cards which are used in transfer of the data to...

  8. Coding strategies in number space : Memory requirements influence spatial-numerical associations

    NARCIS (Netherlands)

    Lindemann, Oliver; Abolafia, Juan M.; Pratt, Jay; Bekkering, Harold

    The tendency to respond faster with the left hand to relatively small numbers and faster with the right hand to relatively large numbers (spatial numerical association of response codes, SNARC effect) has been interpreted as an automatic association of spatial and numerical information. We

  9. Code for Internal Dosimetry (CINDY)

    International Nuclear Information System (INIS)

    Strenge, D.L.; Peloquin, R.A.; Sula, M.J.; Johnson, J.R.

    1990-10-01

    The CINDY (Code for Internal Dosimetry) software package has been developed by Pacific Northwest Laboratory to address Department of Energy (DOE) Order 5480.11 by providing the capability to calculate organ dose equivalents and effective dose equivalents using the approach of International Commission on Radiological Protection (ICRP) Publication 30. The code assists in the interpretation of bioassay data, evaluates committed and calendar-year doses from intake or bioassay measurement data, provides output consistent with revised DOE orders, is easy to use, and is generally applicable to DOE sites. Flexible biokinetic models are used to determine organ doses for annual, 50-year, calendar-year, or any other time-point dose necessary for chronic or acute intakes. CINDY is an interactive program that prompts the user to describe the cases to be analyzed and calculates the necessary results for the type of analysis being performed. Four types of analyses may be specified. 92 figs., 10 tabs
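
    The bookkeeping CINDY automates can be caricatured in a few lines: committed organ doses as intake times a nuclide- and pathway-specific dose coefficient, weighted into an effective dose equivalent. All coefficients below are placeholders, not ICRP 30 values, and the real code layers full biokinetic models on top of this.

      # Hypothetical coefficients for illustration only (not ICRP values).
      dose_per_bq = {"lung": 5.0e-9, "bone_surface": 1.2e-8}   # Sv per Bq intake
      w_T = {"lung": 0.12, "bone_surface": 0.03}               # tissue weights

      def committed_dose(intake_bq: float):
          organ_dose = {t: intake_bq * dose_per_bq[t] for t in dose_per_bq}
          effective = sum(w_T[t] * organ_dose[t] for t in organ_dose)
          return organ_dose, effective

      organs, h_e = committed_dose(1.0e4)   # 10 kBq acute intake
      print(organs, h_e)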

  10. Primate-specific spliced PMCHL RNAs are non-protein coding in human and macaque tissues

    Directory of Open Access Journals (Sweden)

    Delerue-Audegond Audrey

    2008-12-01

    Full Text Available Abstract Background Brain-expressed genes that were created in the primate lineage represent obvious candidates to investigate molecular mechanisms that contributed to neural reorganization and the emergence of new behavioural functions in Homo sapiens. PMCHL1 arose from retroposition of a pro-melanin-concentrating hormone (PMCH) antisense mRNA on the ancestral human chromosome 5p14 when platyrrhines and catarrhines diverged. Mutations before the divergence of Hylobatidae led to the creation of new exons, and finally PMCHL1 duplicated in an ancestor of hominids to generate PMCHL2 at the human chromosome 5q13. A complex pattern of spliced and unspliced PMCHL RNAs was found in human brain and testis. Results Several novel spliced PMCHL transcripts have been characterized in human testis and fetal brain, identifying an additional exon and novel splice sites. Sequencing of PMCHL genes in several non-human primates allowed us to carry out phylogenetic analyses revealing that the initial retroposition event took place within an intron of the brain cadherin (CDH12) gene, soon after platyrrhine/catarrhine divergence, i.e. 30–35 Mya, and was concomitant with the insertion of an AluSg element. Sequence analysis of the spliced PMCHL transcripts identified only short ORFs of less than 300 bp, with low (VMCH-p8 and protein variants) or no evolutionary conservation. Western blot analyses of human and macaque tissues expressing PMCHL RNA failed to reveal any protein corresponding to VMCH-p8 and protein variants encoded by spliced transcripts. Conclusion Our present results improve our knowledge of the gene structure and evolutionary history of the primate-specific chimeric PMCHL genes. These genes produce multiple spliced transcripts bearing short, non-conserved and apparently non-translated ORFs that may function as mRNA-like non-coding RNAs.

  11. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    Science.gov (United States)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon number splitting (PNS) attack in quantum cryptography systems with phase coding. It is shown that this attack, as well as the structural equations describing it, differs physically for phase encoding from the analogous attack applied to polarization coding. As far as we know, in all works to date experimental data for phase coding have in practice been processed using formulas derived for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of results, especially where the secrecy criterion of quantum cryptography is concerned.
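
    For context, the standard vacuum-plus-weak decoy-state analysis bounds the single-photon yield and error rate from the measured gains and QBERs of signal and decoy pulses, and feeds them into a GLLP-style key-rate estimate. The sketch below implements that textbook bound with illustrative numbers; it deliberately does not include the phase-coding corrections that are the subject of the paper.

```python
import math

def h2(x):
    """Binary entropy function."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

# Illustrative measured quantities (not taken from the paper):
mu, nu = 0.5, 0.1             # signal / decoy mean photon numbers
Q_mu, Q_nu = 1.01e-3, 2.1e-4  # gains of signal and decoy pulses
E_mu, E_nu = 0.02, 0.03       # QBERs of signal and decoy pulses
Y0 = 1.0e-5                   # vacuum yield (dark counts)
f_ec, q = 1.2, 0.5            # error-correction inefficiency, sifting factor

# Lower bound on the single-photon yield (vacuum + weak decoy method)
Y1 = (mu / (mu * nu - nu ** 2)) * (
    Q_nu * math.exp(nu)
    - Q_mu * math.exp(mu) * nu ** 2 / mu ** 2
    - (mu ** 2 - nu ** 2) / mu ** 2 * Y0
)
Q1 = Y1 * mu * math.exp(-mu)  # single-photon gain
e1 = (E_nu * Q_nu * math.exp(nu) - 0.5 * Y0) / (Y1 * nu)  # single-photon QBER bound

R = q * (-Q_mu * f_ec * h2(E_mu) + Q1 * (1.0 - h2(e1)))  # GLLP-style key rate
print(f"secret key rate per pulse >= {R:.3e}")
```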

  12. Situational Affordance - Appreciating human Interpretations in New Product Development

    DEFF Research Database (Denmark)

    Mathiasen, John Bang; Koch, Christian

    2009-01-01

    New Product Development (NPD) takes place within a web of connected actors who do not fully master the objects of the process, but rather interpret the affordance - the enabling and constraining framing by objects such as sketches, drawings, specifications, mock-ups and prototypes. The article draws on critical comments and extends Hutchby's concept of affordance. Even if Hutchby does not study an artefact as a micro-level phenomenon from an engineering perspective and only considers the consumption of an artefact, we use him to study the development of artefacts in NPD. Hutchby... to negotiate technical and non-technical aspects alike, while Sierra uses a CAD-system to support joint development crossing organisational boundaries. The cases share features such as triadic relations: customer's problem, product designers and artefact. For example, a wheelchair is presented to Sierra...

  13. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term ''code package'' is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a ''code package'' consists of written material--reports, instructions, flow charts, listings of data, and other useful material--and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package.

  14. 42 CFR 414.40 - Coding and ancillary policies.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Coding and ancillary policies. 414.40 Section 414.40 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Practitioners § 414.40 Coding and ancillary policies. (a) General rule. CMS establishes uniform national...

  15. PREREM: an interactive data preprocessing code for INREM II. Part I: user's manual. Part II: code structure

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, M.T.; Fields, D.E.

    1981-05-01

    PREREM is an interactive computer code developed as a data preprocessor for the INREM-II (Killough, Dunning, and Pleasant, 1978a) internal dose program. PREREM is intended to provide easy access to current and self-consistent nuclear decay and radionuclide-specific metabolic data sets. Provision is made for revision of metabolic data, and the code is intended for both production and research applications. Documentation for the code is in two parts. Part I is a user's manual which emphasizes interpretation of program prompts and choice of user input. Part II stresses internal structure and flow of program control and is intended to assist the researcher who wishes to revise or modify the code or add to its capabilities. PREREM is written for execution on a Digital Equipment Corporation PDP-10 system, and much of the code will require revision before it can be run on other machines. The source program length is 950 lines (116 blocks) and the computer core required for execution is 212 K bytes. The user must also have sufficient file space for metabolic and S-factor data sets. Further, 64 blocks of 100 K bytes each are required for the nuclear decay data file. Computer storage space must also be available for any output files produced during PREREM execution. 9 refs., 8 tabs.

  16. Interpretive Media Study and Interpretive Social Science.

    Science.gov (United States)

    Carragee, Kevin M.

    1990-01-01

    Defines the major theoretical influences on interpretive approaches in mass communication, examines the central concepts of these perspectives, and provides a critique of these approaches. States that the adoption of interpretive approaches in mass communication has ignored varied critiques of interpretive social science. Suggests that critical…

  17. Interpreters, Interpreting, and the Study of Bilingualism.

    Science.gov (United States)

    Valdes, Guadalupe; Angelelli, Claudia

    2003-01-01

    Discusses research on interpreting focused specifically on issues raised by this literature about the nature of bilingualism. Suggests research carried out on interpreting--while primarily produced with a professional audience in mind and concerned with improving the practice of interpreting--provides valuable insights about complex aspects of…

  18. Human growth hormone-related iatrogenic Creutzfeldt-Jakob disease: Search for a genetic susceptibility by analysis of the PRNP coding region

    Energy Technology Data Exchange (ETDEWEB)

    Jaegly, A.; Boussin, F.; Deslys, J.P. [CEA/CRSSA/DSV/DPTE, Fontenay-aux-Roses (France)] [and others]

    1995-05-20

    The human PRNP gene encoding PrP is located on chromosome 20 and consists of two exons and a single intron. The open reading frame fits entirely within the second exon. Genetic studies indicate that all of the familial and several sporadic forms of TSSEs are associated with mutations in the PRNP 759-bp coding region. Moreover, homozygosity at codon 129, a locus harboring a polymorphism among the general population, was proposed as a genetic susceptibility marker for both sporadic and iatrogenic CJD. To assess whether additional genetic predisposition markers exist in the PRNP gene, the authors sequenced the PRNP coding region of 17 of the 32 French patients who developed hGH-related CJD.

  19. Hybrid Decision Making: When Interpretable Models Collaborate With Black-Box Models

    OpenAIRE

    Wang, Tong

    2018-01-01

    Interpretable machine learning models have received increasing interest in recent years, especially in domains where humans are involved in the decision-making process. However, the possible loss of the task performance for gaining interpretability is often inevitable. This performance downgrade puts practitioners in a dilemma of choosing between a top-performing black-box model with no explanations and an interpretable model with unsatisfying task performance. In this work, we propose a nove...

  20. The verification basis of the PM-ALPHA code

    Energy Technology Data Exchange (ETDEWEB)

    Theofanous, T.G.; Yuen, W.W.; Angelini, S. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    An overall verification approach for the PM-ALPHA code is presented and implemented. The approach consists of a stepwise testing procedure focused principally on the multifield aspects of the premixing phenomenon. Breakup is treated empirically, but it is shown that, through reasonable choices of the breakup parameters, consistent interpretations of existing integral premixing experiments can be obtained. The present capability is deemed adequate for bounding energetics evaluations. (author)

  1. Patterns of genic intolerance of rare copy number variation in 59,898 human exomes

    Science.gov (United States)

    Ruderfer, Douglas M.; Hamamsy, Tymor; Lek, Monkol; Karczewski, Konrad J.; Kavanagh, David; Samocha, Kaitlin E.; Daly, Mark J.; MacArthur, Daniel G.; Fromer, Menachem; Purcell, Shaun M.

    2016-01-01

    Copy number variation (CNV) impacting protein-coding genes contributes significantly to human diversity and disease. Here we characterized the rates and properties of rare genic CNV; a measure of intolerance to CNVs demonstrated moderate correlation with measures of genic constraint based on single-nucleotide variation (SNV) and was independently correlated with measures of evolutionary conservation. For individuals with schizophrenia, genes impacted by CNVs were more intolerant than in controls. ExAC CNV data constitute a critical component of an integrated database spanning the spectrum of human genetic variation, aiding the interpretation of personal genomes as well as population-based disease studies. These data are freely available for download and visualization online. PMID:27533299

  2. Purposes of double taxation treaties and interpretation of beneficial owner concept in Ukraine

    Directory of Open Access Journals (Sweden)

    Pavlo Selezen

    2017-10-01

    Full Text Available The term “beneficial owner” has been interpreted by Ukrainian courts concerning the application of double taxation treaties' provisions since the adoption of the Tax Code of Ukraine in 2010. The changing nature of the beneficial owner concept, its importance as an instrument for counteracting treaty shopping, and the necessity of its proper interpretation in the Ukrainian reality are the main factors that have a strong impact on the development of court practice concerning beneficial ownership. The article focuses on the prevention of tax avoidance as one of the purposes of double taxation treaties and its role in the interpretation of the term “beneficial owner”. The analysis is based on the practice of the Supreme Administrative Court of Ukraine on interpretation of the relevant provisions of the Convention between the Government of Ukraine and the Government of Switzerland on Avoidance of Double Taxation with respect to Taxes on Income and Capital of 30 October 2000.

  3. NHRIC (National Health Related Items Code)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Health Related Items Code (NHRIC) is a system for identification and numbering of marketed device packages that is compatible with other numbering...

  4. A code for simulation of human failure events in nuclear power plants: SIMPROC

    International Nuclear Information System (INIS)

    Gil, Jesus; Fernandez, Ivan; Murcia, Santiago; Gomez, Javier; Marrao, Hugo; Queral, Cesar; Exposito, Antonio; Rodriguez, Gabriel; Ibanez, Luisa; Hortal, Javier; Izquierdo, Jose M.; Sanchez, Miguel; Melendez, Enrique

    2011-01-01

    Over the past years, many Nuclear Power Plant organizations have performed Probabilistic Safety Assessments (PSA) to identify and understand key plant vulnerabilities. As part of enhancing PSA quality, Human Reliability Analysis (HRA) is essential to a realistic evaluation of safety and of potential facility weaknesses. Moreover, it has to be noted that HRA continues to be a large source of uncertainty in PSAs. Within their current joint collaborative activities, Indizen, Universidad Politecnica de Madrid and Consejo de Seguridad Nuclear have developed the so-called SIMulator of PROCedures (SIMPROC), a tool that simulates events related to human actions and is able to interact with a plant simulation model. The tool helps the analyst to quantify the importance of human actions in the final plant state. The main goal of SIMPROC is to check that the Emergency Operating Procedures used by the operating crew lead the plant to a safe shutdown state. Currently SIMPROC is coupled with the SCAIS software package, but the tool is flexible enough to be linked to other plant simulation codes. SIMPROC-SCAIS applications are shown in the present article to illustrate the tool's performance. The applications were developed in the framework of the Nuclear Energy Agency project on Safety Margin Assessment and Applications (SM2A). First, an introductory example was performed to obtain the damage domain boundary of a selected sequence from a SBLOCA. Second, the damage domain area of a selected sequence from a loss of Component Cooling Water with a subsequent seal LOCA was calculated. SIMPROC simulates the corresponding human actions in both cases. The results achieved show how the system can be adapted to a wide range of purposes, such as Dynamic Event Tree delineation, Emergency Operating Procedures analysis and damage domain search.
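
    A damage-domain search of the kind described reduces to scanning a grid of uncertain parameters (for instance, operator action time versus break size), running the coupled simulation at each grid point, and recording where the damage criterion is exceeded. A minimal sketch follows; the plant model is a hypothetical stand-in for a coupled SIMPROC/plant-simulator run, not part of the actual tool.

```python
# Minimal sketch of a damage-domain scan; the surrogate plant model below
# is a hypothetical stand-in for a coupled SIMPROC/plant-simulator run.

DAMAGE_LIMIT_K = 1477.0  # illustrative peak cladding temperature limit

def peak_clad_temp(break_size_cm2, operator_delay_s):
    """Placeholder surrogate: larger breaks and later actions run hotter."""
    return 600.0 + 8.0 * break_size_cm2 + 0.5 * operator_delay_s

def damage_domain(break_sizes, delays):
    """Return the set of (break size, delay) points exceeding the limit."""
    return {
        (b, d)
        for b in break_sizes
        for d in delays
        if peak_clad_temp(b, d) > DAMAGE_LIMIT_K
    }

domain = damage_domain(break_sizes=range(10, 101, 10), delays=range(0, 1801, 300))
print(f"{len(domain)} grid points inside the damage domain")
```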

  5. Forest Interpreter's Primer on Fire Management.

    Science.gov (United States)

    Zelker, Thomas M.

    Specifically prepared for the use of Forest Service field-based interpreters of the management, protection, and use of forest and range resources and the associated human, cultural, and natural history found on these lands, this book is the second in a series of six primers on the multiple use of forest and range resources. Following an…

  6. 42 CFR 93.107 - Rule of interpretation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Rule of interpretation. 93.107 Section 93.107 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH ASSESSMENTS AND HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON...

  7. The Human Behaviour-Change Project: harnessing the power of artificial intelligence and machine learning for evidence synthesis and interpretation.

    Science.gov (United States)

    Michie, Susan; Thomas, James; Johnston, Marie; Aonghusa, Pol Mac; Shawe-Taylor, John; Kelly, Michael P; Deleris, Léa A; Finnerty, Ailbhe N; Marques, Marta M; Norris, Emma; O'Mara-Eves, Alison; West, Robert

    2017-10-18

    Behaviour change is key to addressing both the challenges facing human health and wellbeing and to promoting the uptake of research findings in health policy and practice. We need to make better use of the vast amount of accumulating evidence from behaviour change intervention (BCI) evaluations and promote the uptake of that evidence into a wide range of contexts. The scale and complexity of the task of synthesising and interpreting this evidence, and increasing evidence timeliness and accessibility, will require increased computer support. The Human Behaviour-Change Project (HBCP) will use Artificial Intelligence and Machine Learning to (i) develop and evaluate a 'Knowledge System' that automatically extracts, synthesises and interprets findings from BCI evaluation reports to generate new insights about behaviour change and improve prediction of intervention effectiveness and (ii) allow users, such as practitioners, policy makers and researchers, to easily and efficiently query the system to get answers to variants of the question 'What works, compared with what, how well, with what exposure, with what behaviours (for how long), for whom, in what settings and why?'. The HBCP will: a) develop an ontology of BCI evaluations and their reports linking effect sizes for given target behaviours with intervention content and delivery and mechanisms of action, as moderated by exposure, populations and settings; b) develop and train an automated feature extraction system to annotate BCI evaluation reports using this ontology; c) develop and train machine learning and reasoning algorithms to use the annotated BCI evaluation reports to predict effect sizes for particular combinations of behaviours, interventions, populations and settings; d) build user and machine interfaces for interrogating and updating the knowledge base; and e) evaluate all the above in terms of performance and utility. The HBCP aims to revolutionise our ability to synthesise, interpret and deliver
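
    Step (c) in the list above is, in machine-learning terms, a supervised regression from ontology-annotated report features to observed effect sizes. A minimal sketch of that task follows; the feature names, annotations and effect sizes are invented for illustration and do not come from the HBCP ontology.

```python
# Minimal sketch of step (c): predicting an effect size from annotated
# report features; feature names, values and outcomes are invented.
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction import DictVectorizer

reports = [  # ontology-style annotations of BCI evaluation reports (made up)
    {"technique": "goal_setting", "delivery": "face_to_face", "setting": "clinic"},
    {"technique": "self_monitoring", "delivery": "app", "setting": "community"},
    {"technique": "goal_setting", "delivery": "app", "setting": "workplace"},
]
effect_sizes = [0.31, 0.18, 0.24]  # invented standardized effect sizes

vec = DictVectorizer()
X = vec.fit_transform(reports)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, effect_sizes)

query = [{"technique": "self_monitoring", "delivery": "face_to_face", "setting": "clinic"}]
print(model.predict(vec.transform(query)))  # predicted effect size for a new scenario
```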

  8. An Impact of Social Code of Conduct as a Determinant of Ethical Conduct on Human Resources Practices from the Aspect of Strategic Management

    Directory of Open Access Journals (Sweden)

    Harun Demirkaya

    2013-09-01

    Full Text Available This study analyzes the social code of conduct as a determinant of ethical conduct in human resources practices in terms of strategic management and investigates how influential the factor of social conduct is in human resources managers' decisions and actions. As a result, human resources managers were found to have a positive attitude towards employees with advanced social conduct, and such employees were confirmed to have an advantage over their peers in many organizational practices, beginning with the recruitment process.

  9. Code of Federal Regulations Title 21

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database contains the most recent revision from the Government Printing Office (GPO) of the Code of Federal Regulations (CFR) Title 21 - Food and Drugs.

  10. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make its preparation easier. The script is based on a new input file format with specific cards divided into two blocks, mandatory cards and optional cards. It includes a pre-processing step that identifies possible errors within the input file, as well as an image generator for the specific problem based on the Python interpreter. (Author)
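
    The card-based pre-processing described here can be illustrated in a few lines: declare which cards are mandatory and which optional, then flag unknown or missing cards before any solver runs. A minimal sketch, assuming invented card names rather than AZTRAN's actual input format (the real script is written in D; Python is used here purely for illustration).

```python
# Minimal sketch of a card-style input pre-processor; the card names are
# invented placeholders, not AZTRAN's actual input format.
MANDATORY = {"GEOMETRY", "MATERIALS", "QUADRATURE"}
OPTIONAL = {"OUTPUT", "PLOT"}

def preprocess(lines):
    """Return (cards, errors): parsed card blocks and validation messages."""
    cards, errors = {}, []
    for n, line in enumerate(lines, 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, rest = line.partition(" ")
        if name not in MANDATORY | OPTIONAL:
            errors.append(f"line {n}: unknown card '{name}'")
        cards.setdefault(name, []).append(rest)
    for card in MANDATORY - cards.keys():
        errors.append(f"missing mandatory card '{card}'")
    return cards, errors

cards, errors = preprocess(["GEOMETRY slab 10", "MATERIALS fuel 1.0", "PLOT flux"])
print(errors)  # -> ["missing mandatory card 'QUADRATURE'"]
```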

  11. Quantitative Interpretation of X-ray Absorption Near Edge Structure. Continuation Progress Report for 1st Year, 9/15/98-9/14/99

    International Nuclear Information System (INIS)

    Rehr, John J.; Bare, Simon; Stocht, Joachim

    1999-01-01

    OAK-B135. This paper proposes to develop two industrial research collaborations to further develop the FEFF8 x-ray spectroscopy code to achieve a quantitative interpretation of x-ray absorption near edge structure (XANES) in materials of interest in energy research: (a) quantitative interpretation of XANES for heterogeneous catalysts and disordered materials; and (b) quantitative interpretation of white-lines in XANES. The paper also outlines significant results achieved during the first grant year.

  12. A statistical framework to predict functional non-coding regions in the human genome through integrated analysis of annotation data.

    Science.gov (United States)

    Lu, Qiongshi; Hu, Yiming; Sun, Jiehuan; Cheng, Yuwei; Cheung, Kei-Hoi; Zhao, Hongyu

    2015-05-27

    Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome either through computational predictions, such as genomic conservation, or high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning using 22 computational and experimental annotations thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. The ability of predicting functional regions as well as its generalizable statistical framework makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu.
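
    The unsupervised step can be pictured as a two-component mixture model: each position is assumed to be drawn from either a functional or a non-functional component, and the posterior probability of the functional component is the score. The sketch below fits a Gaussian mixture over a made-up two-column annotation matrix; GenoCanyon itself integrates 22 annotations with its own statistical model, so this illustrates the idea rather than the published method.

```python
# Minimal sketch of unsupervised functional scoring with a 2-component
# mixture model; the annotation matrix below is invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# rows = genomic positions, columns = annotation scores (e.g. conservation, DNase)
nonfunctional = rng.normal(0.0, 1.0, size=(500, 2))
functional = rng.normal(2.0, 1.0, size=(100, 2))
X = np.vstack([nonfunctional, functional])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
# identify the 'functional' component as the one with the larger mean
func = int(np.argmax(gmm.means_.sum(axis=1)))
posterior = gmm.predict_proba(X)[:, func]  # per-position functional potential
print(posterior[:5].round(3))
```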

  13. Interpretation of the CABRI-RAFT RB1 and RB2 tests through detailed data evaluation and PAPAS-2S code analysis

    International Nuclear Information System (INIS)

    Fukano, Yoshitaka; Sato, Ikken

    2001-08-01

    The CABRI-RAFT RB1 and RB2 tests aimed at studying the impact of fuel pin failure under an overpower condition leading to fuel melting. Using a special technique, a combination of through-cladding failure and fuel melting was realized. In the RB1 test, fuel ejection was prevented under a limited fuel melting condition. On the other hand, significant fuel melting was applied in the RB2 test so as to obtain fuel ejection, thereby providing information on the fuel ejection behavior. Interpretation of these tests through detailed experimental data evaluation and PAPAS-2S code analysis is performed in this study. Through this study, it is indicated that molten fuel ejection can be prevented with low smear density fuel, as long as fuel melting is not extensive, for a slit-type cladding defect. Fuel ejection becomes possible in the case of significant fuel melting with a very thin solid fuel shell surrounding the molten fuel cavity. However, the rapidity of fuel ejection with low smear density fuel is less pronounced than with high smear density fuel. It is also confirmed that there is considerable DN-precursor release into the coolant flow before fuel ejection. This result is very useful for the evaluation of anomaly detection based on DN signal observation. (author)

  14. Promoter Analysis Reveals Globally Differential Regulation of Human Long Non-Coding RNA and Protein-Coding Genes

    KAUST Repository

    Alam, Tanvir; Medvedeva, Yulia A.; Jia, Hui; Brown, James B.; Lipovich, Leonard; Bajic, Vladimir B.

    2014-01-01

    raise the possibility that, given the historical reliance on protein-coding gene catalogs to define the chromatin states of active promoters, a revision of these chromatin signature profiles to incorporate expressed lncRNA genes is warranted

  15. When Content Matters: The Role of Processing Code in Tactile Display Design.

    Science.gov (United States)

    Ferris, Thomas K; Sarter, Nadine

    2010-01-01

    The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.

  16. Extending R packages to support 64-bit compiled code: An illustration with spam64 and GIMMS NDVI3g data

    Science.gov (United States)

    Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard

    2017-07-01

    Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.

  17. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    Science.gov (United States)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.
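
    Discontinuity attributes of the kind mentioned above follow a simple formula. Windowed semblance, for instance, is the squared sum of the traces divided by the number of traces times the sum of squared samples, so coherent reflectors score near 1 and faulted zones score lower. A minimal numpy sketch over a synthetic 2D section; the window sizes and data are illustrative, and this is plain semblance, not Hale's fault-likelihood attribute.

```python
# Minimal semblance sketch on a 2D section (time x traces); window sizes
# and the synthetic section are illustrative.
import numpy as np

def semblance(section, half_traces=2, half_time=4):
    """Sliding-window semblance: ~1 over coherent reflectors, lower at faults."""
    nt, nx = section.shape
    out = np.ones((nt, nx))
    for t in range(half_time, nt - half_time):
        for x in range(half_traces, nx - half_traces):
            win = section[t - half_time:t + half_time + 1,
                          x - half_traces:x + half_traces + 1]
            num = (win.sum(axis=1) ** 2).sum()        # squared stack of traces
            den = win.shape[1] * (win ** 2).sum()     # N * energy of the window
            out[t, x] = num / den if den > 0 else 1.0
    return out

# synthetic flat reflectors with a vertical fault throw at trace 32
rng = np.random.default_rng(1)
sec = np.sin(np.linspace(0, 20, 128))[:, None] * np.ones((128, 64))
sec[:, 32:] = np.roll(sec[:, 32:], 5, axis=0)  # offset reflectors across the fault
sec += 0.05 * rng.normal(size=sec.shape)
print(semblance(sec)[:, 30:35].mean(axis=0).round(2))  # semblance dips at the fault
```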

  18. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
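
    Divisive normalization as invoked here has a canonical form: the normalized value of an item is its raw value divided by a saturation constant plus a weighted summary of recently observed values, so the same item is valued less after a high-value history. A minimal sketch, with illustrative parameter names and values rather than fitted quantities from the study:

```python
# Canonical divisive-normalization form; sigma and the context weight w
# are illustrative, not fitted parameters from the study.
def normalized_value(v, recent_values, sigma=1.0, w=0.5):
    """v / (sigma + w * mean of recently observed values)."""
    context = sum(recent_values) / len(recent_values) if recent_values else 0.0
    return v / (sigma + w * context)

# the same item is valued less after a run of high-value observations
print(normalized_value(10.0, recent_values=[2.0, 3.0, 1.0]))     # lean history
print(normalized_value(10.0, recent_values=[20.0, 30.0, 10.0]))  # adapted down
```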

  19. Do domestic dogs interpret pointing as a command?

    Science.gov (United States)

    Scheider, Linda; Kaminski, Juliane; Call, Josep; Tomasello, Michael

    2013-05-01

    Domestic dogs comprehend human gestural communication flexibly, particularly the pointing gesture. Here, we examine whether dogs interpret pointing informatively, that is, as simply providing information, or rather as a command, for example, ordering them to move to a particular location. In the first study a human pointed toward an empty cup. In one manipulation, the dog either knew or did not know that the designated cup was empty (and that the other cup actually contained the food). In another manipulation, the human (as authority) either did or did not remain in the room after pointing. Dogs ignored the human's gesture if they had better information, irrespective of the authority's presence. In the second study, we varied the level of authority of the person pointing. Sometimes this person was an adult, and sometimes a young child. Dogs followed children's pointing just as frequently as they followed adults' pointing (and ignored the dishonest pointing of both), suggesting that the level of authority did not affect their behavior. Taken together these studies suggest that dogs do not see pointing as an imperative command ordering them to a particular location. It is still not totally clear, however, if they interpret it as informative or in some other way.

  20. Exploring item and higher order factor structure with the Schmid-Leiman solution: syntax codes for SPSS and SAS.

    Science.gov (United States)

    Wolff, Hans-Georg; Preising, Katja

    2005-02-01

    To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
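
    Because the transformation is just a couple of matrix products, it is also easy to reproduce outside SPSS and SAS. A minimal numpy sketch, assuming a first-order pattern matrix L1 (variables by group factors) and second-order loadings L2 (group factors by general factor); the loading values are illustrative:

```python
# Schmid-Leiman transformation: orthogonalizes a higher-order factor
# solution. The loading matrices below are illustrative, not from the article.
import numpy as np

L1 = np.array([[0.7, 0.0],   # variables x first-order (group) factors
               [0.6, 0.0],
               [0.0, 0.8],
               [0.0, 0.5]])
L2 = np.array([[0.6],        # first-order factors x second-order (general) factor
               [0.7]])

general = L1 @ L2                         # direct loadings on the general factor
u = np.sqrt(1.0 - (L2 ** 2).sum(axis=1))  # residual sd of each first-order factor
group = L1 * u                            # residualized group-factor loadings
sl_solution = np.hstack([general, group])
print(sl_solution.round(3))
```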

  1. MouSensor: A Versatile Genetic Platform to Create Super Sniffer Mice for Studying Human Odor Coding

    Directory of Open Access Journals (Sweden)

    Charlotte D’Hulst

    2016-07-01

    Full Text Available Typically, ∼0.1% of the total number of olfactory sensory neurons (OSNs in the main olfactory epithelium express the same odorant receptor (OR in a singular fashion and their axons coalesce into homotypic glomeruli in the olfactory bulb. Here, we have dramatically increased the total number of OSNs expressing specific cloned OR coding sequences by multimerizing a 21-bp sequence encompassing the predicted homeodomain binding site sequence, TAATGA, known to be essential in OR gene choice. Singular gene choice is maintained in these “MouSensors.” In vivo synaptopHluorin imaging of odor-induced responses by known M71 ligands shows functional glomerular activation in an M71 MouSensor. Moreover, a behavioral avoidance task demonstrates that specific odor detection thresholds are significantly decreased in multiple transgenic lines, expressing mouse or human ORs. We have developed a versatile platform to study gene choice and axon identity, to create biosensors with great translational potential, and to finally decode human olfaction.

  2. The rhesus macaque is three times as diverse but more closely equivalent in damaging coding variation as compared to the human

    Directory of Open Access Journals (Sweden)

    Yuan Qiaoping

    2012-06-01

    Full Text Available Abstract Background As a model organism in biomedicine, the rhesus macaque (Macaca mulatta is the most widely used nonhuman primate. Although a draft genome sequence was completed in 2007, there has been no systematic genome-wide comparison of genetic variation of this species to humans. Comparative analysis of functional and nonfunctional diversity in this highly abundant and adaptable non-human primate could inform its use as a model for human biology, and could reveal how variation in population history and size alters patterns and levels of sequence variation in primates. Results We sequenced the mRNA transcriptome and H3K4me3-marked DNA regions in hippocampus from 14 humans and 14 rhesus macaques. Using equivalent methodology and sampling spaces, we identified 462,802 macaque SNPs, most of which were novel and disproportionately located in the functionally important genomic regions we had targeted in the sequencing. At least one SNP was identified in each of 16,797 annotated macaque genes. Accuracy of macaque SNP identification was conservatively estimated to be >90%. Comparative analyses using SNPs equivalently identified in the two species revealed that rhesus macaque has approximately three times higher SNP density and average nucleotide diversity as compared to the human. Based on this level of diversity, the effective population size of the rhesus macaque is approximately 80,000 which contrasts with an effective population size of less than 10,000 for humans. Across five categories of genomic regions, intergenic regions had the highest SNP density and average nucleotide diversity and CDS (coding sequences the lowest, in both humans and macaques. Although there are more coding SNPs (cSNPs per individual in macaques than in humans, the ratio of dN/dS is significantly lower in the macaque. Furthermore, the number of damaging nonsynonymous cSNPs (have damaging effects on protein functions from PolyPhen-2 prediction in the macaque is more

  3. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  4. How do you interpret a billion primary care records?

    Directory of Open Access Journals (Sweden)

    Martin Heaven

    2017-04-01

    To establish this, we explored just over 1 billion unique Read-coded records generated between 1999 and 2015 by GP practices participating in the provision of anonymised records to SAIL, aligning, filtering and summarising the data in a series of observational exercises to generate hypotheses related to the capture and recording of the data. Results: A fascinating journey through 1 billion pieces of GP practice-generated information, embarked upon to aid interpretation of our Supporting People results and providing insights into the patterns of recording within GP data.

  5. Long non-coding RNAs: Mechanism of action and functional utility

    OpenAIRE

    Bhat, Shakil Ahmad; Ahmad, Syed Mudasir; Mumtaz, Peerzada Tajamul; Malik, Abrar Ahad; Dar, Mashooq Ahmad; Urwat, Uneeb; Shah, Riaz Ahmad; Ganai, Nazir Ahmad

    2016-01-01

    Recent RNA sequencing studies have revealed that most of the human genome is transcribed, but very little of the total transcriptome has the ability to encode proteins. Long non-coding RNAs (lncRNAs) are non-coding transcripts longer than 200 nucleotides. Other members of the non-coding genome include microRNAs (miRNAs), small regulatory RNAs and other short RNAs. Most lncRNAs are poorly annotated. Recent recognition of lncRNAs highlights their effects in many biological ...

  6. Cloning and expression of a cDNA coding for a human monocyte-derived plasminogen activator inhibitor

    International Nuclear Information System (INIS)

    Antalis, T.M.; Clark, M.A.; Barnes, T.; Lehrbach, P.R.; Devine, P.L.; Schevzov, G.; Goss, N.H.; Stephens, R.W.; Tolstoshev, P.

    1988-01-01

    Human monocyte-derived plasminogen activator inhibitor (mPAI-2) was purified to homogeneity from the U937 cell line and partially sequenced. Oligonucleotide probes derived from this sequence were used to screen a cDNA library prepared from U937 cells. One positive clone was sequenced and contained most of the coding sequence as well as a long incomplete 3' untranslated region (1112 base pairs). This cDNA sequence was shown to encode mPAI-2 by hybrid-select translation. A cDNA clone encoding the remainder of the mPAI-2 mRNA was obtained by primer extension of U937 poly(A)+ RNA using a probe complementary to the mPAI-2 coding region. The coding sequence for mPAI-2 was placed under the control of the λ P_L promoter, and the protein expressed in Escherichia coli formed a complex with urokinase that could be detected immunologically. By nucleotide sequence analysis, mPAI-2 cDNA encodes a protein containing 415 amino acids with a predicted unglycosylated M_r of 46,543. The predicted amino acid sequence of mPAI-2 is very similar to placental PAI-2 and shows extensive homology with members of the serine protease inhibitor (serpin) superfamily. mPAI-2 was found to be more homologous to ovalbumin (37%) than the endothelial plasminogen activator inhibitor, PAI-1 (26%). The 3' untranslated region of the mPAI-2 cDNA contains a putative regulatory sequence that has been associated with the inflammatory mediators

  7. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
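
    The rescoring idea can be sketched compactly: given the primary auto-coder's confidence for each candidate code and a model of P(code i assigned | code j assigned), iteratively pull each confidence toward the co-occurrence evidence from the currently accepted set. Everything below (the ICD-10-PCS-style codes, the probabilities and the update rule) is an invented stand-in for the paper's trained model.

```python
# Simplified sketch of confidence rescoring with code co-occurrence;
# codes, probabilities and the update rule are invented stand-ins.
base = {"0DB60ZZ": 0.80, "0DJ08ZZ": 0.55, "5A1935Z": 0.30}  # primary auto-coder scores
# P(row code assigned | column code assigned), from a trained model (made up)
cooccur = {
    ("0DJ08ZZ", "0DB60ZZ"): 0.70,
    ("5A1935Z", "0DB60ZZ"): 0.10,
}

def rescore(scores, cooccur, threshold=0.5, alpha=0.3, iterations=5):
    """Blend each score with co-occurrence evidence from accepted codes."""
    scores = dict(scores)
    for _ in range(iterations):
        accepted = {c for c, s in scores.items() if s >= threshold}
        for code in scores:
            # default to the code's own score when a pair is unmodeled
            context = [cooccur.get((code, other), scores[code])
                       for other in accepted if other != code]
            if context:
                scores[code] = (1 - alpha) * scores[code] + alpha * (
                    sum(context) / len(context))
    return scores

print(rescore(base, cooccur))
```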

  8. Deontological aspects of the nursing profession: understanding the code of ethics

    Directory of Open Access Journals (Sweden)

    Terezinha Nunes da Silva

    Full Text Available ABSTRACT Objective: to investigate nursing professionals' understanding concerning the Code of Ethics; to assess the relevance of the Code of Ethics of the nursing profession and its use in practice; to identify how problem-solving is performed when facing ethical dilemmas in professional practice. Method: exploratory descriptive study, conducted with 34 (thirty-four) nursing professionals from a teaching hospital in João Pessoa, PB - Brazil. Results: four thematic categories emerged: conception of professional ethics in nursing practice; interpretations of ethics in the practice of care; use of the Code of Ethics in the professional practice; strategies for solving ethical issues in the professional practice. Final considerations: some of the nursing professionals comprehend the meaning coherently; others have a limited comprehension, based on jargon. Therefore, a deeper understanding of the text contained in this code is necessary so that it can be applied in practice, aiming to provide quality care that is, above all, ethical and legal.

  9. Deontological aspects of the nursing profession: understanding the code of ethics.

    Science.gov (United States)

    Silva, Terezinha Nunes da; Freire, Maria Eliane Moreira; Vasconcelos, Monica Ferreira de; Silva Junior, Sergio Vital da; Silva, Wilton José de Carvalho; Araújo, Patrícia da Silva; Eloy, Allan Victor Assis

    2018-01-01

    Objective: to investigate nursing professionals' understanding concerning the Code of Ethics; to assess the relevance of the Code of Ethics of the nursing profession and its use in practice; to identify how problem-solving is performed when facing ethical dilemmas in professional practice. Method: exploratory descriptive study, conducted with 34 (thirty-four) nursing professionals from a teaching hospital in João Pessoa, PB - Brazil. Results: four thematic categories emerged: conception of professional ethics in nursing practice; interpretations of ethics in the practice of care; use of the Code of Ethics in the professional practice; strategies for solving ethical issues in the professional practice. Final considerations: some of the nursing professionals comprehend the meaning coherently; others have a limited comprehension, based on jargon. Therefore, a deeper understanding of the text contained in this code is necessary so that it can be applied in practice, aiming to provide quality care that is, above all, ethical and legal.

  10. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Marchand, Olivier; Georgenthum, Vincent; Petit, Marc; Udagawa, Yutaka; Nagase, Fumihisa; Sugiyama, Tomoyuki; Arffman, Asko; Cherubini, Marco; Dostal, Martin; Klouzal, Jan; Geelhood, Kenneth; Gorzel, Andreas; Holt, Lars; Jernkvist, Lars Olof; Khvostov, Grigori; Maertens, Dietmar; Spykman, Gerold; Nakajima, Tetsuo; Nechaeva, Olga; Panka, Istvan; Rey Gayo, Jose M.; Sagrado Garcia, Inmaculada C.; Shin, An-Dong; Sonnenburg, Heinz Guenther; Umidova, Zeynab; Zhang, Jinzhao; Voglewede, John

    2013-01-01

    Reactivity-initiated accident (RIA) fuel rod codes have been developed for a significant period of time and they all have shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions the influence of which on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan and tests performed or planned in CABRI reactor in Cadarache, France. Emphasis was on assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  11. The small RNA content of human sperm reveals pseudogene-derived piRNAs complementary to protein-coding genes

    Science.gov (United States)

    Pantano, Lorena; Jodar, Meritxell; Bak, Mads; Ballescà, Josep Lluís; Tommerup, Niels; Oliva, Rafael; Vavouri, Tanya

    2015-01-01

    At the end of mammalian sperm development, sperm cells expel most of their cytoplasm and dispose of the majority of their RNA. Yet, hundreds of RNA molecules remain in mature sperm. The biological significance of the vast majority of these molecules is unclear. To better understand the processes that generate sperm small RNAs and what roles they may have, we sequenced and characterized the small RNA content of sperm samples from two human fertile individuals. We detected 182 microRNAs, some of which are highly abundant. The most abundant microRNA in sperm is miR-1246 with predicted targets among sperm-specific genes. The most abundant class of small noncoding RNAs in sperm are PIWI-interacting RNAs (piRNAs). Surprisingly, we found that human sperm cells contain piRNAs processed from pseudogenes. Clusters of piRNAs from human testes contain pseudogenes transcribed in the antisense strand and processed into small RNAs. Several human protein-coding genes contain antisense predicted targets of pseudogene-derived piRNAs in the male germline and these piRNAs are still found in mature sperm. Our study provides the most extensive data set and annotation of human sperm small RNAs to date and is a resource for further functional studies on the roles of sperm small RNAs. In addition, we propose that some of the pseudogene-derived human piRNAs may regulate expression of their parent gene in the male germline. PMID:25904136

  12. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the whole code in itself parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups were obtained.
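
    The sort-and-sweep neighborhood search itself is compact: sort bounding-box endpoints along one axis and sweep once, so that only pairs whose intervals overlap are tested further. A minimal sketch of the textbook 1D broad phase for axis-aligned boxes follows; the paper's novel variant is not reproduced here.

```python
# Minimal sort-and-sweep broad phase along one axis; boxes are
# (id, min_x, max_x) tuples. This is the textbook algorithm, not the
# paper's novel variant.
def sweep_pairs(boxes):
    events = sorted(boxes, key=lambda b: b[1])  # sort by interval start
    active, pairs = [], []
    for bid, lo, hi in events:
        active = [(aid, ahi) for aid, ahi in active if ahi >= lo]  # expire old boxes
        pairs += [(aid, bid) for aid, _ in active]  # overlap candidates on this axis
        active.append((bid, hi))
    return pairs

boxes = [("A", 0.0, 2.0), ("B", 1.5, 3.0), ("C", 5.0, 6.0)]
print(sweep_pairs(boxes))  # -> [('A', 'B')]
```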

  13. Continuous Materiality: Through a Hierarchy of Computational Codes

    Directory of Open Access Journals (Sweden)

    Jichen Zhu

    2008-01-01

    Full Text Available The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.

  14. The "Wow! signal" of the terrestrial genetic code

    Science.gov (United States)

    shCherbak, Vladimir I.; Makukov, Maxim A.

    2013-05-01

    embedding the signal into the code and possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to pass non-biological information.

  15. De novo assembly of a haplotype-resolved human genome.

    Science.gov (United States)

    Cao, Hongzhi; Wu, Honglong; Luo, Ruibang; Huang, Shujia; Sun, Yuhui; Tong, Xin; Xie, Yinlong; Liu, Binghang; Yang, Hailong; Zheng, Hancheng; Li, Jian; Li, Bo; Wang, Yu; Yang, Fang; Sun, Peng; Liu, Siyang; Gao, Peng; Huang, Haodong; Sun, Jing; Chen, Dan; He, Guangzhu; Huang, Weihua; Huang, Zheng; Li, Yue; Tellier, Laurent C A M; Liu, Xiao; Feng, Qiang; Xu, Xun; Zhang, Xiuqing; Bolund, Lars; Krogh, Anders; Kristiansen, Karsten; Drmanac, Radoje; Drmanac, Snezana; Nielsen, Rasmus; Li, Songgang; Wang, Jian; Yang, Huanming; Li, Yingrui; Wong, Gane Ka-Shu; Wang, Jun

    2015-06-01

    The human genome is diploid, and knowledge of the variants on each chromosome is important for the interpretation of genomic information. Here we report the assembly of a haplotype-resolved diploid genome without using a reference genome. Our pipeline relies on fosmid pooling together with whole-genome shotgun strategies, based solely on next-generation sequencing and hierarchical assembly methods. We applied our sequencing method to the genome of an Asian individual and generated a 5.15-Gb assembled genome with a haplotype N50 of 484 kb. Our analysis identified previously undetected indels and 7.49 Mb of novel coding sequences that could not be aligned to the human reference genome, which include at least six predicted genes. This haplotype-resolved genome represents the most complete de novo human genome assembly to date. Application of our approach to identify individual haplotype differences should aid in translating genotypes to phenotypes for the development of personalized medicine.
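
    The haplotype N50 quoted above is a standard assembly statistic: sort block lengths in descending order and report the length at which the running total first reaches half of the total assembled length. A minimal sketch, with illustrative block lengths:

```python
# N50: length of the block at which the cumulative sum of descending
# lengths first reaches half of the total assembled length.
def n50(lengths):
    total, running = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(n50([484_000, 300_000, 120_000, 50_000]))  # illustrative block lengths
```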

  16. Mirage, a food chain transfer and dosimetric impact code in relation with atmospheric and liquid dispersion codes

    International Nuclear Information System (INIS)

    Van Dorpe, F.; Jourdain, F.

    2006-01-01

    Full text: The numerical code M.I.R.A.G.E. (Module of Radiological impact calculations on the Environment due to accidental or chronic nuclear releases through Aqueous and Gas media) has been developed to simulate radionuclide transfer in the biosphere and food chains, as well as the dosimetric impact on man, after accidental or chronic releases into the environment by nuclear installations. The originality of M.I.R.A.G.E. is to propose a single tool chained downstream of various atmospheric and liquid dispersion codes. The code M.I.R.A.G.E. is a series of modules which make it possible to carry out evaluations of transfers in food chains and of human dose impact. Currently, M.I.R.A.G.E. is chained with a Gaussian atmospheric dispersion code, H.A.R.M.A.T.T.A.N. (Cea), a 3D Lagrangian atmospheric dispersion code named M.I.N.E.R.V.E.-S.P.R.A.Y. (Aria Technology) and a 3D groundwater transfer code named M.A.R.T.H.E. (B.R.G.M.). M.I.R.A.G.E. uses concentration or activity result files as initial input data for its calculations. The application first calculates the concentrations in the various compartments of the environment (soils, plants, animals). The results are given in the form of concentration and dose maps, and also for a particular place called a reference group for dosimetric impact (such as a village or a specific population group located around a nuclear installation). The input and output data of M.I.R.A.G.E. can have geographic coordinates and are thus readable by a G.I.S. M.I.R.A.G.E. is an open system to which it is easy to chain dispersion codes other than those currently used. Calculations uncoupled from dispersion calculations are also possible by manual entry of the dispersion data (contamination of a groundwater table, a particular value at a point, etc.). M.I.R.A.G.E. takes into account soil deposition and resuspension phenomena, and transfers to plants and animals (choice of agricultural parameters, types of plants and animals, etc

  17. TUG1: a pivotal oncogenic long non-coding RNA of human cancers.

    Science.gov (United States)

    Li, Zheng; Shen, Jianxiong; Chan, Matthew T V; Wu, William Ka Kei

    2016-08-01

    Long non-coding RNAs (lncRNAs) are a group of non-coding transcripts greater than 200 nucleotides in length. An increasing number of studies has shown that lncRNAs play important roles in diverse cellular processes, including proliferation, differentiation, apoptosis, invasion and chromatin remodelling. In this regard, deregulation of lncRNAs has been documented in human cancers. TUG1 is a recently identified oncogenic lncRNA whose aberrant upregulation has been detected in different types of cancer, including B-cell malignancies, oesophageal squamous cell carcinoma, bladder cancer, hepatocellular carcinoma and osteosarcoma. In these malignancies, knockdown of TUG1 has been shown to suppress cell proliferation, invasion and/or colony formation. Interestingly, TUG1 has been found to be downregulated in non-small cell lung carcinoma, indicative of its tissue-specific function in tumourigenesis. Pertinent to clinical practice, TUG1 may act as a prognostic biomarker for tumours. In this review, we summarize current knowledge concerning the role of TUG1 in tumour progression and discuss the mechanisms associated with it. © 2016 John Wiley & Sons Ltd.

  18. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

    The successful completion of the Human Genome Project has opened the door not only to developing personalized medicine and cures for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may make the 21st century a century of the biological sciences as well. According to the central dogma of biology, genetic codons, in conjunction with tRNA, play a key role in translating RNA bases into the sequence of amino acids of a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation processes used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its potential applications, including relevance to the origin of life, will be presented.

  19. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank P.W. Shor and A.......M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes....

  20. Ozone exposure and pulmonary effects in panel and human clinical studies: Considerations for design and interpretation.

    Science.gov (United States)

    Rohr, Annette C

    2018-04-01

    A wealth of literature exists regarding the pulmonary effects of ozone, a photochemical pollutant produced by the reaction of nitrogen oxides and volatile organic precursors in the presence of sunlight. This paper focuses on epidemiological panel studies and human clinical studies of ozone exposure, and discusses issues specific to this pollutant that may influence study design and interpretation as well as other, broader considerations relevant to ozone-health research. The issues are discussed using examples drawn from the wider literature. The recent panel and clinical literature is also reviewed. Health outcomes considered include lung function, symptoms, and pulmonary inflammation. Issues discussed include adversity, reversibility, adaptation, variability in the ozone exposure metric used and health outcomes evaluated, co-pollutants in panel studies, the influence of temperature in panel studies, and multiple comparisons. Improvements in and standardization of panel study approaches are recommended to facilitate comparisons between studies as well as meta-analyses. Additional clinical studies at or near the current National Ambient Air Quality Standard (NAAQS) of 70 ppb are recommended, as are clinical studies in sensitive subpopulations such as asthmatics. The pulmonary health impacts of ozone exposure have been well documented using both epidemiological and chamber study designs. However, there are a number of specific methodological and related issues that should be considered when interpreting the results of these studies and planning additional research, including the standardization of exposure and health metrics to facilitate comparisons among studies.

  1. NASA space radiation transport code development consortium

    International Nuclear Information System (INIS)

    Townsend, L. W.

    2005-01-01

    Recently, NASA established a consortium involving the Univ. of Tennessee (lead institution), the Univ. of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking. (authors)

  2. Can bilingual two-year-olds code-switch?

    Science.gov (United States)

    Lanza, E

    1992-10-01

    Sociolinguists have investigated language mixing as code-switching in the speech of bilingual children three years old and older. Language mixing by bilingual two-year-olds, however, has generally been interpreted in the child language literature as a sign of the child's lack of language differentiation. The present study applies perspectives from sociolinguistics to investigate the language mixing of a bilingual two-year-old acquiring Norwegian and English simultaneously in Norway. Monthly recordings of the child's spontaneous speech in interactions with her parents were made from the age of 2;0 to 2;7. An investigation into the formal aspects of the child's mixing and the context of the mixing reveals that she does differentiate her language use in contextually sensitive ways, hence that she can code-switch. This investigation stresses the need to examine more carefully the roles of dominance and context in the language mixing of young bilingual children.

  3. Speech rhythms and multiplexed oscillatory sensory coding in the human brain.

    Directory of Open Access Journals (Sweden)

    Joachim Gross

    2013-12-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations.

  4. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    Science.gov (United States)

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
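
    Entrainment of the kind reported here is often quantified with a phase-locking measure between the speech envelope and band-limited neural signals. The sketch below shows one common approach (Hilbert-transform phases and the mean resultant vector length); it is a generic illustration, not the study's actual analysis pipeline.

        # Illustrative phase-locking computation between a speech envelope and a
        # band-limited neural signal; not the exact pipeline of the study above.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def phase_locking_value(x, y, lo, hi, fs):
            """Mean resultant length of the phase difference in a frequency band."""
            px = np.angle(hilbert(bandpass(x, lo, hi, fs)))
            py = np.angle(hilbert(bandpass(y, lo, hi, fs)))
            return np.abs(np.mean(np.exp(1j * (px - py))))

        fs = 200.0
        t = np.arange(0, 10, 1 / fs)
        envelope = np.sin(2 * np.pi * 5 * t)                  # toy 5 Hz speech envelope
        meg = np.sin(2 * np.pi * 5 * t + 0.3) + 0.5 * np.random.randn(t.size)
        print(phase_locking_value(envelope, meg, 3, 7, fs))   # near 1 for entrained signals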

  5. Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results

    Science.gov (United States)

    Jones, Scott M.

    2015-01-01

    Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al. (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow. Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial …
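
    The 'simple radial equilibrium' relation mentioned above is commonly written as a balance between the radial pressure gradient and the centripetal acceleration of the swirling flow (neglecting radial velocity and streamline curvature):

        \frac{1}{\rho}\,\frac{\partial p}{\partial r} \;=\; \frac{v_{\theta}^{2}}{r}

    Integrated across the span together with the blade-row exit conditions, this relation yields the spanwise property variations referred to in the abstract.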

  6. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for the automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in a department of radiology. This program was written in the Foxbase language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consist of 11 files, one for the organ codes and the others for the pathology codes. The organ code is obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one, which are simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file is chosen automatically. The proper pathology code is obtained in a fashion similar to the organ code selection. An example of an obtained ACR code is '131.3661'. This procedure is reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code is achieved by the same program, and incorporation of this program into other data-processing programs is possible. This program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used for the automation of routine work in a department of radiology.
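
    The two-stage lookup the record describes (organ code first, then a pathology file selected by the organ code's first digit) amounts to a pair of dictionary lookups. The entries below are invented placeholders; only the composed code format, e.g. '131.3661', comes from the record.

        # Sketch of the two-stage ACR lookup described above. The tiny
        # dictionaries here are hypothetical, not the actual ACR tables.
        ORGAN_CODES = {"chest": "131", "skull": "11"}        # hypothetical entries
        PATHOLOGY_FILES = {"1": {"pneumonia": "3661"}}       # keyed by first digit

        def acr_code(organ_name, pathology_name):
            organ = ORGAN_CODES[organ_name]
            pathology = PATHOLOGY_FILES[organ[0]][pathology_name]
            return f"{organ}.{pathology}"

        print(acr_code("chest", "pneumonia"))  # e.g. '131.3661'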

  7. From Cool Cash to Coded Chaos

    DEFF Research Database (Denmark)

    Rennison, Betina Wolfgang

    … of Denmark (called New Wage), this paper theorizes this complexity in terms of Niklas Luhmann's systems theory. It identifies four wholly different `codes' of communication: legal, economic, pedagogical and intimate. Each of them shapes the phenomena of `pay', the construal of the employee and the form of management differently. In this chaos of codes the managerial challenge is to take a second order position in order to strategically manage the communication that manages management itself. Key words: Management; personnel management; human-relations; pay-system; communication; system-theory; discursive …

  8. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.

  9. Hearing the voices of service user researchers in collaborative qualitative data analysis: the case for multiple coding.

    Science.gov (United States)

    Sweeney, Angela; Greenwood, Kathryn E; Williams, Sally; Wykes, Til; Rose, Diana S

    2013-12-01

    Health research is frequently conducted in multi-disciplinary teams, with these teams increasingly including service user researchers. Whilst it is common for service user researchers to be involved in data collection--most typically interviewing other service users--it is less common for service user researchers to be involved in data analysis and interpretation. This means that a unique and significant perspective on the data is absent. This study aims to use an empirical report of a study on Cognitive Behavioural Therapy for psychosis (CBTp) to demonstrate the value of multiple coding in enabling service users' voices to be heard in team-based qualitative data analysis. The CBTp study employed multiple coding to analyse service users' discussions of CBTp from the perspectives of a service user researcher, a clinical researcher and a psychology assistant. Multiple coding was selected to enable multiple perspectives to analyse and interpret data, to understand and explore differences and to build multi-disciplinary consensus. Multiple coding enabled the team to understand where our views were commensurate and incommensurate and to discuss and debate differences. Through the process of multiple coding, we were able to build strong consensus about the data from multiple perspectives, including that of the service user researcher. Multiple coding is an important method for understanding and exploring multiple perspectives on data and building team consensus. This can be contrasted with inter-rater reliability, which is only appropriate in limited circumstances. We conclude that multiple coding is an appropriate and important means of hearing service users' voices in qualitative data analysis. © 2012 John Wiley & Sons Ltd.

  10. Effect Coding as a Mechanism for Improving the Accuracy of Measuring Students Who Self-Identify with More than One Race

    Science.gov (United States)

    Mayhew, Matthew J.; Simonoff, Jeffrey S.

    2015-01-01

    The purpose of this paper is to describe effect coding as an alternative quantitative practice for analyzing and interpreting categorical, multi-raced independent variables in higher education research. Not only may effect coding enable researchers to get closer to respondents' original intentions, it allows for more accurate analyses of all race…
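
    A minimal numerical sketch of effect coding (also called sum coding): indicators take 1 for the respondent's category, 0 otherwise, and the reference category is coded -1 on every indicator, so estimates are interpreted against the grand mean rather than against the reference group. Extending the scheme to students who select more than one race (e.g., a 1 in each selected column) is one possible reading of the paper's proposal, not a detail taken from it.

        # Minimal illustration of effect coding for a categorical variable.
        import numpy as np

        def effect_code(values, categories):
            """Return an (n, k-1) effect-coded matrix; the last category is the reference."""
            k = len(categories)
            out = np.zeros((len(values), k - 1))
            for row, v in enumerate(values):
                j = categories.index(v)
                if j == k - 1:          # reference category: -1 on every indicator
                    out[row, :] = -1.0
                else:
                    out[row, j] = 1.0
            return out

        cats = ["A", "B", "C"]          # placeholder category labels
        print(effect_code(["A", "B", "C", "A"], cats))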

  11. Interpretive schemata of human resource management during economic crisis: Case of producers for automotive industry

    Directory of Open Access Journals (Sweden)

    Ana Arzenšek

    2010-05-01

    This qualitative research investigates the interpretive schemata of producers for the automotive industry during the economic crisis in Slovenia. Specifically, the interest was in their human resource management (HRM) schemata in the current crisis. We explain the dynamics of schema change on the basis of Piaget's theory of adaptation. In-depth interviews with CEOs, directors of HRM and leaders of trade unions served as the primary data source. In addition, a comparative analysis of social responsibility as reported in the companies' annual reports in 2007 and 2008 was made. Firstly, the results demonstrate the strategic role of HRM in the chosen companies. Secondly, the present economic crisis does not serve as a factor of schema change. In conclusion, participants mostly assimilate new information from the environment to fit their HRM schemata. The results show that the major factor behind both the assimilation and the lack of schema change is the occurrence of a crisis in Slovenian companies producing for the automotive industry in the nineties.

  12. Long non-coding RNAs: Mechanism of action and functional utility

    Directory of Open Access Journals (Sweden)

    Shakil Ahmad Bhat

    2016-10-01

    Recent RNA sequencing studies have revealed that most of the human genome is transcribed, but very little of the total transcriptome has the ability to encode proteins. Long non-coding RNAs (lncRNAs) are non-coding transcripts longer than 200 nucleotides. Members of the non-coding genome include microRNAs (miRNAs), small regulatory RNAs and other short RNAs. Most lncRNAs are poorly annotated. Recent recognition of lncRNAs highlights their effects in many biological and pathological processes. LncRNAs are dysfunctional in a variety of human diseases, ranging from cancerous to non-cancerous diseases. Characterization of these lncRNA genes and their modes of action may allow their use for diagnosis, monitoring of progression and targeted therapies in various diseases. In this review, we summarize the functional perspectives as well as the mechanisms of action of lncRNAs. Keywords: LncRNA, X-chromosome inactivation, Genome imprinting, Transcription regulation, Cancer, Immunity

  13. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.
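
    The mixed pattern of results can be compressed into a small summary structure: each test's observed preference and the hypothesis whose prediction it matched, exactly as reported above.

        # Observed preferences in the three tests and the hypothesis each supports,
        # as reported in the abstract ('short' = key correct after the 2-s sample,
        # 'long' = key correct after the 6-s and 18-s samples).
        RESULTS = {
            "retention interval":   ("long (default) key", "single-code/default"),
            "no sample":            ("short (2-s) key",    "multiple-coding"),
            "3.5-s generalization": ("long key",           "single-code/default"),
        }
        for test, (preference, supports) in RESULTS.items():
            print(f"{test}: preferred the {preference} -> supports {supports}")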

  14. Retrotransposons and non-protein coding RNAs

    DEFF Research Database (Denmark)

    Mourier, Tobias; Willerslev, Eske

    2009-01-01

    … does not merely represent spurious transcription. We review examples of functional RNAs transcribed from retrotransposons, and address the collection of non-protein coding RNAs derived from transposable element sequences, including numerous human microRNAs and the neuronal BC RNAs. Finally, we review …

  15. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and water-moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure drops or flowrates, which may or may not vary with time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  16. Cloning and expression of a cDNA coding for a human monocyte-derived plasminogen activator inhibitor.

    Science.gov (United States)

    Antalis, T M; Clark, M A; Barnes, T; Lehrbach, P R; Devine, P L; Schevzov, G; Goss, N H; Stephens, R W; Tolstoshev, P

    1988-02-01

    Human monocyte-derived plasminogen activator inhibitor (mPAI-2) was purified to homogeneity from the U937 cell line and partially sequenced. Oligonucleotide probes derived from this sequence were used to screen a cDNA library prepared from U937 cells. One positive clone was sequenced and contained most of the coding sequence as well as a long incomplete 3' untranslated region (1112 base pairs). This cDNA sequence was shown to encode mPAI-2 by hybrid-select translation. A cDNA clone encoding the remainder of the mPAI-2 mRNA was obtained by primer extension of U937 poly(A)+ RNA using a probe complementary to the mPAI-2 coding region. The coding sequence for mPAI-2 was placed under the control of the lambda PL promoter, and the protein expressed in Escherichia coli formed a complex with urokinase that could be detected immunologically. By nucleotide sequence analysis, mPAI-2 cDNA encodes a protein containing 415 amino acids with a predicted unglycosylated Mr of 46,543. The predicted amino acid sequence of mPAI-2 is very similar to placental PAI-2 (3 amino acid differences) and shows extensive homology with members of the serine protease inhibitor (serpin) superfamily. mPAI-2 was found to be more homologous to ovalbumin (37%) than the endothelial plasminogen activator inhibitor, PAI-1 (26%). Like ovalbumin, mPAI-2 appears to have no typical amino-terminal signal sequence. The 3' untranslated region of the mPAI-2 cDNA contains a putative regulatory sequence that has been associated with the inflammatory mediators.

  17. A two dimensional code (R,Z) for nuclear reactor analysis and its application to the UAR-RI reactor

    International Nuclear Information System (INIS)

    Bishay, S.T.; Mikhail, I.F.I.; Gaafar, M.A.; Michaiel, M.L.; Nassar, S.F.

    1988-01-01

    A detailed study is given of fuel consumption in completely reflected cylindrical reactors. A two group, two dimensional (r,z) code is developed to carry out the required procedure. The code is applied to the UAR-RI reactor and the results are found to be in complete agreement with the experimental observations and with the theoretical interpretations. 7 fig., 12 tab

  18. An object-oriented framework for magnetic-fusion modeling and analysis codes

    International Nuclear Information System (INIS)

    Cohen, R H; Yang, T Y Brian.

    1999-01-01

    The magnetic-fusion energy (MFE) program, like many other scientific and engineering activities, has a need to efficiently develop complex modeling codes which combine detailed models of components to make an integrated model of a device, as well as a rich supply of legacy code that could provide the component models. There is also growing recognition in many technical fields of the desirability of steerable software: computer programs whose functionality can be changed by the user as the program runs. This project had as its goals the development of two key pieces of infrastructure that are needed to combine existing code modules, written mainly in Fortran, into flexible, steerable, object-oriented integrated modeling codes for magnetic-fusion applications. These two pieces are (1) a set of tools to facilitate the interfacing of Fortran code with a steerable object-oriented framework (which we have chosen to base on Python, an object-oriented interpreted language), and (2) a skeleton for the integrated modeling code which defines the relationships between the modules. The first of these activities obviously has immediate applicability to a spectrum of projects; the second is more focussed on the MFE application, but may be of value as an example for other applications

  19. Application of heterogeneous method for the interpretation of exponential experiments

    International Nuclear Information System (INIS)

    Birkhoff, G.; Bondar, L.

    1977-01-01

    The present paper gives a brief review of work executed mainly during 1967 and 1968 on the application of heterogeneous methods to the interpretation of exponential experiments with ORGEL-type lattices (lattices of natural uranium cluster elements with organic coolants, moderated by heavy water). In the frame of this work, a heterogeneous computer program in (r,γ) geometry was written, based on the NORDHEIM method, using a uniform moderator, three energy groups, and monopole and dipole sources. This code is especially adapted to regular square lattices in a cylindrical tank. Full use of lattice symmetry was made to reduce the numerical burden of the theory. A further reduction was obtained by introducing a group-averaged extrapolation distance at the external boundary. Channel parameters were evaluated by the PINOCCHIO code. Comparisons of calculated and measured thermal neutron fluxes showed good agreement. Equivalence of heterogeneous and homogeneous theory was found for lattices comprising a minimum of 32, 24 and 16 fuel elements for under-, well- and over-moderated lattices, respectively. Heterogeneous calculations of high-leakage lattices suffered from the lack of good methods for computing the axial and radial streaming parameters. Interpretation of buckling measurements in the subcritical facility EXPO already requires a more accurate evaluation of the streaming effects than we made. The potential of heterogeneous theory in the field of exponential experiments is thought to be limited by the precision with which the streaming parameters can be calculated

  20. [Bioethical analysis of the Brazilian Dentistry Code of Ethics].

    Science.gov (United States)

    Pyrrho, Monique; do Prado, Mauro Machado; Cordón, Jorge; Garrafa, Volnei

    2009-01-01

    The Brazilian Dentistry Code of Ethics (DCE), Resolution CFO-71 of May 2006, is an instrument created to guide dentists' behavior in relation to the ethical aspects of professional practice. The purpose of the study is to analyze the above-mentioned code by comparing the deontological and bioethical focuses. To do so, an interpretative analysis of the code and of twelve selected texts was made. Six of the texts were about bioethics and six about deontology, and the analysis was made through the methodological classification of the context units, textual paragraphs and items of the code into the following categories: the referentials of bioethical principlism--autonomy, beneficence, nonmaleficence and justice--, technical aspects, and moral virtues related to the profession. Together the four principles represented 22.9%, 39.8% and 54.2% of the content of the DCE, of the deontological texts and of the bioethical texts, respectively. In the DCE, 42% of the items referred to virtues, 40.2% were associated with technical aspects and just 22.9% referred to principles. The virtues related to the professionals and the technical aspects together amounted to 70.1% of the code. Instead of focusing on the patient as the subject of the process of oral health care, the DCE focuses on the professional, and is predominantly oriented toward legalistic and corporate aspects.

  1. AlleleCoder: a PERL script for coding codominant polymorphism data for PCA analysis

    Science.gov (United States)

    A useful biological interpretation of diploid heterozygotes is in terms of the dose of the common allele (0, 1 or 2 copies). We have developed a PERL script that converts FASTA files into coded spreadsheets suitable for Principal Component Analysis (PCA). In combination with R and R Commander, two- ...
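
    The dose interpretation the record mentions can be sketched in a few lines: each diploid genotype is recoded as the number of copies of the common allele (0, 1 or 2), yielding a numeric matrix that PCA can consume. This is a generic illustration of the idea, not the AlleleCoder script itself.

        # Sketch of common-allele dose coding for one locus. Genotypes are
        # placeholders; real input would come from FASTA-derived calls.
        from collections import Counter

        def dose_code(genotypes):
            """genotypes: list of 2-letter strings, e.g. ['AA', 'AG', 'GG']."""
            alleles = Counter("".join(genotypes))
            common = alleles.most_common(1)[0][0]      # the major allele
            return [g.count(common) for g in genotypes]

        print(dose_code(["AA", "AG", "GG", "AA"]))     # [2, 1, 0, 2]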

  2. [The interpretation and integration of traditional Chinese phytotherapy into Western-type medicine with the possession of knowledge of the human genome].

    Science.gov (United States)

    Blázovics, Anna

    2018-05-01

    The terminology of traditional Chinese medicine (TCM) is hardly interpretable in the context of the human genome; therefore, the human genome program has drawn attention to the Western practice of medicine in China. In the last two decades, several important steps toward bringing together traditional Chinese and Western medicine could be observed in China. The Chinese government supports the creation of information databases for research at the molecular biology level, to detect associations among gene expression, signal transduction pathways and protein-protein interactions, and to assess the effects and effectiveness of the bioactive components of Chinese drugs. The values of TCM are becoming more and more important for Western medicine as well, because molecular biological therapies have not fulfilled their promise, e.g., in tumor therapy. Orv Hetil. 2018; 159(18): 696-702.

  3. MouSensor: A Versatile Genetic Platform to Create Super Sniffer Mice for Studying Human Odor Coding.

    Science.gov (United States)

    D'Hulst, Charlotte; Mina, Raena B; Gershon, Zachary; Jamet, Sophie; Cerullo, Antonio; Tomoiaga, Delia; Bai, Li; Belluscio, Leonardo; Rogers, Matthew E; Sirotin, Yevgeniy; Feinstein, Paul

    2016-07-26

    Typically, ∼0.1% of the total number of olfactory sensory neurons (OSNs) in the main olfactory epithelium express the same odorant receptor (OR) in a singular fashion and their axons coalesce into homotypic glomeruli in the olfactory bulb. Here, we have dramatically increased the total number of OSNs expressing specific cloned OR coding sequences by multimerizing a 21-bp sequence encompassing the predicted homeodomain binding site sequence, TAATGA, known to be essential in OR gene choice. Singular gene choice is maintained in these "MouSensors." In vivo synaptopHluorin imaging of odor-induced responses by known M71 ligands shows functional glomerular activation in an M71 MouSensor. Moreover, a behavioral avoidance task demonstrates that specific odor detection thresholds are significantly decreased in multiple transgenic lines, expressing mouse or human ORs. We have developed a versatile platform to study gene choice and axon identity, to create biosensors with great translational potential, and to finally decode human olfaction. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Evaluation Codes from an Affine Veriety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  5. Practicing the Code of Ethics, finding the image of God.

    Science.gov (United States)

    Hoglund, Barbara A

    2013-01-01

    The Code of Ethics for Nurses gives a professional obligation to practice in a compassionate and respectful way that is unaffected by the attributes of the patient. This article explores the concept "made in the image of God" and the complexities inherent in caring for those perceived as exhibiting distorted images of God. While the Code provides a professional standard consistent with a biblical worldview, human nature impacts the ability to consistently act congruently with the Code. Strategies and nursing interventions that support development of practice from a biblical worldview and the Code of Ethics for Nurses are presented.

  6. Interpretative commenting.

    Science.gov (United States)

    Vasikaran, Samuel

    2008-08-01

    * Clinical laboratories should be able to offer interpretation of the results they produce.
    * At a minimum, contact details for interpretative advice should be available on laboratory reports.
    * Interpretative comments may be verbal or written and printed.
    * Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferred to an inappropriate or dangerous comment.
    * Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available.
    * Standard tied comments ("canned" comments) can have some limited use.
    * Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available.
    * Interpretative commenting should only be provided by appropriately trained and credentialed personnel.
    * Audit of comments and continued professional development of personnel providing them are important for quality assurance.

  7. Evaluating the benefits of commercial building energy codes and improving federal incentives for code adoption.

    Science.gov (United States)

    Gilbraith, Nathaniel; Azevedo, Inês L; Jaramillo, Paulina

    2014-12-16

    The federal government has the goal of decreasing commercial building energy consumption and pollutant emissions by incentivizing the adoption of commercial building energy codes. Quantitative estimates of code benefits at the state level that can inform the size and allocation of these incentives are not available. We estimate the state-level climate, environmental, and health benefits (i.e., social benefits) and reductions in energy bills (private benefits) of a more stringent code (ASHRAE 90.1-2010) relative to a baseline code (ASHRAE 90.1-2007). We find that reductions in site energy use intensity range from 93 MJ/m(2) of new construction per year (California) to 270 MJ/m(2) of new construction per year (North Dakota). Total annual benefits from more stringent codes total $506 million for all states, where $372 million are from reductions in energy bills, and $134 million are from social benefits. These total benefits range from $0.6 million in Wyoming to $49 million in Texas. Private benefits range from $0.38 per square meter in Washington State to $1.06 per square meter in New Hampshire. Social benefits range from $0.2 per square meter annually in California to $2.5 per square meter in Ohio. Reductions in human/environmental damages and future climate damages account for nearly equal shares of social benefits.

  8. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  9. What Do Letter Migration Errors Reveal About Letter Position Coding in Visual Word Recognition?

    Science.gov (United States)

    Davis, Colin J.; Bowers, Jeffrey S.

    2004-01-01

    Dividing attention across multiple words occasionally results in misidentifications whereby letters apparently migrate between words. Previous studies have found that letter migrations preserve within-word letter position, which has been interpreted as support for position-specific letter coding. To investigate this issue, the authors used word…

  10. Phenomenological optical potentials and optical model computer codes

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    An introduction to the Optical Model is presented. Starting with the purpose and nature of the physical problems to be analyzed, a general formulation and the various phenomenological methods of solution are discussed. This includes the calculation of observables based on assumed potentials, both local and non-local, and their forms, e.g., Woods-Saxon, folded model, etc. Also discussed are the various calculational methods and model codes employed to describe nuclear reactions in the spherical and deformed regions (e.g., coupled-channel analysis). An examination of the numerical solutions and minimization techniques associated with the various codes is briefly touched upon. Several computer programs are described for carrying out the calculations. The preparation of input (formats and options), the determination of model parameters and the analysis of output are described. The class is given a series of problems to carry out using the available computer. Interpretation and evaluation of the samples include the effect of varying parameters, and comparison of calculations with the experimental data. Also included is an intercomparison of the results from the various model codes, along with their advantages and limitations. (author)
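
    For concreteness, the Woods-Saxon form mentioned above can be evaluated in a few lines. The parameter values used are typical textbook magnitudes, not fitted optical-model parameters.

        # The Woods-Saxon central potential for a nucleus of mass number A:
        # V(r) = -V0 / (1 + exp((r - R)/a)), with R = r0 * A**(1/3).
        import math

        def woods_saxon(r_fm, A, V0_mev=50.0, a_fm=0.65, r0_fm=1.25):
            R = r0_fm * A ** (1.0 / 3.0)
            return -V0_mev / (1.0 + math.exp((r_fm - R) / a_fm))

        for r in (0.0, 2.0, 4.0, 6.0, 8.0):
            print(f"V({r:.0f} fm) = {woods_saxon(r, A=56):7.2f} MeV")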

  11. Description of codes and models to be used in risk assessment

    International Nuclear Information System (INIS)

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analysis is required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environment-related modeling at the Hanford Site and provide background for future model development. The document also identifies the modeling capabilities that are desirable for the Hanford Site and the codes that were evaluated. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, along with the rationale for the codes selected. 27 refs., 3 figs., 1 tab

  12. Human-specific protein isoforms produced by novel splice sites in the human genome after the human-chimpanzee divergence

    Directory of Open Access Journals (Sweden)

    Kim Dong Seon

    2012-11-01

    Background: Evolution of splice sites is a well-known phenomenon that results in transcript diversity during human evolution. Many novel splice sites are derived from repetitive elements and may not contribute to protein products. Here, we analyzed annotated human protein-coding exons and identified human-specific splice sites that arose after the human-chimpanzee divergence. Results: We analyzed multiple alignments of the annotated human protein-coding exons and their respective orthologous mammalian genome sequences to identify 85 novel splice sites (50 splice acceptors and 35 donors) in the human genome. The novel protein-coding exons, which are expressed either constitutively or alternatively, produce novel protein isoforms by insertion, deletion, or frameshift. We found three cases in which the human-specific isoform conferred novel molecular function in human cells: the human-specific IMUP protein isoform induces apoptosis of the trophoblast and is implicated in pre-eclampsia; the intronization of a part of a SMOX gene exon produces inactive spermine oxidase; the human-specific NUB1 isoform shows reduced interaction with ubiquitin-like proteins, possibly affecting ubiquitin pathways. Conclusions: Although the generation of novel protein isoforms does not equate to adaptive evolution, we propose that these cases are useful candidates for molecular functional studies to identify proteomic changes that might bring about novel phenotypes during human evolution.

  13. 21 CFR 19.6 - Code of ethics for government service.

    Science.gov (United States)

    2010-04-01

    21 CFR § 19.6: Code of ethics for government service. Food and Drugs; Food and Drug Administration, Department of Health and Human Services; General; Standards of Conduct and Conflicts of Interest; General Provisions.

  14. High-throughput interpretation of gene structure changes in human and nonhuman resequencing data, using ACE.

    Science.gov (United States)

    Majoros, William H; Campbell, Michael S; Holt, Carson; DeNardo, Erin K; Ware, Doreen; Allen, Andrew S; Yandell, Mark; Reddy, Timothy E

    2017-05-15

    The accurate interpretation of genetic variants is critical for characterizing genotype-phenotype associations. Because the effects of genetic variants can depend strongly on their local genomic context, accurate genome annotations are essential. Furthermore, as some variants have the potential to disrupt or alter gene structure, variant interpretation efforts stand to gain from the use of individualized annotations that account for differences in gene structure between individuals or strains. We describe a suite of software tools for identifying possible functional changes in gene structure that may result from sequence variants. ACE ('Assessing Changes to Exons') converts phased genotype calls to a collection of explicit haplotype sequences, maps transcript annotations onto them, detects gene-structure changes and their possible repercussions, and identifies several classes of possible loss of function. Novel transcripts predicted by ACE are commonly supported by spliced RNA-seq reads, and can be used to improve read alignment and transcript quantification when an individual-specific genome sequence is available. Using publicly available RNA-seq data, we show that ACE predictions confirm earlier results regarding the quantitative effects of nonsense-mediated decay, and we show that predicted loss-of-function events are highly concordant with patterns of intolerance to mutations across the human population. ACE can be readily applied to diverse species including animals and plants, making it a broadly useful tool for use in eukaryotic population-based resequencing projects, particularly for assessing the joint impact of all variants at a locus. ACE is written in open-source C++ and Perl and is available from geneprediction.org/ACE. Contact: myandell@genetics.utah.edu or tim.reddy@duke.edu. Supplementary information is available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
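
    A toy version of the first step ACE performs, per the abstract: converting phased genotype calls into explicit haplotype sequences. The variant representation used here (position, ref allele, and per-haplotype alt alleles) is a simplification invented for the example, not ACE's input format.

        # Build two haplotype sequences from a reference and phased SNV calls.
        def build_haplotypes(reference, phased_variants):
            """phased_variants: list of (pos, ref, (alt_hap1, alt_hap2)), 0-based SNVs."""
            haplotypes = [list(reference), list(reference)]
            for pos, ref, alts in phased_variants:
                assert reference[pos] == ref, "variant does not match the reference"
                for h, alt in enumerate(alts):
                    haplotypes[h][pos] = alt
            return ["".join(h) for h in haplotypes]

        ref = "ACGTACGT"
        variants = [(2, "G", ("G", "T")), (5, "C", ("A", "A"))]  # one het, one hom-alt
        print(build_haplotypes(ref, variants))  # ['ACGTAAGT', 'ACTTAAGT']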

  15. Catalog of Differentially Expressed Long Non-Coding RNA following Activation of Human and Mouse Innate Immune Response

    Directory of Open Access Journals (Sweden)

    Benoit T. Roux

    2017-08-01

    Despite increasing evidence to indicate that long non-coding RNAs (lncRNAs) are novel regulators of immunity, there has been no systematic attempt to identify and characterize the lncRNAs whose expression is changed following the induction of the innate immune response. To address this issue, we have employed next-generation sequencing data to determine the changes in the lncRNA profile in four human (monocytes, macrophages, epithelium, and chondrocytes) and four mouse cell types (RAW 264.7 macrophages, bone marrow-derived macrophages, peritoneal macrophages, and splenic dendritic cells) following exposure to the pro-inflammatory mediators lipopolysaccharides (LPS) or interleukin-1β. We show differential expression of 204 human and 210 mouse lncRNAs, with positional analysis demonstrating correlation with immune-related genes. These lncRNAs are predominantly cell-type specific, composed of large regions of repeat sequences, and show poor evolutionary conservation. Comparison of the human and mouse sequences showed less than 1% sequence conservation, although we identified multiple conserved motifs. Of the 204 human lncRNAs, 21 overlapped with syntenic mouse lncRNAs, of which five were differentially expressed in both species. Among these syntenic lncRNAs was IL7-AS (antisense), which was induced in multiple cell types and shown to regulate the production of the pro-inflammatory mediator interleukin-6 in both human and mouse cells. In summary, we have identified and characterized those lncRNAs that are differentially expressed following activation of the human and mouse innate immune responses and believe that these catalogs will provide the foundation for the future analysis of the role of lncRNAs in immune and inflammatory responses.

  16. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
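
    The paradigm described here (analysis by an invertible model, quantization in the analysis domain, then inversion back to sound) can be illustrated with a toy transform. A plain FFT stands in for the invertible auditory model, which in the paper is far more elaborate.

        # Toy analyze-quantize-invert pipeline; the FFT is a stand-in for the
        # paper's invertible auditory model, not the model itself.
        import numpy as np

        def code_and_decode(x, step=0.05):
            X = np.fft.rfft(x)                      # analysis (invertible transform)
            Xq = step * np.round(X / step)          # uniform quantization of coefficients
            return np.fft.irfft(Xq, n=len(x))       # inversion back to the signal domain

        t = np.linspace(0, 1, 512, endpoint=False)
        x = np.sin(2 * np.pi * 8 * t)
        y = code_and_decode(x)
        print(f"max reconstruction error: {np.max(np.abs(x - y)):.3f}")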

  17. Development of an Auto-Validation Program for MARS Code Assessments

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2006-01-01

    MARS (Multi-dimensional Analysis of Reactor Safety) code is a best-estimate thermal hydraulic system analysis code developed at KAERI. It is important for a thermal hydraulic computer code to be assessed against theoretical and experimental data to verify and validate the performance and the integrity of the structure, models and correlations of the code. Code assessment efforts for a complex thermal hydraulics code such as MARS can be tedious and time-consuming, and require a large amount of human intervention to transfer data and view the results in graphic form. Code developers produce many versions of a code during development, and each version needs to be verified for integrity. Thus, for MARS code developers, it is desirable to have an automatic way of carrying out the code assessment calculations. In the present work, an Auto-Validation program that carries out the code assessment efforts has been developed. The program uses a user-supplied configuration file (with a '.vv' extension), which contains commands to read the input file, to execute the user-selected MARS program, and to generate result graphs. The program can be useful if the same set of code assessments is repeated with different versions of the code. The program is written in the Delphi programming language. The program runs under the Microsoft Windows environment
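
    A driver in the spirit described above would read a configuration file listing assessment cases, run the code on each input, and collect results. The sketch below is hypothetical: the actual Auto-Validation program is written in Delphi, and the '.vv' line format assumed here (one "input_file reference_file" pair per line) is invented for illustration.

        # Hypothetical auto-validation driver: read a '.vv'-style case list,
        # run the analysis code on each input, and report pass/fail status.
        import subprocess
        from pathlib import Path

        def run_assessments(config_path, code_executable):
            for line in Path(config_path).read_text().splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                input_file, reference = line.split()
                result = subprocess.run([code_executable, input_file],
                                        capture_output=True, text=True)
                status = "OK" if result.returncode == 0 else "FAILED"
                print(f"{input_file}: {status} (compare against {reference})")

        # run_assessments("cases.vv", "./mars")   # hypothetical paths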

  18. Decoding the non-coding RNAs in Alzheimer's disease.

    Science.gov (United States)

    Schonrock, Nicole; Götz, Jürgen

    2012-11-01

    Non-coding RNAs (ncRNAs) are integral components of biological networks with fundamental roles in regulating gene expression. They can integrate sequence information from the DNA code, epigenetic regulation and functions of multimeric protein complexes to potentially determine the epigenetic status and transcriptional network in any given cell. Humans potentially contain more ncRNAs than any other species, especially in the brain, where they may well play a significant role in human development and cognitive ability. This review discusses their emerging role in Alzheimer's disease (AD), a human pathological condition characterized by the progressive impairment of cognitive functions. We discuss the complexity of the ncRNA world and how this is reflected in the regulation of the amyloid precursor protein and Tau, two proteins with central functions in AD. By understanding this intricate regulatory network, there is hope for a better understanding of disease mechanisms and ultimately developing diagnostic and therapeutic tools.

  19. Systematically profiling and annotating long intergenic non-coding RNAs in human embryonic stem cell.

    Science.gov (United States)

    Tang, Xing; Hou, Mei; Ding, Yang; Li, Zhaohui; Ren, Lichen; Gao, Ge

    2013-01-01

    While more and more long intergenic non-coding RNAs (lincRNAs) have been identified as taking important roles both in maintaining pluripotency and in regulating differentiation, how these lincRNAs may define and drive cell fate decisions on a global scale is still mostly elusive. Systematically profiling and comprehensively annotating embryonic stem cell lincRNAs may not only bring a clearer big picture of these novel regulators but also shed light on their functionalities. Based on multiple RNA-Seq datasets, we systematically identified 300 human embryonic stem cell lincRNAs (hES lincRNAs). Of these, one fourth (78 out of 300) were further identified as preferentially expressed in human ES cells. Functional analysis showed that they were preferentially involved in several early-development-related biological processes. Comparative genomics analysis further suggested that around half of the identified hES lincRNAs are conserved in mouse. To facilitate further investigation of these hES lincRNAs, we constructed an online portal for biologists to access all their sequences and annotations interactively. In addition to navigation through a genome browser interface, users can locate lincRNAs through an advanced query interface based on both keywords and expression profiles, and analyze results through multiple tools. By integrating multiple RNA-Seq datasets, we systematically characterized and annotated 300 hES lincRNAs. A fully functional web portal is available freely at http://scbrowse.cbi.pku.edu.cn. As the first global profiling and annotation of human embryonic stem cell lincRNAs, this work aims to provide a valuable resource for both experimental biologists and bioinformaticians.

  20. Practical moral codes in the transgenic organism debate.

    Science.gov (United States)

    Cooley, D R; Goreham, Gary; Youngs, George A

    2004-01-01

    In one study funded by the United States Department of Agriculture, people from North Dakota were interviewed to discover which moral principles they use in evaluating the morality of transgenic organisms and their introduction into markets. It was found that although the moral codes the human subjects employed were very similar, their views on transgenics were vastly different. In this paper, the codes that were used by the respondents are developed, compared to that of the academically composed Belmont Report, and then modified to create the more practical Common Moral Code. At the end, it is shown that the Common Moral Code has inherent inconsistency flaws that might be resolvable, but would require extensive work on the definition of terms and principles. However, the effort is worthwhile, especially if it results in a common moral code that all those involved in the debate are willing to use in negotiating a resolution to their differences.

  1. PP: A graphics post-processor for the EQ6 reaction path code

    International Nuclear Information System (INIS)

    Stockman, H.W.

    1994-09-01

    The PP code is a graphics post-processor and plotting program for EQ6, a popular reaction-path code. PP runs on personal computers, allocates memory dynamically, and can handle very large reaction path runs. Plots of simple variable groups, such as fluid and solid phase composition, can be obtained with as few as two keystrokes. Navigation through the list of reaction path variables is simple and efficient. Graphics files can be exported for inclusion in word processing documents and spreadsheets, and experimental data may be imported and superposed on the reaction path runs. The EQ6 thermodynamic database can be searched from within PP, to simplify interpretation of complex plots

  2. Interpreting Impoliteness: Interpreters’ Voices

    Directory of Open Access Journals (Sweden)

    Tatjana Radanović Felberg

    2017-11-01

    Interpreters in the public sector in Norway interpret in a variety of institutional encounters, and the interpreters evaluate the majority of these encounters as polite. However, some encounters are evaluated as impolite, and they pose challenges when it comes to interpreting impoliteness. This issue raises the question of whether interpreters should take a stance on their own evaluation of impoliteness and whether they should interfere in communication. In order to find out more about how interpreters cope with this challenge, in 2014 a survey was sent to all interpreters registered in the Norwegian Register of Interpreters. The survey data were analyzed within the theoretical framework of impoliteness theory, using the notion of moral order as an explanatory tool in a close reading of interpreters' answers. The analysis shows that interpreters reported using a variety of strategies for interpreting impoliteness, including omissions and downtoning. However, the interpreters also gave examples of individual strategies for coping with impoliteness, such as interrupting and postponing interpreting. These strategies border on behavioral strategies and conflict with the Norwegian ethical guidelines for interpreting. In light of the ethical guidelines and actual practice, mapping and discussing the different strategies used by interpreters might heighten interpreters' and interpreter-users' awareness of the role impoliteness can play in institutional interpreter-mediated encounters.

  3. Penultimate interpretation.

    Science.gov (United States)

    Neuman, Yair

    2010-10-01

    Interpretation is at the center of psychoanalytic activity. However, interpretation is always challenged by that which is beyond our grasp, the 'dark matter' of our mind, what Bion describes as 'O'. O is one of the most central and difficult concepts in Bion's thought. In this paper, I explain the enigmatic nature of O as a high-dimensional mental space and point to the price one should pay for substituting a low-dimensional symbolic representation for the pre-symbolic lexicon of the emotion-laden and high-dimensional unconscious. This price is reification--objectifying lived experience and draining it of vitality and complexity. In order to address the difficulty of approaching O through symbolization, I introduce the term 'Penultimate Interpretation'--a form of interpretation that seeks 'loopholes' through which the analyst and the analysand may reciprocally save themselves from the curse of reification. Three guidelines for 'Penultimate Interpretation' are proposed and illustrated through an imaginary dialogue. Copyright © 2010 Institute of Psychoanalysis.

  4. Code of conduct for scientists (abstract)

    International Nuclear Information System (INIS)

    Khurshid, S.J.

    2011-01-01

    The emergence of advanced technologies in the last three decades and extraordinary progress in our knowledge of the basic physical, chemical and biological properties of living matter have offered tremendous benefits to human beings, but have simultaneously highlighted the need for greater awareness and responsibility on the part of the scientists of the 21st century. A scientist is not born with ethics, nor is science ethically neutral, but there are ethical dimensions to scientific work. There is a need to evolve an appropriate Code of Conduct for scientists working in every field of science. However, while considering the contents, promulgation and adaptation of Codes of Conduct for Scientists, a balance needs to be maintained between the freedom of scientists and, at the same time, some binding on them in the form of Codes of Conduct. The use of good and safe laboratory procedures, whether codified by law or by common practice, must also be considered part of the moral duties of scientists. It is internationally agreed that a general Code of Conduct cannot be formulated for all scientists universally, but there should be a set of 'building blocks' aimed at establishing the Code of Conduct for Scientists, either as individual researchers or as those responsible for the direction, evaluation and monitoring of scientific activities at the institutional or organizational level. (author)

  5. Digital color acquisition, perception, coding and rendering

    CERN Document Server

    Fernandez-Maloigne, Christine; Macaire, Ludovic

    2013-01-01

    In this book the authors identify the basic concepts and recent advances in the acquisition, perception, coding and rendering of color. The fundamental aspects related to the science of colorimetry in relation to physiology (the human visual system) are addressed, as are constancy and color appearance. It also addresses the more technical aspects related to sensors and the color management screen. Particular attention is paid to the notion of color rendering in computer graphics. Beyond color, the authors also look at coding, compression, protection and quality of color images and videos.

  6. Reactivity worth measurements on the CALIBAN reactor: interpretation of integral experiments for the nuclear data validation

    International Nuclear Information System (INIS)

    Richard, B.

    2012-01-01

    Good knowledge of nuclear data, the input parameters for neutron transport calculation codes, is necessary to support the advances of the nuclear industry. The purpose of this work is to bring pertinent information to the nuclear data integral validation process. Reactivity worth measurements have been performed on the Caliban reactor; they concern four materials of interest for the nuclear industry: gold, lutetium, plutonium and uranium 238. Experiments which have been conducted in order to improve the characterization of the core are also described and discussed; the latter are necessary for the correct interpretation of the reactivity worth measurements. The experimental procedures are described with their associated uncertainties, and the measurements are then compared to numerical results. The methods used in the numerical calculations are reported, especially the multigroup cross section generation for deterministic codes. The modeling of the experiments is presented along with the associated uncertainties. This comparison led to an interpretation concerning the qualification of nuclear data libraries. Discrepancies are reported, discussed, and justify the need for such experiments. (author) [fr]

  7. Coding for effective denial management.

    Science.gov (United States)

    Miller, Jackie; Lineberry, Joe

    2004-01-01

    ABNs and the compliance risks associated with improper use. Finally, training programs should include routine audits to monitor coders for competence and precision. Constantly changing codes and guidelines mean that a coder's skills can quickly become obsolete if not reinforced by ongoing training and monitoring. Comprehensive reporting and routine analysis of claim denials is without a doubt one of the greatest assets to a practice that is suffering from excessive claim denials, and should be considered an investment capable of providing both short- and long-term ROIs. Some radiologists may lack the funding or human resources needed to implement truly effective coding programs for their staff members. In these circumstances, radiology business managers should consider outsourcing their coding.

  8. Lnc2Meth: a manually curated database of regulatory relationships between long non-coding RNAs and DNA methylation associated with human disease.

    Science.gov (United States)

    Zhi, Hui; Li, Xin; Wang, Peng; Gao, Yue; Gao, Baoqing; Zhou, Dianshuang; Zhang, Yan; Guo, Maoni; Yue, Ming; Shen, Weitao; Ning, Shangwei; Jin, Lianhong; Li, Xia

    2018-01-04

    Lnc2Meth (http://www.bio-bigdata.com/Lnc2Meth/), an interactive resource to identify regulatory relationships between human long non-coding RNAs (lncRNAs) and DNA methylation, is not only a manually curated collection and annotation of experimentally supported lncRNA-DNA methylation associations but also a platform that effectively integrates tools for calculating and identifying the differentially methylated lncRNAs and protein-coding genes (PCGs) in diverse human diseases. The resource provides: (i) advanced search possibilities, e.g. retrieval of the database by searching the lncRNA symbol of interest, DNA methylation patterns, regulatory mechanisms and disease types; (ii) abundant computationally calculated DNA methylation array profiles for the lncRNAs and PCGs; (iii) the prognostic values for each hit transcript calculated from the patients' clinical data; (iv) a genome browser to display the DNA methylation landscape of the lncRNA transcripts for a specific type of disease; (v) tools to re-annotate probes to lncRNA loci and identify the differential methylation patterns for lncRNAs and PCGs with user-supplied external datasets; (vi) an R package (LncDM) to complete the identification and visualization of differentially methylated lncRNAs on local computers. Lnc2Meth provides a timely and valuable resource that can be applied to significantly expand our understanding of the regulatory relationships between lncRNAs and DNA methylation in various human diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time...... Bødker's in 1996, three possible areas of expansion to Susanne Bødker's method for analyzing video data were found. Firstly, a technological expansion, due to contemporary developments in sophisticated analysis software since the mid-1990s. Secondly, a conceptual expansion, where the applicability...... of using Activity Theory outside of the context of human-computer interaction is assessed. Lastly, a temporal expansion, by facilitating an organized method for tracking the development of activities over time within the coding and analysis of video data. To expand on the above areas, a prototype coding...

  10. Measurement of Androgen and Estrogen Concentrations in Cord Blood: Accuracy, Biological Interpretation and Applications to Understanding Human Behavioural Development

    Directory of Open Access Journals (Sweden)

    Lauren P Hollier

    2014-05-01

    Accurately measuring hormone exposure during prenatal life presents a methodological challenge and there is currently no ‘gold standard’ approach. Ideally, circulating fetal hormone levels would be measured at repeated time points during pregnancy. However, it is not currently possible to obtain fetal blood samples without significant risk to the fetus, and therefore surrogate markers of fetal hormone levels must be utilized. Umbilical cord blood can be readily obtained at birth and largely reflects fetal circulation in late gestation. This review examines the accuracy and biological interpretation of the measurement of androgens and estrogens in cord blood. The use of cord blood hormones to understand and investigate human development is then discussed.

  11. A Human Long Non-Coding RNA ALT1 Controls the Cell Cycle of Vascular Endothelial Cells Via ACE2 and Cyclin D1 Pathway

    Directory of Open Access Journals (Sweden)

    Wen Li

    2017-10-01

    Background/Aims: ALT1 is a novel long non-coding RNA derived from the alternatively spliced transcript of the deleted in lymphocytic leukemia 2 (DLEU2) gene. To date, the biological roles of ALT1 in human vascular endothelial cells have not been reported. Methods: ALT1 was knocked down by siRNAs. Cell proliferation was analyzed by CCK-8 assay. The existence and sequence of human ALT1 were identified by 3’ rapid amplification of cDNA ends. The interaction between the lncRNA and proteins was analyzed by RNA-protein pull-down assay, RNA immunoprecipitation, and mass spectrometry analysis. Results: ALT1 was expressed in human umbilical vein endothelial cells (HUVECs). The expression of ALT1 was significantly downregulated in contact-inhibited HUVECs and in hypoxia-induced, growth-arrested HUVECs. Knockdown of ALT1 inhibited the proliferation of HUVECs by G0/G1 cell cycle arrest. We observed that angiotensin-converting enzyme 2 (ACE2) is a direct target gene of ALT1. Knockdown of ALT1 or its target gene ACE2 efficiently decreased the expression of cyclin D1 via enhanced ubiquitination and degradation, in which HIF-1α and the von Hippel-Lindau protein (pVHL) might be involved. Conclusion: The results suggest that the human long non-coding RNA ALT1 is a novel regulator of the HUVEC cell cycle via the ACE2 and cyclin D1 pathway.

  12. An interactive web application for the dissemination of human systems immunology data.

    Science.gov (United States)

    Speake, Cate; Presnell, Scott; Domico, Kelly; Zeitner, Brad; Bjork, Anna; Anderson, David; Mason, Michael J; Whalen, Elizabeth; Vargas, Olivia; Popov, Dimitry; Rinchai, Darawan; Jourde-Chiche, Noemie; Chiche, Laurent; Quinn, Charlie; Chaussabel, Damien

    2015-06-19

    Systems immunology approaches have proven invaluable in translational research settings. The current rate at which large-scale datasets are generated presents unique challenges and opportunities. Mining aggregates of these datasets could accelerate the pace of discovery, but new solutions are needed to integrate the heterogeneous data types with the contextual information that is necessary for interpretation. In addition, enabling tools and technologies facilitating investigators' interaction with large-scale datasets must be developed in order to promote insight and foster knowledge discovery. State of the art application programming was employed to develop an interactive web application for browsing and visualizing large and complex datasets. A collection of human immune transcriptome datasets were loaded alongside contextual information about the samples. We provide a resource enabling interactive query and navigation of transcriptome datasets relevant to human immunology research. Detailed information about studies and samples are displayed dynamically; if desired the associated data can be downloaded. Custom interactive visualizations of the data can be shared via email or social media. This application can be used to browse context-rich systems-scale data within and across systems immunology studies. This resource is publicly available online at [Gene Expression Browser Landing Page ( https://gxb.benaroyaresearch.org/dm3/landing.gsp )]. The source code is also available openly [Gene Expression Browser Source Code ( https://github.com/BenaroyaResearch/gxbrowser )]. We have developed a data browsing and visualization application capable of navigating increasingly large and complex datasets generated in the context of immunological studies. This intuitive tool ensures that, whether taken individually or as a whole, such datasets generated at great effort and expense remain interpretable and a ready source of insight for years to come.

  13. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...
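
    The feedback-driven rate adaptation described above can be illustrated with a toy syndrome-based scheme. The sketch below (Python) substitutes a random binary linear code for the BCH codes of the paper, a named simplification rather than the authors' construction: the encoder reveals syndrome bits of X one at a time, and the decoder, which knows the correlated side information Y, requests more bits over the feedback channel until exactly one minimum-distance candidate satisfies all revealed parity constraints.

      import random

      n = 10                                    # block length (toy size)
      random.seed(1)
      x = [random.randint(0, 1) for _ in range(n)]           # source X
      y = [b ^ (random.random() < 0.1) for b in x]           # side info Y (BSC)
      H = [[random.randint(0, 1) for _ in range(n)]
           for _ in range(2 * n)]               # pool of random parity rows

      def syndrome(rows, v):
          return [sum(r[i] * v[i] for i in range(n)) % 2 for r in rows]

      def decode(num_rows):
          """Minimum-distance candidates consistent with the revealed syndrome."""
          target = syndrome(H[:num_rows], x)
          best, best_d = [], n + 1
          for m in range(2 ** n):               # exhaustive search: toy only
              c = [(m >> i) & 1 for i in range(n)]
              if syndrome(H[:num_rows], c) != target:
                  continue
              d = sum(a != b for a, b in zip(c, y))
              if d < best_d:
                  best, best_d = [c], d
              elif d == best_d:
                  best.append(c)
          return best

      for rate in range(1, 2 * n + 1):          # feedback: reveal one more bit
          cands = decode(rate)
          if len(cands) == 1:
              break
      print("syndrome bits used:", rate, "decoded correctly:", cands[0] == x)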

  14. Examination of the Accuracy of Coding Hospital-Acquired...

    Data.gov (United States)

    U.S. Department of Health & Human Services — A new study, Examination of the Accuracy of Coding Hospital-Acquired Pressure Ulcer Stages, published in Volume 4, Issue 1 of the Medicare and Medicaid Research...

  15. [The point-digital interpretation and the choice of the dermatoglyphic patterns on human fingers for diagnostics of consanguineous relationship].

    Science.gov (United States)

    Zvyagin, V N; Rakitin, V A; Fomina, E E

    The objective of the present study was the development of a point-digital model for the scaleless interpretation of the dermatoglyphic papillary patterns on human fingers that allows the main characteristics of the traits to be comprehensively described in digital terms and the frequency of their inheritance to be assessed quantitatively. A specially developed computer program, D.glyphic.7-14, was used to mark the dermatoglyphic patterns on the fingerprints obtained from 30 familial triplets (father + mother + child). The values of all the studied traits for kinship diagnostics were found by calculating the ratios of the sums of differences between the traits in the parent-parent pairs to those in the respective parent-child pairs. Algorithms for the point marking of the traits and for reading out the digital information about them have been developed. The traditional dermatoglyphic traits were selected, and novel ones were applied, for use within the point-digital model for the diagnostics of consanguineous relationship. The present experimental study has demonstrated the high level of inheritance of the selected traits and the possibility of developing algorithms and computation techniques for the calculation of consanguineous relationship coefficients based on these traits.

  16. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal C³ self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph G(X) associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes (20 trinucleotides). For codes of small size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
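
    The graph construction used above can be made concrete. In the sketch below (Python, with the code X abbreviated to a few of its 20 trinucleotides for brevity), each trinucleotide b1b2b3 contributes the edges b1 -> b2b3 and b1b2 -> b3, following Fimmel et al.; in that framework a trinucleotide code is circular exactly when this graph is acyclic, and self-complementarity is closure under reverse complement.

      COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

      def revcomp(t):
          return "".join(COMP[b] for b in reversed(t))

      def is_self_complementary(code):
          return {revcomp(t) for t in code} == set(code)

      def graph(code):
          """Directed graph G(X): b1b2b3 gives b1 -> b2b3 and b1b2 -> b3."""
          edges = {}
          for t in code:
              edges.setdefault(t[0], set()).add(t[1:])
              edges.setdefault(t[:2], set()).add(t[2])
          return edges

      def has_cycle(edges):
          WHITE, GRAY, BLACK = 0, 1, 2
          color = {}
          def dfs(v):
              color[v] = GRAY
              for w in edges.get(v, ()):
                  if color.get(w, WHITE) == GRAY:
                      return True
                  if color.get(w, WHITE) == WHITE and dfs(w):
                      return True
              color[v] = BLACK
              return False
          return any(color.get(v, WHITE) == WHITE and dfs(v) for v in list(edges))

      # A self-complementary sample: each trinucleotide paired with its
      # reverse complement (a subset of X, for illustration only).
      sample = {"AAC", "GTT", "AAT", "ATT", "GAG", "CTC"}
      print("self-complementary:", is_self_complementary(sample))
      print("circular (acyclic graph):", not has_cycle(graph(sample)))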

  17. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed in simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans with high data rates.
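
    The ideal in-phase cross-correlation property that motivates such constructions is easy to verify numerically for any candidate code matrix. The sketch below (Python/NumPy) uses a small hypothetical 0/1 spreading matrix with weight W = 3, chosen so that every pair of distinct rows overlaps in exactly one chip; it is not one of the actual DEU matrices from the paper, only a check of the property itself.

      import numpy as np

      # Hypothetical spreading matrix: one row per user, one column per chip.
      C = np.array([[1, 1, 1, 0, 0, 0, 0],
                    [1, 0, 0, 1, 1, 0, 0],
                    [0, 1, 0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1, 1, 0]])

      weights = C.sum(axis=1)               # code weight W per user
      cc = C @ C.T                          # in-phase cross-correlation matrix
      off_diag = cc[~np.eye(len(C), dtype=bool)]

      print("weights:", weights)            # here W = 3 for every user
      print("cross-correlation between distinct users: min",
            off_diag.min(), "max", off_diag.max())   # ideal: both equal 1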

  18. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list...... decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units....
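
    The matrix-product construction itself is compact enough to state in code. The sketch below (Python/NumPy, over GF(2)) forms a codeword [c1, ..., cs]·A from codewords ci of the constituent codes and an s × m matrix A; the particular codewords and matrix are small hypothetical examples, not the nested codes analyzed in the paper.

      import numpy as np

      def matrix_product_codeword(cs, A):
          """Encode: the i-th block of the output is sum_j A[j, i] * cs[j] (mod 2),
          so a codeword is the concatenation of m blocks of length n."""
          cs = np.array(cs)              # s x n stack of constituent codewords
          blocks = (A.T @ cs) % 2        # m x n
          return blocks.reshape(-1)

      # Toy ingredients: c1 from a length-4 even-weight code, c2 a repetition
      # word, and a 2 x 2 non-singular-by-columns matrix A (illustrative).
      c1 = [1, 0, 1, 0]
      c2 = [1, 1, 1, 1]
      A = np.array([[1, 1],
                    [0, 1]])

      print(matrix_product_codeword([c1, c2], A))
      # -> [1 0 1 0 0 1 0 1]: block 1 = c1, block 2 = c1 + c2 (mod 2)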

  19. Hans-Georg Gadamer’s philosophical hermeneutics: Concepts of reading, understanding and interpretation

    OpenAIRE

    Paul Regan

    2012-01-01

    Hans-Georg Gadamer’s philosophical hermeneutics is a popular qualitative research interpretive method aiming to explore the meaning of individual experiences in relation to understanding human interpretation. Gadamer identifies that authentic engagement with reading requires awareness of the inter-subjective nature of understanding in order to promote a reflective engagement with the text. The main concepts of Gadamer’s view of reading and understanding are explored in this paper in relation ...

  20. A methodology for extracting the electrical properties of human skin

    International Nuclear Information System (INIS)

    Birgersson, Ulrik; Nicander, Ingrid; Ollmar, Stig; Birgersson, Erik

    2013-01-01

    A methodology to determine the dielectric properties of human skin is presented and analyzed. In short, it is based on a mathematical model that considers the local transport of charge in the various layers of the skin, which is coupled with impedance measurements of both stripped and intact skin, an automated code generator, and an optimization algorithm. New resistivity and permittivity values for the stratum corneum soaked with physiological saline solution for 1 min and the viable skin beneath are obtained and expressed as easily accessible functions. The methodology can be extended to account for different electrode designs as well as more physical phenomena that are relevant to electrical impedance measurements of skin and their interpretation. (paper)
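
    As a rough illustration of the parameter-extraction step, the sketch below (Python/SciPy) fits a single-dispersion Cole-type impedance model to synthetic measurements by nonlinear least squares. The Cole model and all parameter values are stand-ins chosen for illustration only; the paper's own model resolves charge transport in the individual skin layers and is coupled to an automated code generator, neither of which is reproduced here.

      import numpy as np
      from scipy.optimize import least_squares

      def cole(params, w):
          """Cole impedance: Z = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
          r_inf, r0, tau, alpha = params
          return r_inf + (r0 - r_inf) / (1.0 + (1j * w * tau) ** alpha)

      def residuals(params, w, z_meas):
          z = cole(params, w)
          return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

      # Synthetic "measured" spectrum from known ground-truth parameters.
      w = 2 * np.pi * np.logspace(0, 6, 50)            # 1 Hz .. 1 MHz
      truth = (120.0, 5000.0, 1e-4, 0.8)               # Rinf, R0, tau, alpha
      noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(50)
      z_meas = cole(truth, w) * noise

      fit = least_squares(residuals, x0=(50.0, 1000.0, 1e-3, 0.7),
                          bounds=([0, 0, 1e-7, 0.3], [1e4, 1e6, 1.0, 1.0]),
                          args=(w, z_meas))
      print("estimated (Rinf, R0, tau, alpha):", fit.x)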

  1. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows the "unique decipherability" to be recovered at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
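
    Since coding partitions generalize unique decipherability, a natural companion is the classical Sardinas-Patterson test, which decides whether a finite code is UD in the first place. A minimal Python version is sketched below (not part of the paper's algorithm for canonical partitions); it iterates the sets of dangling suffixes and reports failure as soon as one of them contains a codeword.

      def dangling(u_set, v_set):
          """Suffixes w such that u + w = v for some u in u_set, v in v_set."""
          return {v[len(u):] for u in u_set for v in v_set
                  if v.startswith(u) and len(v) > len(u)}

      def is_uniquely_decipherable(code):
          code = set(code)
          s = dangling(code, code)       # S1: from pairs of distinct codewords
          seen = set()
          while s:
              if s & code:               # a dangling suffix is a codeword: not UD
                  return False
              frozen = frozenset(s)
              if frozen in seen:         # suffix sets repeat eventually: UD
                  return True
              seen.add(frozen)
              s = dangling(code, s) | dangling(s, code)
          return True

      print(is_uniquely_decipherable({"0", "10", "110"}))  # prefix code -> True
      print(is_uniquely_decipherable({"0", "01", "10"}))   # "010" ambiguous -> False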

  2. Short sequence motifs, overrepresented in mammalian conservednon-coding sequences

    Energy Technology Data Exchange (ETDEWEB)

    Minovitsky, Simon; Stegmaier, Philip; Kel, Alexander; Kondrashov,Alexey S.; Dubchak, Inna

    2007-02-21

    Background: A substantial fraction of non-coding DNA sequences of multicellular eukaryotes is under selective constraint. In particular, ~5 percent of the human genome consists of conserved non-coding sequences (CNSs). CNSs differ from other genomic sequences in their nucleotide composition and must play important functional roles, which mostly remain obscure. Results: We investigated relative abundances of short sequence motifs in all human CNSs present in the human/mouse whole-genome alignments vs. three background sets of sequences: (i) weakly conserved or unconserved non-coding sequences (non-CNSs); (ii) near-promoter sequences (located between nucleotides -500 and -1500, relative to a start of transcription); and (iii) random sequences with the same nucleotide composition as that of CNSs. When compared to non-CNSs and near-promoter sequences, CNSs possess an excess of AT-rich motifs, often containing runs of identical nucleotides. In contrast, when compared to random sequences, CNSs contain an excess of GC-rich motifs which, however, lack CpG dinucleotides. Thus, abundance of short sequence motifs in human CNSs, taken as a whole, is mostly determined by their overall compositional properties and not by overrepresentation of any specific short motifs. These properties are: (i) high AT-content of CNSs, (ii) a tendency, probably due to context-dependent mutation, of A's and T's to clump, (iii) presence of short GC-rich regions, and (iv) avoidance of CpG contexts, due to their hypermutability. Only a small number of short motifs, overrepresented in all human CNSs, are similar to binding sites of transcription factors from the FOX family. Conclusion: Human CNSs as a whole appear to be too broad a class of sequences to possess strong footprints of any short sequence-specific functions. Such footprints should be studied at the level of functional subclasses of CNSs, such as those which flank genes with a particular pattern of expression. Overall properties of CNSs are affected by
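
    The core computation behind such a motif survey, the relative abundance of every short motif in a target set versus a background set, fits in a few lines. The sketch below (plain Python, with tiny made-up sequences standing in for CNSs and background) counts all k-mers in each set and reports the most over-represented ones by frequency ratio.

      from collections import Counter

      def kmer_freqs(seqs, k):
          counts = Counter(s[i:i + k] for s in seqs
                           for i in range(len(s) - k + 1))
          total = sum(counts.values())
          return {m: c / total for m, c in counts.items()}

      def overrepresented(target, background, k, pseudo=1e-6):
          ft, fb = kmer_freqs(target, k), kmer_freqs(background, k)
          ratios = {m: ft[m] / (fb.get(m, 0) + pseudo) for m in ft}
          return sorted(ratios.items(), key=lambda kv: -kv[1])

      # Made-up stand-ins: AT-rich "CNS-like" sequences vs. a mixed background.
      cns_like = ["ATTTAAATTTCGATTTAAA", "TTTAAAATATTTTAAATA"]
      background = ["GCGCGATCGCGCATGCGC", "ACGTACGTACGTACGTAC"]

      for motif, ratio in overrepresented(cns_like, background, k=4)[:5]:
          print(motif, round(ratio, 1))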

  3. 21 CFR 201.2 - Drugs and devices; National Drug Code numbers.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Drugs and devices; National Drug Code numbers. 201.2 Section 201.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL LABELING General Labeling Provisions § 201.2 Drugs and devices; National Drug Code...

  4. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
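
    The error-correction notion used in this comparison is plain nearest-codeword decoding under Hamming distance. The sketch below (Python) builds a toy receptive-field-style binary code whose codewords are contiguous runs of ones, a simplified stand-in for the RF codes of the paper, flips a few bits, and decodes by choosing the closest codeword.

      import random

      def rf_like_code(n, width):
          """Codewords: indicator vectors of contiguous windows (1D 'place fields')."""
          return [tuple(1 if start <= i < start + width else 0 for i in range(n))
                  for start in range(n - width + 1)]

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def nearest_codeword(word, code):
          return min(code, key=lambda c: hamming(c, word))

      random.seed(0)
      code = rf_like_code(n=12, width=4)
      sent = random.choice(code)
      noisy = tuple(b ^ (random.random() < 0.1) for b in sent)  # flip ~10% of bits

      decoded = nearest_codeword(noisy, code)
      print("errors introduced:", hamming(sent, noisy))
      print("decoded correctly:", decoded == sent)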

  5. Rare and low-frequency coding variants alter human adult height

    NARCIS (Netherlands)

    Marouli, Eirini; Graff, Mariaelisa; Medina-Gomez, Carolina; Lo, Ken Sin; Wood, Andrew R.; Kjaer, Troels R.; Fine, Rebecca S.; Lu, Yingchang; Schurmann, Claudia; Highland, Heather M.; Rüeger, Sina; Thorleifsson, Gudmar; Justice, Anne E.; Lamparter, David; Stirrups, Kathleen E.; Turcot, Valérie; Young, Kristin L.; Winkler, Thomas W.; Esko, Tõnu; Karaderi, Tugce; Locke, Adam E.; Masca, Nicholas G. D.; Ng, Maggie C. Y.; Mudgal, Poorva; Rivas, Manuel A.; Vedantam, Sailaja; Mahajan, Anubha; Guo, Xiuqing; Abecasis, Goncalo; Aben, Katja K.; Adair, Linda S.; Alam, Dewan S.; Albrecht, Eva; Allin, Kristine H.; Allison, Matthew; Amouyel, Philippe; Appel, Emil V.; Arveiler, Dominique; Asselbergs, Folkert W.; Auer, Paul L.; Balkau, Beverley; Banas, Bernhard; Bang, Lia E.; Benn, Marianne; Bergmann, Sven; Bielak, Lawrence F.; Blüher, Matthias; Boeing, Heiner; Boerwinkle, Eric; Böger, Carsten A.; Bonnycastle, Lori L.; Bork-Jensen, Jette; Bots, Michiel L.; Bottinger, Erwin P.; Bowden, Donald W.; Brandslund, Ivan; Breen, Gerome; Brilliant, Murray H.; Broer, Linda; Burt, Amber A.; Butterworth, Adam S.; Carey, David J.; Caulfield, Mark J.; Chambers, John C.; Chasman, Daniel I.; Chen, Yii-Der Ida; Chowdhury, Rajiv; Christensen, Cramer; Chu, Audrey Y.; Cocca, Massimiliano; Collins, Francis S.; Cook, James P.; Corley, Janie; Galbany, Jordi Corominas; Cox, Amanda J.; Cuellar-Partida, Gabriel; Danesh, John; Davies, Gail; de Bakker, Paul I. W.; de Borst, Gert J.; de Denus, Simon; de Groot, Mark C. H.; de Mutsert, Renée; Deary, Ian J.; Dedoussis, George; Demerath, Ellen W.; den Hollander, Anneke I.; Dennis, Joe G.; Di Angelantonio, Emanuele; Drenos, Fotios; Du, Mengmeng; Dunning, Alison M.; Easton, Douglas F.; Ebeling, Tapani; Edwards, Todd L.; Ellinor, Patrick T.; Elliott, Paul; Evangelou, Evangelos; Farmaki, Aliki-Eleni; Faul, Jessica D.; Feitosa, Mary F.; Feng, Shuang; Ferrannini, Ele; Ferrario, Marco M.; Ferrieres, Jean; Florez, Jose C.; Ford, Ian; Fornage, Myriam; Franks, Paul W.; Frikke-Schmidt, Ruth; Galesloot, Tessel E.; Gan, Wei; Gandin, Ilaria; Gasparini, Paolo; Giedraitis, Vilmantas; Giri, Ayush; Girotto, Giorgia; Gordon, Scott D.; Gordon-Larsen, Penny; Gorski, Mathias; Grarup, Niels; Grove, Megan L.; Gudnason, Vilmundur; Gustafsson, Stefan; Hansen, Torben; Harris, Kathleen Mullan; Harris, Tamara B.; Hattersley, Andrew T.; Hayward, Caroline; He, Liang; Heid, Iris M.; Heikkilä, Kauko; Helgeland, Øyvind; Hernesniemi, Jussi; Hewitt, Alex W.; Hocking, Lynne J.; Hollensted, Mette; Holmen, Oddgeir L.; Hovingh, G. Kees; Howson, Joanna M. M.; Hoyng, Carel B.; Huang, Paul L.; Hveem, Kristian; Ikram, M. Arfan; Ingelsson, Erik; Jackson, Anne U.; Jansson, Jan-Håkan; Jarvik, Gail P.; Jensen, Gorm B.; Jhun, Min A.; Jia, Yucheng; Jiang, Xuejuan; Johansson, Stefan; Jørgensen, Marit E.; Jørgensen, Torben; Jousilahti, Pekka; Jukema, J. Wouter; Kahali, Bratati; Kahn, René S.; Kähönen, Mika; Kamstrup, Pia R.; Kanoni, Stavroula; Kaprio, Jaakko; Karaleftheri, Maria; Kardia, Sharon L. 
R.; Karpe, Fredrik; Kee, Frank; Keeman, Renske; Kiemeney, Lambertus A.; Kitajima, Hidetoshi; Kluivers, Kirsten B.; Kocher, Thomas; Komulainen, Pirjo; Kontto, Jukka; Kooner, Jaspal S.; Kooperberg, Charles; Kovacs, Peter; Kriebel, Jennifer; Kuivaniemi, Helena; Küry, Sébastien; Kuusisto, Johanna; La Bianca, Martina; Laakso, Markku; Lakka, Timo A.; Lange, Ethan M.; Lange, Leslie A.; Langefeld, Carl D.; Langenberg, Claudia; Larson, Eric B.; Lee, I.-Te; Lehtimäki, Terho; Lewis, Cora E.; Li, Huaixing; Li, Jin; Li-Gao, Ruifang; Lin, Honghuang; Lin, Li-An; Lin, Xu; Lind, Lars; Lindström, Jaana; Linneberg, Allan; Liu, Yeheng; Liu, Yongmei; Lophatananon, Artitaya; Luan, Jian'an; Lubitz, Steven A.; Lyytikäinen, Leo-Pekka; Mackey, David A.; Madden, Pamela A. F.; Manning, Alisa K.; Männistö, Satu; Marenne, Gaëlle; Marten, Jonathan; Martin, Nicholas G.; Mazul, Angela L.; Meidtner, Karina; Metspalu, Andres; Mitchell, Paul; Mohlke, Karen L.; Mook-Kanamori, Dennis O.; Morgan, Anna; Morris, Andrew D.; Morris, Andrew P.; Müller-Nurasyid, Martina; Munroe, Patricia B.; Nalls, Mike A.; Nauck, Matthias; Nelson, Christopher P.; Neville, Matt; Nielsen, Sune F.; Nikus, Kjell; Njølstad, Pål R.; Nordestgaard, Børge G.; Ntalla, Ioanna; O'Connel, Jeffrey R.; Oksa, Heikki; Loohuis, Loes M. Olde; Ophoff, Roel A.; Owen, Katharine R.; Packard, Chris J.; Padmanabhan, Sandosh; Palmer, Colin N. A.; Pasterkamp, Gerard; Patel, Aniruddh P.; Pattie, Alison; Pedersen, Oluf; Peissig, Peggy L.; Peloso, Gina M.; Pennell, Craig E.; Perola, Markus; Perry, James A.; Perry, John R. B.; Person, Thomas N.; Pirie, Ailith; Polasek, Ozren; Posthuma, Danielle; Raitakari, Olli T.; Rasheed, Asif; Rauramaa, Rainer; Reilly, Dermot F.; Reiner, Alex P.; Renström, Frida; Ridker, Paul M.; Rioux, John D.; Robertson, Neil; Robino, Antonietta; Rolandsson, Olov; Rudan, Igor; Ruth, Katherine S.; Saleheen, Danish; Salomaa, Veikko; Samani, Nilesh J.; Sandow, Kevin; Sapkota, Yadav; Sattar, Naveed; Schmidt, Marjanka K.; Schreiner, Pamela J.; Schulze, Matthias B.; Scott, Robert A.; Segura-Lepe, Marcelo P.; Shah, Svati; Sim, Xueling; Sivapalaratnam, Suthesh; Small, Kerrin S.; Smith, Albert Vernon; Smith, Jennifer A.; Southam, Lorraine; Spector, Timothy D.; Speliotes, Elizabeth K.; Starr, John M.; Steinthorsdottir, Valgerdur; Stringham, Heather M.; Stumvoll, Michael; Surendran, Praveen; 't Hart, Leen M.; Tansey, Katherine E.; Tardif, Jean-Claude; Taylor, Kent D.; Teumer, Alexander; Thompson, Deborah J.; Thorsteinsdottir, Unnur; Thuesen, Betina H.; Tönjes, Anke; Tromp, Gerard; Trompet, Stella; Tsafantakis, Emmanouil; Tuomilehto, Jaakko; Tybjaerg-Hansen, Anne; Tyrer, Jonathan P.; Uher, Rudolf; Uitterlinden, André G.; Ulivi, Sheila; van der Laan, Sander W.; van der Leij, Andries R.; van Duijn, Cornelia M.; van Schoor, Natasja M.; van Setten, Jessica; Varbo, Anette; Varga, Tibor V.; Varma, Rohit; Edwards, Digna R. 
Velez; Vermeulen, Sita H.; Vestergaard, Henrik; Vitart, Veronique; Vogt, Thomas F.; Vozzi, Diego; Walker, Mark; Wang, Feijie; Wang, Carol A.; Wang, Shuai; Wang, Yiqin; Wareham, Nicholas J.; Warren, Helen R.; Wessel, Jennifer; Willems, Sara M.; Wilson, James G.; Witte, Daniel R.; Woods, Michael O.; Wu, Ying; Yaghootkar, Hanieh; Yao, Jie; Yao, Pang; Yerges-Armstrong, Laura M.; Young, Robin; Zeggini, Eleftheria; Zhan, Xiaowei; Zhang, Weihua; Zhao, Jing Hua; Zhao, Wei; Zheng, He; Zhou, Wei; Rotter, Jerome I.; Boehnke, Michael; Kathiresan, Sekar; McCarthy, Mark I.; Willer, Cristen J.; Stefansson, Kari; Borecki, Ingrid B.; Liu, Dajiang J.; North, Kari E.; Heard-Costa, Nancy L.; Pers, Tune H.; Lindgren, Cecilia M.; Oxvig, Claus; Kutalik, Zoltán; Rivadeneira, Fernando; Loos, Ruth J. F.; Frayling, Timothy M.; Hirschhorn, Joel N.; Deloukas, Panos; Lettre, Guillaume

    2017-01-01

    Height is a highly heritable, classic polygenic trait with approximately 700 common associated variants identified through genome-wide association studies so far. Here, we report 83 height-associated coding variants with lower minor-allele frequencies (in the range of 0.1-4.8%) and effects of up to

  6. On court interpreters' visibility

    DEFF Research Database (Denmark)

    Dubslaff, Friedel; Martinsen, Bodil

    of the service they receive. Ultimately, the findings will be used for training purposes. Future - and, for that matter, already practising - interpreters as well as the professional users of interpreters ought to take the reality of the interpreters' work in practice into account when assessing the quality...... on the interpreter's interpersonal role and, in particular, on signs of the interpreter's visibility, i.e. active co-participation. At first sight, the interpreting assignment in question seems to be a short and simple routine task which would not require the interpreter to deviate from the traditional picture...

  7. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
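
    A systematic LDGM encoder is little more than a sparse matrix multiplication. The sketch below (Python/NumPy) draws a random low-density parity part P with a fixed small column weight, forms the systematic generator G = [I | P], and encodes a message over GF(2); the dimensions and column weight are illustrative, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(42)

      def ldgm_parity(k, m, col_weight=3):
          """Sparse k x m parity part: each parity column checks col_weight bits."""
          P = np.zeros((k, m), dtype=np.uint8)
          for j in range(m):
              P[rng.choice(k, size=col_weight, replace=False), j] = 1
          return P

      def encode(u, P):
          """Systematic LDGM encoding: c = [u | u @ P mod 2]."""
          return np.concatenate([u, (u @ P) % 2])

      k, m = 16, 8
      P = ldgm_parity(k, m)
      u = rng.integers(0, 2, size=k, dtype=np.uint8)
      c = encode(u, P)
      print("message:", u)
      print("codeword:", c, "rate:", k / (k + m))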

  8. RNA-Seq of human neurons derived from iPS cells reveals candidate long non-coding RNAs involved in neurogenesis and neuropsychiatric disorders.

    Directory of Open Access Journals (Sweden)

    Mingyan Lin

    Genome-wide expression analysis using next generation sequencing (RNA-Seq) provides an opportunity for in-depth molecular profiling of fundamental biological processes, such as cellular differentiation and malignant transformation. Differentiating human neurons derived from induced pluripotent stem cells (iPSCs) provide an ideal system for RNA-Seq since defective neurogenesis caused by abnormalities in transcription factors, DNA methylation, and chromatin modifiers lies at the heart of some neuropsychiatric disorders. As a preliminary step towards applying next generation sequencing to neurons derived from patient-specific iPSCs, we have carried out an RNA-Seq analysis on control human neurons. Dramatic changes in the expression of coding genes, long non-coding RNAs (lncRNAs), pseudogenes, and splice isoforms were seen during the transition from pluripotent stem cells to early differentiating neurons. A number of genes that undergo radical changes in expression during this transition include candidates for schizophrenia (SZ), bipolar disorder (BD) and autism spectrum disorders (ASD) that function as transcription factors and chromatin modifiers, such as POU3F2 and ZNF804A, and genes coding for cell adhesion proteins implicated in these conditions, including NRXN1 and NLGN1. In addition, a number of novel lncRNAs were found to undergo dramatic changes in expression, one of which is HOTAIRM1, a regulator of several HOXA genes during myelopoiesis. The increase we observed in differentiating neurons suggests a role in neurogenesis as well. Finally, several lncRNAs that map near SNPs associated with SZ in genome wide association studies also increase during neuronal differentiation, suggesting that these novel transcripts may be abnormally regulated in a subgroup of patients.

  9. Do Interpreters Indeed Have Superior Working Memory in Interpreting

    Institute of Scientific and Technical Information of China (English)

    于飞

    2012-01-01

    With the frequent communication between China and Western countries in the fields of economy, politics and culture, interpreting is becoming more and more important to people in all walks of life. This paper aims to test the author's hypothesis that professional interpreters have short-term memory similar to that of unprofessional interpreters, but superior working memory. After a review of the literature concerning consecutive interpreting, short-term memory and working memory, the experiments are designed and their analyses are described.

  10. Visual sexual stimuli – cue or reward? A key for interpreting brain imaging studies on human sexual behaviors

    Directory of Open Access Journals (Sweden)

    Mateusz Gola

    2016-08-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS) for human sexuality studies, including the emerging field of research on compulsive sexual behaviors. A central question in this field is whether behaviors such as extensive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned (cue) and unconditioned (reward) stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as cues (conditioned stimuli) or rewards (unconditioned stimuli). Here we present our own perspective, which is that in most laboratory settings VSS play the role of reward (unconditioned stimuli), as evidenced by: (1) the experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similar to that for other rewarding stimuli such as money; and/or (4) conditioning for cues (CS) predictive for. We hope that this perspective paper will initiate a scientific discussion on this important and overlooked topic and increase attention to appropriate interpretations of the results of human neuroimaging studies using VSS.

  11. Expression of a novel non-coding mitochondrial RNA in human proliferating cells.

    Science.gov (United States)

    Villegas, Jaime; Burzio, Veronica; Villota, Claudio; Landerer, Eduardo; Martinez, Ronny; Santander, Marcela; Martinez, Rodrigo; Pinto, Rodrigo; Vera, María I; Boccardo, Enrique; Villa, Luisa L; Burzio, Luis O

    2007-01-01

    Previously, we reported the presence in mouse cells of a mitochondrial RNA which contains an inverted repeat (IR) of 121 nucleotides (nt) covalently linked to the 5' end of the mitochondrial 16S RNA (16S mtrRNA). Here, we report the structure of an equivalent transcript of 2374 nt which is over-expressed in human proliferating cells but not in resting cells. The transcript contains a hairpin structure comprising an IR of 815 nt linked to the 5' end of the 16S mtrRNA and forming a long double-stranded structure or stem and a loop of 40 nt. The stem is resistant to RNase A and can be detected and isolated after digestion with the enzyme. This novel transcript is a non-coding RNA (ncRNA) and several lines of evidence suggest that the transcript is synthesized in mitochondria. The expression of this transcript can be induced in resting lymphocytes stimulated with phytohaemagglutinin (PHA). Moreover, aphidicolin treatment of DU145 cells reversibly blocks proliferation and expression of the transcript. If the drug is removed, the cells resume proliferation and over-express the ncmtRNA. These results suggest that the expression of the ncmtRNA correlates with the replicative state of the cell and it may play a role in cell proliferation.

  12. Medical students' cognitive load in volumetric image interpretation : Insights from human-computer interaction and eye movements

    NARCIS (Netherlands)

    Stuijfzand, Bobby G.; Van Der Schaaf, Marieke F.; Kirschner, Femke C.; Ravesloot, Cécile J.; Van Der Gijp, Anouk; Vincken, Koen L.

    2016-01-01

    Medical image interpretation is moving from 2D to volumetric images, thereby changing the cognitive and perceptual processes involved. This is expected to affect medical students' experienced cognitive load while learning image interpretation skills. With two studies this explorative

  13. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  14. RETHINKING RESIDENTIAL MOBILITY: AN INTERDISCIPLINARY INTERPRETATION

    Directory of Open Access Journals (Sweden)

    Roderick J. Lawrence

    2008-03-01

    Since the 1950s, academics and professionals have proposed a number of disciplinary and sector-based interpretations of why, when and where households move or choose to stay in the same housing unit at different periods of the life cycle, and especially the family cycle. This article challenges studies that analyse only one set of factors. The article stems from a synthesis of 20 years of research by the author, who has an interdisciplinary training in the broad field of people-environment relations. First, it reviews some key concepts related to human ecology, including housing, culture, identity and cultivation. Then it considers how these concepts can be applied to interpret residential mobility using an interdisciplinary approach. An empirical case study of residential mobility in Geneva, Switzerland is presented in order to show how this approach can help improve our understanding of the motives people have regarding the wish to stay in their residence or to move elsewhere.

  15. [Towards a new Tunisian Medical Code of Deontology].

    Science.gov (United States)

    Aissaoui, Abir; Haj Salem, Nidhal; Chadly, Ali

    2010-06-01

    The Medical Code of Deontology is a legal text including the physician's duties towards his patients, colleagues, auxiliaries and the community. Considering scientific, legal and social changes, a deontology code should be revised periodically. The first Tunisian Medical Code of Deontology (TMCD) was promulgated in 1973 and abrogated in 1993 by the new Code. This version has never been reviewed and does not seem to fit the current conditions of medical practice. The TMCD does not contain texts referring to information given to the patient, pain control, palliative care and management of the end of life, or protection of medical data. Furthermore, the TMCD does not include rules related to tissue and organ transplantation and medically assisted human reproduction in accordance with Tunisian legal texts. In this paper, we aim to analyze the insufficiencies of the TMCD and to suggest modifications in order to update it.

  16. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  17. Molecular analysis of human argininosuccinate lyase: Mutant characterization and alternative splicing of the coding region

    International Nuclear Information System (INIS)

    Walker, D.C.; McCloskey, D.A.; Simard, L.R.; McInnes, R.R.

    1990-01-01

    Argininosuccinic acid lyase (ASAL) deficiency is a clinically heterogeneous autosomal recessive urea cycle disorder. The authors previously established by complementation analysis that 29 ASAL-deficient patients have heterogeneous mutations in a single gene. To prove that the ASAL structural gene is the affected locus, they sequenced polymerase chain reaction-amplified ASAL cDNA of a representative mutant from the single complementation group. Fibroblast strain 944, from a late-onset patient who was the product of a consanguineous mating, had only a single base-pair change in the coding region, a C-283→T transition at a CpG dinucleotide in exon 3. This substitution converts Arg-95 to Cys (R95C), occurs in a stretch of 13 residues that is identical in yeast and human ASAL, and was present in both of the patient's alleles but not in 14 other mutant or 10 normal alleles. They observed that amplified cDNA from mutant 944 and normal cells (liver, keratinocytes, lymphoblasts, and fibroblasts) contained, in addition to the expected 5' 513-base-pair band, a prominent 318-base-pair ASAL band formed by the splicing of exon 2 from the transcript. The short transcript maintains the ASAL reading frame but removes Lys-51, a residue that may be essential for catalysis, since it binds the argininosuccinate substrate. They conclude (i) that the identification of the R95C mutation in strain 944 demonstrates that virtually all ASAL deficiency results from defects in the ASAL structural gene and (ii) that minor alternative splicing of the coding region occurs at the ASAL locus.

  18. Dosimetry and health effects self-teaching curriculum: illustrative problems to supplement the user's manual for the Dosimetry and Health Effects Computer Code

    International Nuclear Information System (INIS)

    Runkle, G.E.; Finley, N.C.

    1983-03-01

    This document contains a series of sample problems for the Dosimetry and Health Effects Computer Code to be used in conjunction with the user's manual (Runkle and Cranwell, 1982) for the code. This code was developed at Sandia National Laboratories for the Risk Methodology for Geologic Disposal of Radioactive Waste program (NRC FIN A-1192). The purpose of this document is to familiarize the user with the code, its capabilities, and its limitations. When the user has finished reading this document, he or she should be able to prepare data input for the Dosimetry and Health Effects code and have some insights into interpretation of the model output.

  19. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
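
    To make the discrete-latent setting concrete, the sketch below (Python/NumPy) performs exact MAP inference for a tiny linear generative model whose latents take values in a small finite set under a sparse prior. The paper itself trains such models with a truncated expectation-maximization procedure; this brute-force enumeration is only an illustration of the model class, with all dimensions and prior values chosen arbitrarily.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(0)

      D = rng.standard_normal((8, 4))          # dictionary: 8-dim data, 4 latents
      values = np.array([-1.0, 0.0, 1.0])      # finite latent alphabet
      prior = {-1.0: 0.1, 0.0: 0.8, 1.0: 0.1}  # sparse prior: zero most likely
      sigma = 0.1

      s_true = np.array([0.0, 1.0, 0.0, -1.0])  # sparse ground-truth latents
      y = D @ s_true + sigma * rng.standard_normal(8)

      def neg_log_post(s):
          recon = np.sum((y - D @ s) ** 2) / (2 * sigma ** 2)
          return recon - sum(np.log(prior[v]) for v in s)

      # Exact MAP by enumerating all 3^4 = 81 discrete latent configurations.
      best = min((np.array(s) for s in product(values, repeat=4)),
                 key=neg_log_post)
      print("true :", s_true)
      print("MAP  :", best)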

  20. Genre and Interpretation

    DEFF Research Database (Denmark)

    Auken, Sune

    2015-01-01

    Despite the immensity of genre studies as well as studies in interpretation, our understanding of the relationship between genre and interpretation is sketchy at best. The article attempts to unravel some of the intricacies of that relationship through an analysis of the generic interpretation carried...

  1. Vector and Raster Data Storage Based on Morton Code

    Science.gov (United States)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in a geographic information system environment, and there is still no proper way to solve it. This article proposes a method for interpreting vector data and raster data together. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the X/Y coordinates of the vector data to Morton codes. This method stores vector and raster data in the Oracle database and uses the Morton code, instead of row/column numbers and X/Y coordinates, to mark the position information of vector and raster data. Using Morton codes to mark geographic information lets data storage make full use of the available space, makes simultaneous analysis of vector and raster data more efficient, and makes the visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector data and raster data at the same time.
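
    The row/column-to-Morton-code conversion at the heart of this scheme is a bit-interleaving operation. A minimal Python version is sketched below; the article's own implementation is in Visual C++ against an Oracle database, so this standalone function only illustrates the encoding itself.

      def morton_encode(row, col, bits=16):
          """Interleave the bits of row and column into a single Morton (Z-order) key."""
          code = 0
          for i in range(bits):
              code |= ((row >> i) & 1) << (2 * i + 1)   # odd positions: row bits
              code |= ((col >> i) & 1) << (2 * i)       # even positions: column bits
          return code

      def morton_decode(code, bits=16):
          row = col = 0
          for i in range(bits):
              row |= ((code >> (2 * i + 1)) & 1) << i
              col |= ((code >> (2 * i)) & 1) << i
          return row, col

      # A raster cell and a vector vertex mapped into the same 1D key space.
      print(morton_encode(3, 5))                   # -> 27
      print(morton_decode(morton_encode(3, 5)))    # -> (3, 5)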

  2. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?

    Science.gov (United States)

    Heilbron, Micha; Chait, Maria

    2017-08-04

    Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  3. A genetic code alteration is a phenotype diversity generator in the human pathogen Candida albicans.

    Directory of Open Access Journals (Sweden)

    Isabel Miranda

    BACKGROUND: The discovery of genetic code alterations and expansions in both prokaryotes and eukaryotes abolished the hypothesis of a frozen and universal genetic code and exposed unanticipated flexibility in codon and amino acid assignments. It is now clear that codon identity alterations involve sense and non-sense codons and can occur in organisms with complex genomes and proteomes. However, the biological functions, the molecular mechanisms of evolution and the diversity of genetic code alterations remain largely unknown. In various species of the genus Candida, the leucine CUG codon is decoded as serine by a unique serine tRNA that contains a leucine 5'-CAG-3' anticodon (tRNA(Ser)CAG). We are using this codon identity redefinition as a model system to elucidate the evolution of genetic code alterations. METHODOLOGY/PRINCIPAL FINDINGS: We have reconstructed the early stages of the Candida genetic code alteration by engineering tRNAs that partially reverted the identity of serine CUG codons back to their standard leucine meaning. Such genetic code manipulation had profound cellular consequences, as it exposed important morphological variation, altered gene expression, re-arranged the karyotype, increased cell-cell adhesion and increased secretion of hydrolytic enzymes. CONCLUSION/SIGNIFICANCE: Our study provides the first experimental evidence for an important role of genetic code alterations as generators of phenotypic diversity of high selective potential and supports the hypothesis that they speed up evolution of new phenotypes.
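
    The consequence of the CUG reassignment can be shown with a toy translation table. The sketch below (Python, RNA codons, truncated to the handful of codons used in the demo sequence) translates the same open reading frame under the standard code and under a Candida-style table in which CUG is read as serine rather than leucine.

      STANDARD = {"AUG": "M", "CUG": "L", "UCU": "S", "GCU": "A", "UAA": "*"}
      CANDIDA = dict(STANDARD, CUG="S")     # CUG reassigned: leucine -> serine

      def translate(rna, table):
          protein = []
          for i in range(0, len(rna) - 2, 3):
              aa = table[rna[i:i + 3]]
              if aa == "*":                 # stop codon ends translation
                  break
              protein.append(aa)
          return "".join(protein)

      orf = "AUGCUGGCUUCUCUGUAA"
      print("standard:", translate(orf, STANDARD))   # -> MLASL
      print("candida :", translate(orf, CANDIDA))    # -> MSASS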

  4. ORION: a computer code for evaluating environmental concentrations and dose equivalent to human organs or tissue from airborne radionuclides

    International Nuclear Information System (INIS)

    Shinohara, K.; Nomura, T.; Iwai, M.

    1983-05-01

    The computer code ORION has been developed to evaluate the environmental concentrations and the dose equivalent to human organs or tissue from airborne radionuclides released from multiple nuclear installations. The modified Gaussian plume model is applied to calculate the dispersion of the radionuclides. Gravitational settling, dry deposition, precipitation scavenging and radioactive decay are considered to be the causes of depletion and deposition on the ground or on vegetation. ORION is written in the FORTRAN IV language and can be run on IBM 360, 370, 303X, 43XX and FACOM M-series computers. 8 references, 6 tables
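
    ORION's dispersion step is based on a modified Gaussian plume model. As a hedged illustration of the standard, unmodified plume equation with ground reflection, not ORION's FORTRAN IV implementation, and with purely illustrative parameter values, a short Python sketch:

        import math

        def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
            """Ground-reflected Gaussian plume concentration.

            Q: source strength (Bq/s); u: mean wind speed (m/s);
            sigma_y, sigma_z: dispersion lengths (m) at the downwind distance
            of interest; y: crosswind offset (m); z: receptor height (m);
            H: effective release height (m).
            """
            crosswind = math.exp(-y**2 / (2 * sigma_y**2))
            vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                        + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source
            return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

        # Illustrative numbers only: receptor on the plume axis at ground level.
        print(gaussian_plume(Q=1e9, u=5.0, sigma_y=80.0, sigma_z=40.0,
                             y=0.0, z=0.0, H=60.0))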

  5. Engineering Definitional Interpreters

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Ramsay, Norman; Larsen, Bradford

    2013-01-01

    A definitional interpreter should be clear and easy to write, but it may run 4--10 times slower than a well-crafted bytecode interpreter. In a case study focused on implementation choices, we explore ways of making definitional interpreters faster without expending much programming effort. We imp...
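
    A definitional interpreter evaluates an abstract syntax tree by direct recursion, one case per syntactic form, which is what makes it clear and easy to write. A minimal Python sketch of the idea (not taken from the paper, whose case study covers far more implementation choices):

        def eval_exp(exp, env):
            """Definitional interpreter: one recursive case per syntactic form."""
            tag = exp[0]
            if tag == "lit":                       # ("lit", n)
                return exp[1]
            if tag == "var":                       # ("var", name)
                return env[exp[1]]
            if tag == "add":                       # ("add", e1, e2)
                return eval_exp(exp[1], env) + eval_exp(exp[2], env)
            if tag == "let":                       # ("let", name, rhs, body)
                _, name, rhs, body = exp
                return eval_exp(body, {**env, name: eval_exp(rhs, env)})
            raise ValueError("unknown form: " + tag)

        # let x = 2 + 3 in x + x  ==>  10
        prog = ("let", "x", ("add", ("lit", 2), ("lit", 3)),
                ("add", ("var", "x"), ("var", "x")))
        assert eval_exp(prog, {}) == 10

    The 4--10x slowdown the abstract cites is plausibly due in part to this per-node dispatch and environment handling, which a bytecode interpreter amortizes by first compiling to a flat instruction stream.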

  6. The interpretation of forensic biochemical expert tests performed on human body fluids: scientific-legal analysis in the investigation of sexual offenses

    International Nuclear Information System (INIS)

    Chaves Carballo, Diana

    2014-01-01

    The contributions of science and technology now touch every area of human life, and the various disciplines of knowledge meet one another in legal forensics. It is therefore increasingly important that law enforcement agents be interdisciplinary professionals whose knowledge extends beyond the law, enabling them to make the most of scientific evidence in judicial proceedings. Among the natural sciences applied to law, forensic biochemistry has contributed extremely relevant evidence to the investigation of sexual offenses, so much so that the Organismo de Investigacion Judicial of Costa Rica maintains specialized sections in this discipline within its Departamento de Laboratorios de Ciencias Forenses. A variety of presumptive and confirmatory tests are performed for the presence of biological fluids, for sexually transmitted diseases, and for DNA identification by genetic markers. Updated information is given on the correct interpretation of the forensic biochemical analyses available for identifying semen, blood and human saliva in the investigation of sexual offenses, using language that is both scientific and legal so that the information can be used fully in criminal proceedings. The main objective has been to interpret, legally and scientifically, forensic biochemical expert evidence performed on human body fluids during the investigation of sexual offenses. A legal, doctrinal and scientific review is presented, together with a compilation of related jurisprudence and an analysis of criminology reports of the Seccion de Bioquimica of the Departamento de Laboratorios Forenses of the Organismo de Investigacion Judicial issued during the investigation of sexual offenses. Two types of tests are available for the identification of biological fluids, each with a different evidentiary weight. In addition, it has become clear, from the lexicon employed in forensic biochemical opinions, that to make a proper

  7. Highly conserved non-coding sequences are associated with vertebrate development.

    Directory of Open Access Journals (Sweden)

    Adam Woolfe

    2005-01-01

    Full Text Available In addition to protein coding sequence, the human genome contains a significant amount of regulatory DNA, the identification of which is proving somewhat recalcitrant to both in silico and functional methods. An approach that has been used with some success is comparative sequence analysis, whereby equivalent genomic regions from different organisms are compared in order to identify both similarities and differences. In general, similarities in sequence between highly divergent organisms imply functional constraint. We have used a whole-genome comparison between humans and the pufferfish, Fugu rubripes, to identify nearly 1,400 highly conserved non-coding sequences. Given the evolutionary divergence between these species, it is likely that these sequences are found in, and furthermore are essential to, all vertebrates. Most, and possibly all, of these sequences are located in and around genes that act as developmental regulators. Some of these sequences are over 90% identical across more than 500 bases, being more highly conserved than coding sequence between these two species. Despite this, we cannot find any similar sequences in invertebrate genomes. In order to begin to functionally test this set of sequences, we have used a rapid in vivo assay system using zebrafish embryos that allows tissue-specific enhancer activity to be identified. Functional data are presented for highly conserved non-coding sequences associated with four unrelated developmental regulators (SOX21, PAX6, HLXB9, and SHH), in order to demonstrate the suitability of this screen to a wide range of genes and expression patterns. Of 25 sequence elements tested around these four genes, 23 show significant enhancer activity in one or more tissues. We have identified a set of non-coding sequences that are highly conserved throughout vertebrates. They are found in clusters across the human genome, principally around genes that are implicated in the regulation of development.

  8. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    Science.gov (United States)

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  9. In vitro cytotoxicity of Manville Code 100 glass fibers: Effect of fiber length on human alveolar macrophages

    Directory of Open Access Journals (Sweden)

    Jones William

    2006-03-01

    Full Text Available Abstract Background Synthetic vitreous fibers (SVFs) are inorganic noncrystalline materials widely used in residential and industrial settings for insulation, filtration, and reinforcement purposes. SVFs conventionally include three major categories: fibrous glass, rock/slag/stone (mineral wool), and ceramic fibers. Previous in vitro studies from our laboratory demonstrated length-dependent cytotoxic effects of glass fibers on rat alveolar macrophages which were possibly associated with incomplete phagocytosis of fibers ≥ 17 μm in length. The purpose of this study was to examine the influence of fiber length on primary human alveolar macrophages, which are larger in diameter than rat macrophages, using length-classified Manville Code 100 glass fibers (8, 10, 16, and 20 μm). It was hypothesized that complete engulfment of fibers by human alveolar macrophages could decrease fiber cytotoxicity; i.e. shorter fibers that can be completely engulfed might not be as cytotoxic as longer fibers. Human alveolar macrophages, obtained by segmental bronchoalveolar lavage of healthy, non-smoking volunteers, were treated with three different concentrations (determined by fiber number) of the sized fibers in vitro. Cytotoxicity was assessed by monitoring cytosolic lactate dehydrogenase release and loss of function as indicated by a decrease in zymosan-stimulated chemiluminescence. Results Microscopic analysis indicated that human alveolar macrophages completely engulfed glass fibers of the 20 μm length. All fiber length fractions tested exhibited equal cytotoxicity on a per fiber basis, i.e. increasing lactate dehydrogenase and decreasing chemiluminescence in the same concentration-dependent fashion. Conclusion The data suggest that due to the larger diameter of human alveolar macrophages, compared to rat alveolar macrophages, complete phagocytosis of longer fibers can occur with the human cells. Neither incomplete phagocytosis nor length-dependent toxicity was

  10. Evaluation of total workstation CT interpretation quality: a single-screen pilot study

    Science.gov (United States)

    Beard, David V.; Perry, John R.; Muller, Keith E.; Misra, Ram B.; Brown, P.; Hemminger, Bradley M.; Johnston, Richard E.; Mauro, J. Matthew; Jaques, P. F.; Schiebler, M.

    1991-07-01

    An interpretation report, generated with an electronic viewbox, is affected by two factors: image quality, which encompasses what can be seen on the display, and computer-human interaction (CHI), which accounts for the cognitive load effect of locating, moving, and manipulating images with the workstation controls. While a number of subject experiments have considered image quality, only recently has the effect of CHI on total interpretation quality been measured. This paper presents the results of a pilot study conducted to evaluate the total interpretation quality of the FilmPlane2.2 radiology workstation for patient folders containing single forty-slice CT studies. First, radiologists interpreted cases and dictated reports using FilmPlane2.2. Requisition forms were provided. Film interpretation was provided by the original clinical report and interpretation forms generated from a previous experiment. Second, an evaluator developed a list of findings for each case based on those listed in all the reports for each case and then evaluated each report for its response on each finding. Third, the reports were compared to determine how well they agreed with one another. Interpretation speed and observation data were also gathered.

  11. Progressive changes in non-coding RNA profile in leucocytes with age

    Science.gov (United States)

    Muñoz-Culla, Maider; Irizar, Haritz; Gorostidi, Ana; Alberro, Ainhoa; Osorio-Querejeta, Iñaki; Ruiz-Martínez, Javier; Olascoaga, Javier; de Munain, Adolfo López; Otaegui, David

    2017-01-01

    It has been observed that immune cell deterioration occurs in the elderly, as well as a chronic low-grade inflammation called inflammaging. These cellular changes must be driven by numerous changes in gene expression and, in fact, both protein-coding and non-coding RNA expression alterations have been observed in peripheral blood mononuclear cells from elderly people. In the present work we have studied the expression of small non-coding RNAs (microRNAs and small nucleolar RNAs, snoRNAs) from healthy individuals aged 24 to 79 years. We have observed that the expression of 69 non-coding RNAs (56 microRNAs and 13 snoRNAs) changes progressively with chronological age. According to our results, the age range from 47 to 54 is critical given that it is the period when the expression trend (increasing or decreasing) of age-related small non-coding RNAs is most pronounced. Furthermore, age-related miRNAs regulate genes that are involved in immune, cell cycle and cancer-related processes, which have already been associated with human aging. Therefore, human aging could be studied as a result of progressive molecular changes, and different age ranges should be analysed to cover the whole aging process. PMID:28448962

  12. PCCE-A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimates), presented here, introduces some novelties. In fact, it can simulate with an analytical approach both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on lumped parameters, has been developed and is described here in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.

  13. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes and their primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  14. Non-Coding Transcript Heterogeneity in Mesothelioma: Insights from Asbestos-Exposed Mice.

    Science.gov (United States)

    Felley-Bosco, Emanuela; Rehrauer, Hubert

    2018-04-11

    Mesothelioma is an aggressive, rapidly fatal cancer, and a better understanding of its molecular heterogeneity may help in devising more efficient therapeutic strategies. Non-coding RNAs represent the larger part of the transcriptome, but their contribution to disease is not yet fully understood. We used recently obtained RNA-seq data from asbestos-exposed mice and performed data mining of publicly available datasets in order to evaluate how non-coding RNAs contribute to mesothelioma heterogeneity. Nine non-coding RNAs are specifically elevated in mesothelioma tumors and contribute to human mesothelioma heterogeneity. Because some of them have known oncogenic properties, this study supports the concept of non-coding RNAs as cancer progenitor genes.

  15. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism and can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  16. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  17. The prognostic potential and carcinogenesis of long non-coding RNA TUG1 in human cholangiocarcinoma.

    Science.gov (United States)

    Xu, Yi; Leng, Kaiming; Li, Zhenglong; Zhang, Fumin; Zhong, Xiangyu; Kang, Pengcheng; Jiang, Xingming; Cui, Yunfu

    2017-09-12

    Cholangiocarcinoma (CCA) is a fatal disease with increasing worldwide incidence and is characterized by poor prognosis due to its poor response to conventional chemotherapy or radiotherapy. Long non-coding RNAs (lncRNAs) play key roles in multiple human cancers, including CCA. The cancer-progression-related lncRNA taurine up-regulated gene 1 (TUG1) has been reported to be involved in human carcinomas. However, the impact of TUG1 in CCA is unclear. The aim of this study was to explore the expression pattern of TUG1 and evaluate its clinical significance as well as prognostic potential in CCA. In addition, the functional roles of TUG1, including cell proliferation, apoptosis, migration, invasion and epithelial-mesenchymal transition (EMT), were evaluated after TUG1 silencing. Our data demonstrated up-regulation of TUG1 in both CCA tissues and cell lines. Moreover, overexpression of TUG1 is linked to tumor size (p = 0.005), TNM stage (p = 0.013), postoperative recurrence (p = 0.036) and overall survival (p = 0.010) of CCA patients. Furthermore, down-regulation of TUG1 following RNA silencing reduced cell growth and increased apoptosis in CCA cells. Additionally, TUG1 suppression inhibited metastatic potential in vitro by reversing EMT. Overall, our results suggest that TUG1 may be a rational CCA-related prognostic factor and therapeutic target.

  18. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
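
    For reference, the stabilizer generators of Kitaev's toric code, the anchor point of this classification, take the standard textbook form (not quoted from the paper): on a square lattice with a qubit on every edge,

        % Star (vertex) and plaquette (face) stabilizers of the toric code:
        A_v = \prod_{e \ni v} X_e, \qquad B_p = \prod_{e \in \partial p} Z_e,

    and the code space is the simultaneous +1 eigenspace of all A_v and B_p.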

  19. POST: a postprocessor computer code for producing three-dimensional movies of two-phase flow in a reactor vessel

    International Nuclear Information System (INIS)

    Taggart, K.A.; Liles, D.R.

    1977-08-01

    The development of the TRAC computer code for analysis of LOCAs in light-water reactors involves the use of a three-dimensional (r-theta-z), two-fluid hydrodynamics model to describe the two-phase flow of steam and water through the reactor vessel. One of the major problems involved in interpreting results from this code is the presentation of three-dimensional flow patterns. The purpose of the report is to present a partial solution to this data display problem. A first version of a code which produces three-dimensional movies of flow in the reactor vessel has been written and debugged. This code (POST) is used as a postprocessor in conjunction with a stand-alone three-dimensional two-phase hydrodynamics code (CYLTF) which is a test bed for the three-dimensional algorithms to be used in TRAC.

  20. Ethical Principles of Psychologists and Code of Conduct.

    Science.gov (United States)

    American Psychologist, 2002

    2002-01-01

    Describes the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct, focusing on introduction and applicability; preamble; general principles; and ethical standards (resolving ethical issues, competence, human relations, privacy and confidentiality, advertising and other public statements, record keeping and…

  1. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. The EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that the EDW code performs much better than the Hadamard and MFH codes.

  2. Perception and interpretation of internet information: accessibility, validity, and trust

    NARCIS (Netherlands)

    Bouwhuis, D.G.

    2006-01-01

    The way in which humans deal with physical objects has been formed by extensive interaction and, according to the theory of embodied cognition, has led to conceptualization and interpretation that is grounded in physical interaction of the body with elements in the environment. Digital objects are

  3. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  4. THEMIS-4: a coherent punctual and multigroup cross section library for Monte Carlo and SN codes from ENDF/B4

    International Nuclear Information System (INIS)

    Dejonghe, G.; Gonnord, J.; Monnier, A.; Nimal, J.C.

    1983-05-01

    The THEMIS cross section processing system has been developed to produce punctual data for Monte Carlo codes and coherent multigroup data for SN codes from ENDF/B. The THEMIS-4 data base has been generated from ENDF/B4 using the system and can be accessed by the 3-D Monte Carlo system TRIPOLI-2 and by the SN codes ANISN and DOT. An interpretation of the ORNL fusion shielding benchmark is presented.

  5. The Ubiquity of Humanity and Textuality in Human Experience

    Directory of Open Access Journals (Sweden)

    Daihyun Chung

    2015-11-01

    Full Text Available The so-called “crisis of the humanities” can be understood in terms of an asymmetry between the natural and social sciences on the one hand and the humanities on the other. While the sciences approach topics related to human experience in quantificational or experimental terms, the humanities turn to ancient, canonical, and other texts in the search for truths about human experience. As each approach has its own unique limitations, it is desirable to overcome or remove the asymmetry between them. The present article seeks to do just that by advancing and defending the following two claims: (a) that humanity is ubiquitous wherever language is used; and (b) that anything that can be experienced by humans is in need of an interpretation. Two arguments are presented in support of these claims. The first argument concerns the nature of questions, which are one of the fundamental marks or manifestations of human language. All questions are ultimately attempts to find meanings or interpretations of what is presented. As such, in questioning phenomena, one seeks to transcend the negative space or oppression of imposed structures; in doing so, one reveals one’s humanity. Second, all phenomena are textual in nature: that which astrophysicists find in distant galaxies or which cognitive neuroscientists find in the structures of the human brain are no less in need of interpretation than the dialogues of Plato or the poems of Homer. Texts are ubiquitous. The implications of these two arguments are identified and discussed in this article. In particular, it is argued that the ubiquity of humanity and textuality points to a view of human nature that is neither individualistic nor collectivist but rather integrational in suggesting that the realization of oneself is inseparable from the realization of others.

  6. Localized Smart-Interpretation

    Science.gov (United States)

    Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom

    2014-05-01

    The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model the geologist needs to rely on the geophysical data. The problem, however, is that the amount of geophysical data is in many cases so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that a lot of the information available from the geophysical surveys is unexploited, which is a problem because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that (1) allows all geophysical data to be considered when building the geological model, (2) is fast, and (3) allows quantification of the geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologist interprets, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc. The parameter d reflects an actual interpretation, such as the depth to the base of a ground water reservoir. First we infer a statistical model f(d,m) by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds interpreting, the number of interpreted datapoints from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f
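
    The abstract does not name the statistical model behind f(d,m); as a hedged stand-in, a k-nearest-neighbour regression can play its role, mapping quantified information m (hypothetical feature values below) to the expert's interpreted value d:

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        # Hypothetical training pairs: features m per expert pick (e.g. a
        # geophysical attribute and surface elevation) and interpreted depth d.
        m_train = np.array([[120.0, 33.1], [95.0, 30.4], [140.0, 35.8]])
        d_train = np.array([42.0, 37.5, 48.2])

        f = KNeighborsRegressor(n_neighbors=2).fit(m_train, d_train)
        print(f.predict([[110.0, 32.0]]))   # depth proposed at a new location

    As more expert picks accumulate, the training set grows and the fitted f(d,m) tracks the expert's interpolation style more closely, which is the behaviour the abstract describes.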

  7. Molecular cloning and construction of the coding region for human acetylcholinesterase reveals a G + C-rich attenuating structure

    International Nuclear Information System (INIS)

    Soreq, H.; Ben-Aziz, R.; Prody, C.A.; Seidman, S.; Gnatt, A.; Neville, L.; Lieman-Hurwitz, J.; Lev-Lehman, E.; Ginzberg, D.; Lapidot-Lifson, Y.; Zakut, H.

    1990-01-01

    To study the primary structure of human acetylcholinesterase and its gene expression and amplification, cDNA libraries from human tissues expressing oocyte-translatable AcChoEase mRNA were constructed and screened with labeled oligodeoxynucleotide probes. Several cDNA clones were isolated that encoded a polypeptide with ≥50% identically aligned amino acids to Torpedo AcChoEase and human butyrylcholinesterase. However, these cDNA clones were all truncated within a 300-nucleotide-long G + C-rich region with a predicted pattern of secondary structure having a high Gibbs free energy downstream from the expected 5' end of the coding region. Screening of a genomic DNA library revealed the missing 5' domain. When ligated to the cDNA and constructed into a transcription vector, this sequence encoded a synthetic mRNA translated in microinjected oocytes into catalytically active AcChoEase with marked preference for acetylthiocholine over butyrylthiocholine as a substrate, susceptibility to inhibition by the AcChoEase inhibitor BW284C51, and resistance to the AcChoEase inhibitor tetraisopropylpyrophosphoramide. Blot hybridization of genomic DNA from different individuals carrying amplified AcChoEase genes revealed variable intensities and restriction patterns with probes from the regions upstream and downstream from the predicted G + C-rich structure. Thus, the human AcChoEase gene includes a putative G + C-rich attenuator domain and is subject to structural alterations in cases of AcChoEase gene amplification

  8. CT colonography: accuracy of initial interpretation by radiographers in routine clinical practice

    International Nuclear Information System (INIS)

    Burling, D.; Wylie, P.; Gupta, A.; Illangovan, R.; Muckian, J.; Ahmad, R.; Marshall, M.; Taylor, S.A.

    2010-01-01

    Aim: To investigate the performance of computer-assisted detection (CAD)-assisted radiographers interpreting computed tomography colonography (CTC) in routine practice. Materials and methods: Three hundred and three consecutive symptomatic patients underwent CTC. Examinations were double-read by trained radiographers using primary two-dimensional/three-dimensional (2D/3D) analysis supplemented by 'second reader' CAD. Radiographers recorded colonic neoplasia, interpretation times, and patient management strategy code (S0, inadequate; S1, normal; S2, 6-9 mm polyp; S3, ≥10 mm polyp; S4, cancer; S5, diverticular stricture) for each examination. Strategies were compared to the reference standard using the kappa statistic, interpretation times using the paired t-test, and learning curves using logistic regression and Pearson's correlation coefficient. Results: Of 303 examinations, 69 (23%) were abnormal. CAD-assisted radiographers detected 17/17 (100%) cancers, 21/28 (72%) polyps ≥10 mm and 42/60 (70%) 6-9 mm polyps. The overall agreement between radiographers and the reference management strategy was good (kappa 0.72; CI: 0.65, 0.78) with agreement for S1 strategy in 189/211 (90%) exams; S2 in 19/27 (70%); S3 in 12/19 (63%); S4 in 17/17 (100%); S5 in 5/6 (83%). The mean interpretation time was 17 min (SD = 11) compared with 8 min (SD = 3.5) for radiologists. There was no learning curve for recording correct strategies (OR 0.88; p = 0.12) but a significant reduction in interpretation times, mean 14 and 31 min (last/first 50 exams; -0.46; p < 0.001). Conclusion: Routine CTC interpretation by radiographers is effective for initial triage of patients with cancer, but independent reporting is currently not recommended.
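
    Agreement here is summarized with the kappa statistic, kappa = (p_o - p_e) / (1 - p_e), i.e. observed agreement corrected for the agreement expected by chance. A small Python illustration with invented proportions (not the study's raw data):

        def cohens_kappa(p_observed, p_expected):
            """Chance-corrected agreement: 1 is perfect, 0 is chance level."""
            return (p_observed - p_expected) / (1 - p_expected)

        # e.g. 85% raw agreement where 46% would be expected by chance:
        print(round(cohens_kappa(0.85, 0.46), 2))   # 0.72, 'good' agreement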

  9. THE APPEAL CALLED “EMBARGOS DE DECLARAÇÃO” IN ELECTORAL PROCESS: A BRIEF VIEW AFTER THE BRAZILIAN NEW CIVIL PROCEDURE CODE

    Directory of Open Access Journals (Sweden)

    Rodrigo Mazzei

    2016-12-01

    Full Text Available This paper analyzes the regulation of embargos de declaração within the electoral process, as well as the interpretation the courts have given it. It addresses essential issues of embargos de declaração, such as the deadline, legal nature, hypotheses of admissibility and suspensive effect, many of which are the subject of discussion in doctrine and jurisprudence, mainly because of the diversity and variety of rules dealing with the subject (the Electoral Code, the Internal Rules of the Courts, and the Civil Procedure and Criminal Procedure Codes, applied in the alternative), besides the need for a constitutional interpretation focused on the embargos de declaração. It examines the proposals of the Project of the New CPC, pending in the legislature, for the regulation of embargos de declaração and the impact this new text will have on the electoral process, pointing out possible ways of reconciling the “new” civil procedure with electoral law.

  10. “GAY RIGHTS ARE HUMAN RIGHTS”: THE FRAMING OF NEW INTERPRETATIONS OF INTERNATIONAL HUMAN RIGHTS NORMS

    NARCIS (Netherlands)

    Holzhacker, Ron

    2014-01-01

    “Gay Rights are Human Rights” may have begun as a slogan chanted in the street, but academics and human rights organizations began to use the international human rights frame systematically in the 1990s to argue for universal human rights to fully apply to LGBT persons. This framing gradually began

  11. Interpretation of Quantum Mechanics. A view of our universe

    Science.gov (United States)

    Lindgren, Ingvar

    2009-10-01

    The interpretation of quantum mechanics has been disputed ever since the advent of the theory in the 1920s. The long-running discussions between Einstein and Bohr are famous. Einstein refused to accept the so-called Copenhagen interpretation, where the wave function collapses at a measurement and where the outcome of the measurement is essentially accidental (``God does not play dice''). Alternative interpretations have appeared, but the Copenhagen school has dominated thinking throughout the decades. One interesting interpretation, which abandons the wave-function collapse, was formulated in 1957 by Hugh Everett at Princeton, a student of John Wheeler. In this model the universe is governed entirely by the Schrödinger equation, which does not allow for any collapse. In Everett's model, after a measurement the wave function separates into different branches that do not interact. This model was left unnoticed for a long time until Bryce DeWitt took it up in 1970 and termed it the ``Many-Worlds Interpretation'', a term that in some sense is misleading. Everett's model is incomplete, and it was later supplemented by the theory of decoherence, which explains how the different branches decouple as a result of the interaction with the environment. This extended model has in recent years gained increased respect, and some believe that it is the only model made available so far that is fully consistent with quantum mechanics. This interpretation can also shed some light on the development of the universe and, in particular, on the so-called Anthropic principle, which puts human beings at the center of the development.

  12. The noncoding human genome and the future of personalised medicine.

    Science.gov (United States)

    Cowie, Philip; Hay, Elizabeth A; MacKenzie, Alasdair

    2015-01-30

    Non-coding cis-regulatory sequences act as the 'eyes' of the genome and their role is to perceive, organise and relay cellular communication information to RNA polymerase II at gene promoters. The evolution of these sequences, which include enhancers, silencers, insulators and promoters, has progressed in multicellular organisms to the extent that cis-regulatory sequences make up as much as 10% of the human genome. Parallel evidence suggests that 75% of polymorphisms associated with heritable disease occur within predicted cis-regulatory sequences that effectively alter the 'perception' of cis-regulatory sequences or render them blind to cell communication cues. Cis-regulatory sequences also act as major functional targets of epigenetic modification, thus representing an important conduit through which changes in DNA-methylation affect disease susceptibility. The objectives of the current review are (1) to describe what has been learned about identifying and characterising cis-regulatory sequences since the sequencing of the human genome; (2) to discuss their role in interpreting cell signalling pathways; and (3) to outline how this role may be altered by polymorphisms and epigenetic changes. We argue that the importance of the cis-regulatory genome for the interpretation of cellular communication pathways cannot be overstated and understanding its role in health and disease will be critical for the future development of personalised medicine.

  13. Brain cDNA clone for human cholinesterase

    International Nuclear Information System (INIS)

    McTiernan, C.; Adkins, S.; Chatonnet, A.; Vaughan, T.A.; Bartels, C.F.; Kott, M.; Rosenberry, T.L.; La Du, B.N.; Lockridge, O.

    1987-01-01

    A cDNA library from human basal ganglia was screened with oligonucleotide probes corresponding to portions of the amino acid sequence of human serum cholinesterase. Five overlapping clones, representing 2.4 kilobases, were isolated. The sequenced cDNA contained 207 base pairs of coding sequence 5' to the amino terminus of the mature protein in which there were four ATG translation start sites in the same reading frame as the protein. Only the ATG coding for Met-(-28) lay within a favorable consensus sequence for functional initiators. There were 1722 base pairs of coding sequence corresponding to the protein found circulating in human serum. The amino acid sequence deduced from the cDNA exactly matched the 574 amino acid sequence of human serum cholinesterase, as previously determined by Edman degradation. Therefore, our clones represented cholinesterase rather than acetylcholinesterase. It was concluded that the amino acid sequences of cholinesterase from two different tissues, human brain and human serum, were identical. Hybridization of genomic DNA blots suggested that a single gene, or very few genes coded for cholinesterase

  14. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  15. Analyses with the FSTATE code: fuel performance in destructive in-pile experiments

    International Nuclear Information System (INIS)

    Bauer, T.H.; Meek, C.C.

    1982-01-01

    Thermal-mechanical analysis of a fuel pin is an essential part of the evaluation of fuel behavior during hypothetical accident transients. The FSTATE code has been developed to provide this required computational ability in situations lacking azimuthal symmetry about the fuel-pin axis, by performing two-dimensional thermal, mechanical, and fission-gas release and redistribution computations for a wide range of possible transient conditions. In this paper recent code developments are described and the code is applied to in-pile experiments undertaken to study fast-reactor fuel under accident conditions. Three accident simulations, including fast and slow ramp-rate overpower sequences as well as a loss-of-cooling accident sequence, are used as representative examples, and the FSTATE computations are interpreted relative to the experimental observations.

  16. Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation

    Science.gov (United States)

    Ragni, Matteo

    There are Computer Algebra Systems (CAS) on the market with complete solutions for the manipulation of analytical models. But exporting a model that implements specific algorithms on specific platforms, for particular target languages or numerical libraries, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.

  17. Mr.CAS—A minimalistic (pure) Ruby CAS for fast prototyping and code generation

    Directory of Open Access Journals (Sweden)

    Matteo Ragni

    2017-01-01

    Full Text Available There are Computer Algebra Systems (CAS) on the market with complete solutions for the manipulation of analytical models. But exporting a model that implements specific algorithms on specific platforms, for particular target languages or numerical libraries, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
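
    Mr.CAS itself is written in Ruby; as a language-neutral sketch of the core capability both records describe, expressions kept as trees with rule-based simplification applied before code generation, a minimal Python analogue:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Add:
            left: object
            right: object

        def simplify(e):
            """Apply x + 0 -> x bottom-up; a real CAS has a far larger rule set."""
            if isinstance(e, Add):
                l, r = simplify(e.left), simplify(e.right)
                if l == 0:
                    return r
                if r == 0:
                    return l
                return Add(l, r)
            return e

        assert simplify(Add("x", Add(0, 0))) == "x"

    Keeping the expression tree separate from any target language is what allows one tree to be emitted for different targets by swapping only the code-generation rules.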

  18. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; Trubnikov, V.R.

    1979-01-01

    A converter of a continuous code into the Grey code is described; it is used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. The converter exploits the regular alternation of ones and zeros in each bit of the Grey code as the pulse count of the continuous code changes monotonically. The converter is built from series-155 logic elements; the continuous-code pulse rate at the converter input is 25 MHz.
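
    The property the converter exploits is that consecutive Grey-code words differ in exactly one bit, so no spurious intermediate codes appear while a count settles. A Python sketch of the standard binary-to-Grey conversion and its inverse (illustrative only; the original device realizes this in series-155 hardware logic):

        def binary_to_gray(n):
            """Adjacent values of n yield Gray words differing in one bit."""
            return n ^ (n >> 1)

        def gray_to_binary(g):
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        print([format(binary_to_gray(i), "04b") for i in range(4)])
        # ['0000', '0001', '0011', '0010']
        assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(1 << 12))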

  19. Novel classes of non-coding RNAs and cancer

    Directory of Open Access Journals (Sweden)

    Sana Jiri

    2012-05-01

    Full Text Available Abstract For many years, the central dogma of molecular biology has been that RNA functions mainly as an informational intermediate between a DNA sequence and its encoded protein. But one of the great surprises of modern biology was the discovery that protein-coding genes represent less than 2% of the total genome sequence, and subsequently the fact that at least 90% of the human genome is actively transcribed. Thus, the human transcriptome was found to be more complex than a collection of protein-coding genes and their splice variants. Although initially argued to be spurious transcriptional noise or accumulated evolutionary debris arising from the early assembly of genes and/or the insertion of mobile genetic elements, recent evidence suggests that the non-coding RNAs (ncRNAs) may play major biological roles in cellular development, physiology and pathologies. NcRNAs can be grouped into two major classes based on transcript size: small ncRNAs and long ncRNAs. Each of these classes can be further divided, and novel subclasses are still being discovered and characterized. Although in recent years the small ncRNAs called microRNAs have been studied most intensively, with more than ten thousand hits in the PubMed database, evidence has recently begun to accumulate describing the molecular mechanisms by which a wide range of novel RNA species function, providing insight into their functional roles in cellular biology and in human disease. In this review, we summarize newly discovered classes of ncRNAs, and highlight their functioning in cancer biology and potential usage as biomarkers or therapeutic targets.

  20. New Advances in Photoionisation Codes: How and what for?

    International Nuclear Information System (INIS)

    Ercolano, Barbara

    2005-01-01

    The study of photoionised gas in planetary nebulae (PNe) has played a major role in the achievement, over the years, of a better understanding of a number of physical processes, pertinent to a broader range of fields than that of PNe studies, spanning from atomic physics to stellar evolution theories. Whilst empirical techniques are routinely employed for the analysis of the emission line spectra of these objects, the accurate interpretation of the observational data often requires the solution of a set of coupled equations, via the application of a photoionisation/plasma code. A number of large-scale codes have been developed since the late sixties, using various analytical or statistical techniques for the transfer of continuum radiation, mainly under the assumption of spherical symmetry, and a few in 3D. These codes have proved to be powerful and in many cases essential tools, but a clear idea of the underlying physical processes and assumptions is necessary in order to avoid reaching misleading conclusions. The development of the codes over the years has been driven by the observational constraints available, but also constrained by the available computer power. Modern codes are faster and more flexible, with the ultimate goal being a description of the observations that relies on the smallest possible number of parameters. In this light, recent developments have focused on the inclusion of newly available atomic data, a realistic treatment of dust grains mixed in the ionised and photon dominated regions (PDRs), and the expansion of some codes to PDRs with the inclusion of chemical reaction networks. Furthermore, the last few years have seen the development of fully 3D photoionisation codes based on the Monte Carlo method. A brief review of the field of photoionisation today is given here, with emphasis on the recent developments, including the expansion of the models to the 3D domain. Attention is given to the identification

  1. At the intersection of non-coding transcription, DNA repair, chromatin structure, and cellular senescence

    Directory of Open Access Journals (Sweden)

    Ryosuke eOhsawa

    2013-07-01

    Full Text Available It is well accepted that non-coding RNAs play a critical role in regulating gene expression. Recent paradigm-setting studies are now revealing that non-coding RNAs, other than microRNAs, also play intriguing roles in the maintenance of chromatin structure, in the DNA damage response, and in adult human stem cell aging. In this review, we will discuss the complex inter-dependent relationships among non-coding RNA transcription, maintenance of genomic stability, chromatin structure and adult stem cell senescence. DNA damage-induced non-coding RNAs transcribed in the vicinity of the DNA break regulate recruitment of the DNA damage machinery and DNA repair efficiency. We will discuss the correlation between non-coding RNAs and DNA damage repair efficiency and the potential role of changing chromatin structures around double-strand break sites. On the other hand, induction of non-coding RNA transcription from the repetitive Alu elements occurs during human stem cell aging and hinders efficient DNA repair causing entry into senescence. We will discuss how this fine balance between transcription and genomic instability may be regulated by the dramatic changes to chromatin structure that accompany cellular senescence.

  2. Human Rights Texts: Converting Human Rights Primary Source Documents into Data.

    Science.gov (United States)

    Fariss, Christopher J; Linder, Fridolin J; Jones, Zachary M; Crabtree, Charles D; Biek, Megan A; Ross, Ana-Sophia M; Kaur, Taranamol; Tsai, Michael

    2015-01-01

    We introduce and make publicly available a large corpus of digitized primary source human rights documents which are published annually by monitoring agencies that include Amnesty International, Human Rights Watch, the Lawyers Committee for Human Rights, and the United States Department of State. In addition to the digitized text, we also make available and describe document-term matrices, which are datasets that systematically organize the word counts from each unique document by each unique term within the corpus of human rights documents. To contextualize the importance of this corpus, we describe the development of coding procedures in the human rights community and several existing categorical indicators that have been created by human coding of the human rights documents contained in the corpus. We then discuss how the new human rights corpus and the existing human rights datasets can be used with a variety of statistical analyses and machine learning algorithms to help scholars understand how human rights practices and reporting have evolved over time. We close with a discussion of our plans for dataset maintenance, updating, and availability.
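
    A document-term matrix simply tabulates, for each document, how often each unique term occurs. A hedged Python sketch with two invented one-line "reports" standing in for the corpus documents:

        from sklearn.feature_extraction.text import CountVectorizer

        reports = [
            "arbitrary detention reported in several provinces",
            "no reports of arbitrary detention this year",
        ]
        vectorizer = CountVectorizer()
        dtm = vectorizer.fit_transform(reports)      # sparse document-term matrix
        print(vectorizer.get_feature_names_out())    # the unique terms (columns)
        print(dtm.toarray())                         # word counts per document

    Matrices of this form are the usual input to the statistical and machine learning analyses the authors mention, from topic models to text classifiers.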

  3. Beyond a code of ethics: phenomenological ethics for everyday practice.

    Science.gov (United States)

    Greenfield, Bruce; Jensen, Gail M

    2010-06-01

    Physical therapy, like all health-care professions, governs itself through a code of ethics that defines its obligations of professional behaviour. The code of ethics provides professions with a consistent and common moral language and principled guidelines for ethical actions. Yet, as argued in this paper, professional codes of ethics have limits when applied to ethical decision-making in the presence of ethical dilemmas. One limitation of codes of ethics is that they provide no particular hierarchy of principles that governs in all situations. Instead, the exigencies of clinical practice, the particularities of individual patients' illness experiences and the transformative nature of chronic illnesses and disabilities often obscure the ethical concerns and issues embedded in concrete situations. Consistent with models of expert practice, and with contemporary models of patient-centred care, we advocate and describe in this paper a type of interpretative and narrative approach to moral practice and ethical decision-making based on phenomenology. The tools of phenomenology that are well defined in research are applied and examined in a case that illustrates their use in uncovering the values and ethical concerns of a patient. Based on a phenomenological deconstruction of this case, we illustrate how such approaches to ethical understanding can help clinicians and educators apply principles within the context and needs of each patient. (c) 2010 John Wiley & Sons, Ltd.

  4. 77 FR 14022 - Guidance for Industry: Testing for Salmonella Species in Human Foods and Direct-Human-Contact...

    Science.gov (United States)

    2012-03-08

    ...-contact animal foods, and the interpretation of test results, when the presence of Salmonella spp. in the... eggs) and direct-human-contact animal foods, and the interpretation of test results, when the presence... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2011-D-0091...

  5. Dual coding: a cognitive model for psychoanalytic research.

    Science.gov (United States)

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model-perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code

  6. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  7. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by several authors. However, determining the number of shared entangled pairs required to construct an entanglement-assisted quantum code is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this decomposition, the four families of entanglement-assisted quantum codes constructed in this paper saturate the entanglement-assisted quantum Singleton bound, with minimum distance satisfying q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.
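    For readers who want to sanity-check reported parameters, a minimal sketch follows (not code from the paper): it tests an [[n, k, d; c]] parameter set against the standard entanglement-assisted quantum Singleton bound d ≤ (n - k + c + 2)/2; the function name and the example numbers are illustrative assumptions.

      def satisfies_ea_singleton(n: int, k: int, d: int, c: int) -> bool:
          """Check [[n, k, d; c]] parameters against the entanglement-assisted
          quantum Singleton bound, written as 2*(d - 1) <= n - k + c."""
          return 2 * (d - 1) <= n - k + c

      # Hypothetical parameter set, chosen so the bound is met with equality.
      print(satisfies_ea_singleton(n=17, k=9, d=7, c=4))  # True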

  8. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  10. Treasure Transformers: Novel Interpretative Installations for the National Palace Museum

    Science.gov (United States)

    Hsieh, Chun-Ko; Liu, I.-Ling; Lin, Quo-Ping; Chan, Li-Wen; Hsiao, Chuan-Heng; Hung, Yi-Ping

    Museums have a mission to increase accessibility and to share cultural assets with the public. The National Palace Museum intends to be a pioneer in utilizing novel interpretative installations to reach more diverse and potential audiences, and Human-Computer Interaction (HCI) technology has been selected as the new interpretative approach. The pilot project, in partnership with the National Taiwan University, has successfully completed four interactive installations. To accommodate the different nature of the collections, the four systems were designed according to different interpretation strategies: uPoster, i-m-Top, Magic Crystal Ball and Virtual Panel. To assess the feasibility of the project, the interactive installations were exhibited at the Taipei World Trade Center in 2008. The purpose of this paper is to present the development of the "Treasure Transformers" exhibition, its design principles, and the effectiveness of the installations as determined by evaluation. It is our ambition that these contributions will encourage innovative media approaches in museum settings.

  11. Expression of Mitochondrial Non-coding RNAs (ncRNAs) Is Modulated by High Risk Human Papillomavirus (HPV) Oncogenes*

    Science.gov (United States)

    Villota, Claudio; Campos, América; Vidaurre, Soledad; Oliveira-Cruz, Luciana; Boccardo, Enrique; Burzio, Verónica A.; Varas, Manuel; Villegas, Jaime; Villa, Luisa L.; Valenzuela, Pablo D. T.; Socías, Miguel; Roberts, Sally; Burzio, Luis O.

    2012-01-01

    The study of RNA and DNA oncogenic viruses has proved invaluable in the discovery of key cellular pathways that are rendered dysfunctional during cancer progression. An example is high risk human papillomavirus (HPV), the etiological agent of cervical cancer. The role of HPV oncogenes in cellular immortalization and transformation has been extensively investigated. We reported the differential expression of a family of human mitochondrial non-coding RNAs (ncRNAs) between normal and cancer cells. Normal cells express a sense mitochondrial ncRNA (SncmtRNA) that seems to be required for cell proliferation and two antisense transcripts (ASncmtRNAs). In contrast, the ASncmtRNAs are down-regulated in cancer cells. To shed some light on the mechanisms that trigger down-regulation of the ASncmtRNAs, we studied human keratinocytes (HFK) immortalized with HPV. Here we show that immortalization of HFK with HPV-16 or 18 causes down-regulation of the ASncmtRNAs and induces the expression of a new sense transcript named SncmtRNA-2. Transduction of HFK with both E6 and E7 is sufficient to induce expression of SncmtRNA-2. Moreover, the E2 oncogene is involved in down-regulation of the ASncmtRNAs. Knockdown of E2 in immortalized cells reestablishes in a reversible manner the expression of the ASncmtRNAs, suggesting that endogenous cellular factor(s) could play functions analogous to E2 during non-HPV-induced oncogenesis. PMID:22539350

  13. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
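    For orientation, a small arithmetic sketch (not from the report) of the outer code's basic parameters: an RS (255,223) codeword carries 223 data symbols out of 255, corrects up to t = 16 symbol errors, and interleaving multiplies the burst length that can be tolerated. The interleaving depth below is an assumed example value.

      # Basic parameters of the RS(255, 223) outer code (8-bit symbols).
      n, k = 255, 223
      t = (n - k) // 2          # correctable symbol errors per codeword: 16
      rate = k / n              # code rate, about 0.875

      # With interleaving depth I, a burst spanning up to roughly I * t
      # symbols is spread across codewords and remains correctable.
      interleave_depth = 5      # assumed example depth
      print(t, round(rate, 3), interleave_depth * t)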

  14. A Nordic Approach to the Interpretation of the European Convention on Human Rights?

    DEFF Research Database (Denmark)

    Rytter, Jens Elo

    2017-01-01

    Based on dissenting opinions involving Nordic judges, the article draws the contours of a common Nordic approach to the interpretation of the ECHR, characterized by reluctance to move beyond textual limits, reliance on preparatory works, and resistance towards imposing obligations in the political...

  15. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  16. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis of the operating PWRs as well as the PWRs under construction in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. The semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). Malfunctions of the control systems, components and operator actions, and the transients caused by such malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  17. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors.

    Science.gov (United States)

    Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play a role of reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similarly as for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.

  19. Safe operation of research reactors and critical assemblies. Code of practice and annexes. 1984 ed

    International Nuclear Information System (INIS)

    1984-01-01

    The safe operation of research reactors and critical assemblies (hereafter termed 'reactors') requires proper design, construction, management and supervision. This Code of Practice deals mainly with management and supervision. The provisions of the Code apply to the whole life of the reactor, including modification, updating and upgrading. The Code may be subject to revision in the light of experience and the state of technology. The Code is aimed at defining minimum requirements for the safe operation of reactors. Emphasis is placed on which safety requirements should be met rather than on specifying how these requirements may be met. The Code also provides guidance and information to persons and authorities responsible for the operation of reactors. The Code recommends that documents dealing with the operation of reactors and including safety analyses be prepared and submitted for review and approval to a regulatory body. Operation would be authorized on the understanding that it would comply with limits and conditions designed to ensure safety. The Code covers a wide range of reactor types, which gives rise to a variety of safety issues. Safety issues applicable to specific reactor types only (e.g. fast reactors) are not necessarily covered in this Code. Some of the recommendations in the Code are not directly applicable to critical assemblies. A recommendation may therefore be interpreted according to the type of reactor concerned. In such cases the words 'adequate' and 'appropriate' are used to mean 'adequate' or 'appropriate' for the type of reactor under consideration.

  20. A comparison study of the 1MeV triton burn-up in JET using the HECTOR and SOCRATE codes

    International Nuclear Information System (INIS)

    Gorini, G.; Kovanen, M.A.

    1988-01-01

    The burn-up of the 1 MeV tritons in deuterium plasmas has been measured in JET for various plasma conditions. To interpret these measurements, the containment, slowing down and burn-up of fast tritons need to be modelled with reasonable accuracy. The numerical code SOCRATE has been written for this specific purpose, and a second code, HECTOR, has been adapted to study the triton burn-up problem. In this paper we compare the results from the two codes in order to exclude possible errors in the numerical models, to assess their accuracy and to study the sensitivity of the calculation to various physical effects. (author)

  1. Development of TIME2 code

    International Nuclear Information System (INIS)

    1986-02-01

    The paper reviews the progress on the development of a computer model TIME2, for modelling the long term evolution of shallow burial site environments for low- and intermediate-level radioactive waste disposal. The subject is discussed under the five topic headings: 1) background studies, including geomorphology, climate, human-induced effects, and seismicity, 2) development of the TIME2 code, 3) verification and testing, 4) documentation, and, 5) role of TIME2 in radiological risk assessment. (U.K.)

  2. Evaluation of Veterinary-Specific Interpretive Criteria for Susceptibility Testing of Streptococcus equi Subspecies with Trimethoprim-Sulfamethoxazole and Trimethoprim-Sulfadiazine

    DEFF Research Database (Denmark)

    Sadaka, Carmen; Kanellos, Theo; Guardabassi, Luca

    2017-01-01

    Antimicrobial susceptibility test results for trimethoprim-sulfadiazine with Streptococcus equi subspecies are interpreted based on human data for trimethoprim-sulfamethoxazole. The veterinary-specific data generated in this study support a single breakpoint for testing trimethoprim-sulfamethoxazole and trimethoprim-sulfadiazine.

  3. Correlates of undefined cause of injury coded mortality data in Australia.

    Science.gov (United States)

    McKenzie, Kirsten; Chen, Linping; Walker, Susan M

    The objective of this research was to identify the level of detail regarding the external causes of death in Australia and ascertain problematic areas where data quality improvement efforts may be focused. The 2003 national mortality dataset of 12,591 deaths with an external cause of injury as the underlying cause of death (UCOD) or multiple cause of death (MCOD) based on ICD-10 code assignment from death certificate information was obtained. Logistic regression models were used to examine the precision of coded external cause of injury data. It was found that overall, accidents were the most poorly defined of all intent code blocks with over 30% of accidents being undefined, representing 2,314 deaths in 2003. More undefined codes were identified in MCOD data than for UCOD data. Deaths certified by doctors were more likely to use undefined codes than deaths certified by a coroner or government medical office. To improve the quality of external cause of injuries leading to or associated with death, certifiers need to be made aware of the importance of documenting all information pertaining to the cause of the injury and the intent behind the incident, either through education or more explicit instructions on the death certificate and accompanying instructional materials. It is important that researchers are aware of the validity of the data when they make interpretations as to the underlying causes of fatal injuries and causes of injury associated with deaths.

  4. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.
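    To make the setting concrete, here is a minimal structural sketch (toy sizes, stubbed encoder; the joint decoding algorithm itself is not reproduced): several Reed-Solomon codewords are stacked as rows and sent column-wise, so a burst that wipes out whole columns produces errors at the same coordinates in every row, which is exactly the correlation that decoding of interleaved RS codes exploits.

      # L interleaved codewords of length N (rows), transmitted column-wise.
      L, N = 3, 7
      codewords = [[(r + c) % 5 for c in range(N)] for r in range(L)]  # stand-in symbols

      transmitted = [codewords[r][c] for c in range(N) for r in range(L)]

      # A burst corrupts two whole columns (coordinates 2 and 5).
      received = list(transmitted)
      for c in (2, 5):
          for r in range(L):
              received[c * L + r] = 0

      rows_back = [[received[c * L + r] for c in range(N)] for r in range(L)]
      # Every row now has errors at columns {2, 5}; this shared error support
      # is what lets joint decoding approach N-K correctable errors per row,
      # versus (N-K)//2 when each row is decoded independently.
      print(rows_back)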

  5. Heritage Interpretation and Presentation Practices in Tigray, Northern Ethiopia: Cases from the Wukro Tourism Cluster

    Directory of Open Access Journals (Sweden)

    Gebrekiros Welegebriel Asfaw

    2016-12-01

    Full Text Available Interpretation and presentation of heritage is becoming a major challenge, as important elements of human culture are misinterpreted and are vanishing throughout the globe. It is only when heritage is sustainably interpreted that tourism can be developed in a sustainable manner. The major purpose of this study is to investigate the practices of heritage interpretation and presentation in Tigray, with a case from the Wukro Tourism Cluster. A descriptive research design combining both quantitative and qualitative methods was employed for the empirical investigation. Questionnaires, interviews and observation were the main instruments of primary data collection. Primary data were collected from 134 respondents (120 questionnaires and 14 interviews). Findings of the study reveal that the practices of heritage interpretation and presentation in the Wukro Cluster are beset by several shortcomings: limited capacity of heritage interpreters, scant attention to community-based heritage interpretation, problems in the variety and quality of visitor experiences, problems with stakeholder cooperation, lack of organized interpretation and presentation, and inadequate interpretation infrastructure. Developing an appropriate interpretation system, preparing varied interpretation and presentation infrastructure and introducing common visitor-management practices would be good remedies.

  6. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted with 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees, who each completed four volumetric image cases. Two types of data were obtained: scroll behavior and think-aloud data. Types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded using a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning, with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant for describing how radiologists interact with and manipulate volumetric images.

  7. QUALITATIVE INTERPRETATION OF GALAXY SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Almeida, J.; Morales-Luis, A. B. [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain); Terlevich, R.; Terlevich, E. [Instituto Nacional de Astrofisica, Optica y Electronica, Tonantzintla, Puebla (Mexico); Cid Fernandes, R., E-mail: jos@iac.es, E-mail: abml@iac.es, E-mail: rjt@ast.cam.ac.uk, E-mail: eterlevi@inaoep.mx, E-mail: cid@astro.ufsc.br [Departamento de Fisica-CFM, Universidade Federal de Santa Catarina, P.O. Box 476, 88040-900 Florianopolis, SC (Brazil)

    2012-09-10

    We describe a simple step-by-step guide to qualitative interpretation of galaxy spectra. Rather than an alternative to existing automated tools, it is put forward as an instrument for quick-look analysis and for gaining physical insight when interpreting the outputs provided by automated tools. Though the recipe is for general application, it was developed for understanding the nature of the Automatic Spectroscopic K-means-based (ASK) template spectra. They resulted from the classification of all the galaxy spectra in the Sloan Digital Sky Survey data release 7, thus being a comprehensive representation of the galaxy spectra in the local universe. Using the recipe, we give a description of the properties of the gas and the stars that characterize the ASK classes, from those corresponding to passively evolving galaxies, to H II galaxies undergoing a galaxy-wide starburst. The qualitative analysis is found to be in excellent agreement with quantitative analyses of the same spectra. We compare the mean ages of the stellar populations with those inferred using the code STARLIGHT. We also examine the estimated gas-phase metallicity with the metallicities obtained using electron-temperature-based methods. A number of byproducts follow from the analysis. There is a tight correlation between the age of the stellar population and the metallicity of the gas, which is stronger than the correlations between galaxy mass and stellar age, and galaxy mass and gas metallicity. The galaxy spectra are known to follow a one-dimensional sequence, and we identify the luminosity-weighted mean stellar age as the affine parameter that describes the sequence. All ASK classes happen to have a significant fraction of old stars, although spectrum-wise they are outshined by the youngest populations. Old stars are metal-rich or metal-poor depending on whether they reside in passive galaxies or in star-forming galaxies.

  8. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  9. INTERFEROMETRIC SYNTHETIC APERTURE RADAR (INSAR) TECHNOLOGY AND GEOMORPHOLOGY INTERPRETATION

    Directory of Open Access Journals (Sweden)

    M. Maghsoudi

    2013-09-01

    Full Text Available Geomorphology is, briefly, the study of landforms and their formative processes on the surface of planet earth, the human habitat. The evolution of landforms and their formative processes can best be studied with technologies whose main application is the study of elevation. Interferometric Synthetic Aperture Radar (InSAR) is an appropriate technology for this application. Based on calculations of phase differences in radar waves, the results of this technology can be interpreted extensively for geomorphological research. The purpose of this study is to review the geomorphological studies using InSAR and also the technical studies about InSAR with geomorphological interpretations. This study finds that InSAR technology can be recommended as a fundamental tool for geomorphological research.
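    The elementary relation this interpretation rests on connects unwrapped interferometric phase to line-of-sight displacement; as a minimal sketch (the standard repeat-pass formula d = lambda * delta_phi / (4 * pi), with assumed example numbers, not values from the paper):

      import math

      def los_displacement(delta_phi_rad: float, wavelength_m: float) -> float:
          """Line-of-sight displacement from unwrapped repeat-pass
          interferometric phase: d = wavelength * phase / (4 * pi)."""
          return wavelength_m * delta_phi_rad / (4 * math.pi)

      # One full fringe (2*pi) at a C-band wavelength of ~5.6 cm corresponds
      # to about 2.8 cm of ground motion along the line of sight.
      print(los_displacement(2 * math.pi, 0.056))  # ~0.028 m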

  10. Theory and interpretation in qualitative studies from general practice

    DEFF Research Database (Denmark)

    Malterud, Kirsti

    2016-01-01

    Objective: In this article, I want to promote theoretical awareness and commitment among qualitative researchers in general practice and suggest adequate and feasible theoretical approaches.  Approach: I discuss different theoretical aspects of qualitative research and present the basic foundations...... theory is a consistent and soundly based set of assumptions about a specific aspect of the world, predicting or explaining a phenomenon. Qualitative research is situated in an interpretative paradigm where notions about particular human experiences in context are recognized from different subject...... in qualitative analysis are presented, emphasizing substantive theories to sharpen the interpretative focus. Such approaches are clearly within reach for a general practice researcher contributing to clinical practice by doing more than summarizing what the participants talked about, without trying to become...

  11. Development of computer code models for analysis of subassembly voiding in the LMFBR

    International Nuclear Information System (INIS)

    Hinkle, W.

    1979-12-01

    The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered

  12. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
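    The signal described here is commonly formalized as a temporal-difference error; a minimal textbook-style sketch (not from the article; all values assumed) shows the error shrinking to zero as a reward becomes fully predicted:

      # delta = r + gamma * V(next) - V(current): positive when reward exceeds
      # prediction, zero when fully predicted, negative when reward falls short.
      gamma, alpha = 0.9, 0.1       # discount factor and learning rate (assumed)
      V = {"cue": 0.0, "end": 0.0}  # learned values of the two states

      for _ in range(100):          # repeated cue -> reward pairings
          r = 1.0                   # reward delivered after the cue
          delta = r + gamma * V["end"] - V["cue"]
          V["cue"] += alpha * delta

      print(round(V["cue"], 2), round(delta, 4))  # value ~1.0, delta near 0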

  13. 77 FR 49818 - Agency Information Collection Activities; Proposed Collection; Comment Request; Bar Code Label...

    Science.gov (United States)

    2012-08-17

    ...] Agency Information Collection Activities; Proposed Collection; Comment Request; Bar Code Label... allow 60 days for public comment in response to the notice. This notice solicits comments on the bar... technology. Bar Code Label Requirement for Human Drug and Biological Products--(OMB Control Number 0910-0537...

  14. Development and assessment of ASTEC code for severe accident simulation

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Pignet, S.; Seropian, C.; Montanelli, T.; Giordano, P.; Jacq, F.; Schwinges, B.

    2005-01-01

    Full text of publication follows: The ASTEC integral code, jointly developed by IRSN and GRS over several years for the evaluation of the source term during a severe accident (SA) in a Light Water Reactor, will play a central role in the SARNET network of excellence of the 6th Framework Programme (FwP) of the European Commission, which started in spring 2004. It should become the reference European SA integral code in the coming years. The version V1.1, released in June 2004, models most of the main physical phenomena (except steam explosion) near or at the state of the art. In order to allow the study of a great number of scenarios, a compromise must be found between precision of results and calculation time: one day of accident time usually takes less than one day of real time to simulate on a PC. Important efforts are being made on validation, covering more than 30 reference experiments, often International Standard Problems from the OECD (CORA, LOFT, PACTEL, BETA, VANAM, ACE-RTF, Phebus.FPT1...). The code is also used for the detailed interpretation of all the integral Phebus.FP experiments. Eighteen European partners performed a first independent evaluation of the code capabilities in 2000-03 within the frame of the EVITA 5th FwP project, on the one hand by comparison to experiments and on the other hand by benchmarking against the MAAP4 and MELCOR integral codes on plant applications for PWR and VVER. Their main conclusions were the need to improve code robustness (especially the two new modules CESAR and DIVA, simulating respectively circuit thermal hydraulics and core degradation) and post-processing tools. Some improvements have already been achieved in the latest version V1.1 on these two aspects. A new module MEDICIS devoted to Molten Core Concrete Interaction (MCCI) is implemented in this version, with a tight coupling to the containment thermal hydraulics module CPA. The paper presents a detailed analysis of a TMLB sequence on a French 900 MWe PWR, from

  15. FUNDAMENTAL RULES OF THE CIVIL PROCEDURE CODE OF 2015: BRIEF REFLECTIONS

    Directory of Open Access Journals (Sweden)

    Aluisio Gonçalves de Castro Mendes

    2016-12-01

    Full Text Available This paper aims to offer some brief reflections on the topics covered in Book I of the General Part of the Code of Civil Procedure of 2015, in each of the twelve articles related to the fundamental rules, highlighting their relevance to the application and proper interpretation of the new legislation. The intention is not to exhaust the subject, but simply to open the debate with a few lines aimed at improving theoretical understanding and the judicial service.

  16. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.

  17. Amino acid codes in mitochondria as possible clues to primitive codes

    Science.gov (United States)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
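    A compact way to see the reassignments discussed here is as a short table of departures from the universal code; the entries below are well-known examples (including the CUN Leu-to-Thr change in yeast mitochondria mentioned above), listed for illustration only ("N" stands for any nucleotide in the third position):

      # Selected mitochondrial departures from the universal genetic code.
      universal = {"UGA": "Stop", "AUA": "Ile", "CUN": "Leu", "AGA": "Arg"}
      mitochondrial = {
          "vertebrate": {"UGA": "Trp", "AUA": "Met", "AGA": "Stop"},
          "yeast":      {"UGA": "Trp", "AUA": "Met", "CUN": "Thr"},  # Leu -> Thr
      }

      for lineage, changes in mitochondrial.items():
          for codon, amino_acid in changes.items():
              print(f"{lineage}: {codon} {universal[codon]} -> {amino_acid}")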

  18. Dual-Coding Theory and Connectionist Lexical Selection

    OpenAIRE

    Wang, Ye-Yi

    1994-01-01

    We introduce the bilingual dual-coding theory as a model for bilingual mental representation. Based on this model, lexical selection neural networks are implemented for a connectionist transfer project in machine translation. This lexical selection approach has two advantages. First, it is learnable. Little human effort on knowledge engineering is required. Secondly, it is psycholinguistically well-founded.

  19. Quo usque tandem! An interpretation of “genocide” that deprives the Convention of its appropriate effects

    Directory of Open Access Journals (Sweden)

    Favio Farinella

    2016-12-01

    Full Text Available In its recent judgment in the case "Application of the Convention for the Prevention and Punishment of the Crime of Genocide", the International Court of Justice missed a historic opportunity to extend the margin of appreciation regarding the protection of individuals and Humanity. The ICJ consolidates a strict and restricted interpretation of the crime of genocide, contrary to the principles of International Human Rights Law and the precedents of the courts specialized in International Humanitarian Law and International Criminal Law. In the opinion of the majority, the defense of State sovereignty is placed above the humanitarian obligations of the State and gross violations of human rights. In Cançado Trindade's words, the reason of State (raison d'Etat) prevailed over the reasons of Humanity. This paper is intended to demonstrate that the interpretation of the crime of genocide by the ICJ deprives the thematic Convention of its appropriate effects, which must be aimed at the protection of the human being.

  20. Schrodinger's mechanics interpretation

    CERN Document Server

    Cook, David B

    2018-01-01

    The interpretation of quantum mechanics has been in dispute for nearly a century with no sign of a resolution. Using a careful examination of the relationship between the final form of classical particle mechanics (the Hamilton–Jacobi Equation) and Schrödinger's mechanics, this book presents a coherent way of addressing the problems and paradoxes that emerge through conventional interpretations. Schrödinger's Mechanics critiques the popular way of giving physical interpretation to the various terms in perturbation theory and other technologies and places an emphasis on development of the theory and not on an axiomatic approach. When this interpretation is made, the extension of Schrödinger's mechanics in relation to other areas, including spin, relativity and fields, is investigated and new conclusions are reached.

  1. The importance of interpreting the commercial contract in scheduling the production of a Romanian business operator

    Directory of Open Access Journals (Sweden)

    Laura MUREŞAN (POŢINCU)

    2015-12-01

    Full Text Available Part of scheduling production is complying with the terms of the commercial contract, which is the foundation of the relationship between the client and the business operator active in the area of industrial production. Where compliance with the contractual obligations raises issues, the commercial contract must be interpreted according to the provisions of the Romanian Civil Code. This work aims at presenting and analyzing these rules for interpreting the commercial contract, which are not provided in the special stipulations of the law applied to the business field, so that one must turn to the rules common to private law, i.e. the juridical norms provided by civil law legislation.

  2. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  3. Cloning of cDNAs coding for the heavy chain region and connecting region of human factor V, a blood coagulation factor with four types of internal repeats

    International Nuclear Information System (INIS)

    Kane, W.H.; Ichinose, A.; Hagen, F.S.; Davie, E.W.

    1987-01-01

    Human factor V is a high molecular weight plasma glycoprotein that participates as a cofactor in the conversion of prothrombin to thrombin by factor Xa. Prior to its participation in the coagulation cascade, factor V is converted to factor Va by thrombin generating a heavy chain and a light chain, and these two chains are held together by calcium ions. A connecting region originally located between the heavy and light chains is liberated during the activation reaction. In a previous study, a cDNA of 2970 nucleotides that codes for the carboxyl-terminal 938 amino acids of factor V was isolated and characterized from a Hep G2 cDNA library. This cDNA has been used to obtain additional clones from Hep G2 and human liver cDNA libraries. Furthermore, a Hep G2 cDNA library prepared with an oligonucleotide from the 5' end of these cDNAs was screened to obtain overlapping cDNA clones that code for the amino-terminal region of the molecule. The composite sequence of these clones spans 6911 nucleotides and is consistent with the size of the factor V message present in Hep G2 cells (approximately 7 kilobases). The cDNA codes for a leader sequence of 28 amino acids and a mature protein of 2196 amino acids. The amino acid sequence predicted from the cDNA was in complete agreement with 139 amino acid residues that were identified by Edman degradation of cyanogen bromide peptides isolated from the heavy chain region and connecting region of plasma factor V. The domain structure of human factor V is similar to that previously reported for human coagulation factor VIII. Two types of tandem repeats (17 and 9 amino acids) have also been identified in the connecting region of factor V. The present data indicate that the amino acid sequence in the heavy and light chain regions of factor V is ~40% identical with the corresponding regions of factor VIII

  4. Tactile detection of slip: surface microgeometry and peripheral neural codes.

    Science.gov (United States)

    Srinivasan, M A; Whitehouse, J M; LaMotte, R H

    1990-06-01

    1. The role of the microgeometry of planar surfaces in the detection of sliding of the surfaces on human and monkey fingerpads was investigated. By the use of a servo-controlled tactile stimulator to press and stroke glass plates on passive fingerpads of human subjects, the ability of humans to discriminate the direction of skin stretch caused by friction and to detect the sliding motion (slip) of the plates with or without micrometer-sized surface features was determined. To identify the associated peripheral neural codes, evoked responses to the same stimuli were recorded from single, low-threshold mechanoreceptive afferent fibers innervating the fingerpads of anesthetized macaque monkeys. 2. Humans could not detect the slip of a smooth glass plate on the fingerpad. However, the direction of skin stretch was perceived based on the information conveyed by the slowly adapting afferents that respond differentially to the stretch directions. Whereas the direction of skin stretch signaled the direction of impending slip, the perception of relative motion between the plate and the finger required the existence of detectable surface features. 3. Barely detectable micrometer-sized protrusions on smooth surfaces led to the detection of slip of these surfaces, because of the exclusive activation of rapidly adapting fibers of either the Meissner (RA) or the Pacinian (PC) type to specific geometries of the microfeatures. The motion of a smooth plate with a very small single raised dot (4 microns high, 550 microns diam) caused the sequential activation of neighboring RAs along the dot path, thus providing a reliable spatiotemporal code. The stroking of the plate with a fine homogeneous texture composed of a matrix of dots (1 micron high, 50 microns diam, and spaced at 100 microns center-to-center) induced vibrations in the fingerpad that activated only the PCs and resulted in an intensive code. 4. The results show that surprisingly small features on smooth surfaces are

  5. Development of Visual CINDER Code with Visual C#.NET

    International Nuclear Information System (INIS)

    Kim, Oyeon

    2016-01-01

    The CINDER code, CINDER'90 or CINDER2008, integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but the accelerator community as well (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is just frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The visual CINDER code development is underway with the Visual C#.NET framework, which provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also showed that the wrapper can make the simulation accurate for highly nonlinear transmutation problems and increases the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionality for radiation shielding and prevention study.
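    As a language-neutral illustration of the wrapping idea (the record's tool uses Visual C#.NET; the sketch below uses Python instead, and every executable name, argument and file name in it is hypothetical), a single driver can run the transport and transmutation steps and pass data between them without hand editing:

      import subprocess
      from pathlib import Path

      # Hypothetical one-package driver: run transport, hand the flux output
      # to the transmutation step at full precision, with no manual retyping.
      def run_coupled_step(workdir: Path) -> None:
          subprocess.run(["mcnpx", "i=model.inp"], cwd=workdir, check=True)  # transport
          flux = (workdir / "fluxes.out").read_text()                        # hand-off data
          (workdir / "cinder.inp").write_text(flux)                          # no digit loss
          subprocess.run(["cinder", "cinder.inp"], cwd=workdir, check=True)  # inventory

      # run_coupled_step(Path("."))  # requires the (hypothetical) executables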

  8. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
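    A toy numerical sketch of this prediction/prediction-error exchange (a standard gradient-descent formulation with one latent cause v generating the input x through g(v) = w*v; illustrative values, not code from the article):

      # Descending predictions vs. ascending prediction errors, iterated.
      w = 2.0      # generative weight linking latent cause to input (assumed)
      x = 3.0      # sensory observation
      v = 0.0      # initial belief about the latent cause
      rate = 0.1

      for _ in range(50):
          prediction = w * v      # backward (descending) prediction
          error = x - prediction  # forward (ascending) prediction error
          v += rate * w * error   # error-driven revision of the percept

      print(round(v, 3), round(error, 4))  # v -> x/w = 1.5 as error -> 0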

  9. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  10. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  11. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
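
    As a concrete anchor for the static Shannon coding that this dynamic algorithm adapts, the sketch below assigns each symbol the first ceil(-log2 p) bits of the binary expansion of the cumulative probability of the more probable symbols, which is what makes the code prefix-free. The probabilities are illustrative, not taken from the paper.

        import math

        def shannon_code(probs):
            """Static Shannon code: each symbol gets the first ceil(-log2 p)
            bits of the cumulative probability of the symbols before it."""
            items = sorted(probs.items(), key=lambda kv: -kv[1])
            code, cum = {}, 0.0
            for sym, p in items:
                length = math.ceil(-math.log2(p))
                bits, frac = [], cum
                for _ in range(length):
                    frac *= 2
                    bit, frac = divmod(frac, 1)  # peel off one binary digit
                    bits.append(str(int(bit)))
                code[sym] = "".join(bits)
                cum += p
            return code

        print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
        # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}

    The dynamic variant in the paper re-derives such a code as symbol frequencies are updated, which is what yields its upper bound relative to dynamic Huffman coding.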

  12. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as that of human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
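
    The session-level comparison reported above reduces to computing AUC over held-out sessions for each model's predicted code probabilities. The sketch below shows that evaluation step for the lasso logistic regression baseline using scikit-learn; the term-count matrix and binary session codes are synthetic stand-ins, since the transcript corpus itself is not reproduced here.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.poisson(1.0, size=(400, 200)).astype(float)  # toy term counts
        w = rng.normal(size=200)
        y = (X @ w + rng.normal(size=400) > 0).astype(int)   # toy session code

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        lasso.fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1]))

    The paper's L-LDA model is scored with the same metric, which is how the 0.79 versus 0.70 session-level comparison is made.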

  13. Crew and Management Perceptions of the Implementation of the ISM Code for Shipping Safety and Marine Environmental Protection

    OpenAIRE

    Nurhasanah, Nina; Joni, Asmar; Shabrina, Nur

    2015-01-01

    About 80% of shipwrecks are caused by human error, and 75% of that human error is due to poor management systems; a management system must therefore be put in place that creates good and close cooperation between shipboard management and shore-based management. The International Safety Management Code (ISM Code) is an international standard of safety management regulations for the security and safety of ship operation and for the prevention of pollution of the marine environment, established by the IMO Maritime Sa...

  14. Interpreting land records

    CERN Document Server

    Wilson, Donald A

    2014-01-01

    Base retracement on solid research and historically accurate interpretation Interpreting Land Records is the industry's most complete guide to researching and understanding the historical records germane to land surveying. Coverage includes boundary retracement and the primary considerations during new boundary establishment, as well as an introduction to historical records and guidance on effective research and interpretation. This new edition includes a new chapter titled "Researching Land Records," and advice on overcoming common research problems and insight into alternative resources wh

  15. Upbeat and quirky with a bit of a build: Interpretive repertories in creative music search

    OpenAIRE

    MacFarlane, A.; Inskip, C.; Rafferty, P.

    2010-01-01

    Pre-existing commercial music is widely used to accompany moving images in films, TV commercials and computer games. This process is known as music synchronisation. Professionals are employed by rights holders and film makers to perform creative music searches on large catalogues to find appropriate pieces of music for synchronisation. This paper discusses a Discourse Analysis of thirty interview texts related to the process. Coded examples are presented and discussed. Four interpretive ...

  16. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points ∈ PC with multiplicity γ(w), where w is the weight of

  17. The Interpretive Function

    DEFF Research Database (Denmark)

    Agerbo, Heidi

    2017-01-01

    Approximately a decade ago, it was suggested that a new function should be added to the lexicographical function theory: the interpretive function(1). However, hardly any research has been conducted into this function, and though it was only suggested that this new function was relevant to incorporate into lexicographical theory, some scholars have since then assumed that this function exists(2), including the author of this contribution. In Agerbo (2016), I present arguments supporting the incorporation of the interpretive function into the function theory and suggest how non-linguistic signs can be treated in specific dictionary articles. However, in the current article, due to the results of recent research, I argue that the interpretive function should not be considered an individual main function. The interpretive function, contrary to some of its definitions, is not connected...

  18. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491

  19. Systemizers are better code-breakers:Self-reported systemizing predicts code-breaking performance in expert hackers and naïve participants

    Directory of Open Access Journals (Sweden)

    India eHarvey

    2016-05-01

    Full Text Available Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders - attention to detail and systemizing - may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e. crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001; Baron-Cohen et al., 2003). They were also tested with behavioural tasks involving code-breaking and a control task involving security x-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the x-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.

  20. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants.

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders-attention to detail and systemizing-may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.
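
    The 'positive relation' reported in all three versions of this record is, at its core, a correlation between two per-participant scores. A minimal sketch with SciPy; the Systemizing Quotient and code-breaking scores below are invented placeholders, not study data.

        from scipy.stats import pearsonr

        # Hypothetical scores: Systemizing Quotient and code-breaking accuracy.
        sq = [45, 52, 38, 61, 47, 55, 40, 66, 50, 58]
        codebreaking = [0.55, 0.60, 0.40, 0.75, 0.52, 0.68, 0.45, 0.80, 0.58, 0.70]

        r, p = pearsonr(sq, codebreaking)
        print(f"r = {r:.2f}, p = {p:.3f}")

    The study runs the analogous test separately for attention to detail and for the X-ray screening control task, which is how the dissociation between the two traits is established.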

  1. 45 CFR Appendix B to Part 73 - Code of Ethics for Government Service

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Code of Ethics for Government Service B Appendix B to Part 73 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION STANDARDS OF CONDUCT Pt. 73, App. B Appendix B to Part 73—Code of Ethics for Government Service Any person in...

  2. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with the regulatory auditing code have been accomplished to establish a self-reliant, technology-based regulatory auditing system. The unified auditing system code was also realized by implementing CANDU-specific models and correlations. As part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual was published describing the new models and correlations. The code coupling methods were verified through the exercise of plant application. Education and training seminars and technology transfer were performed for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can be utilized as a base technology for GEN IV reactor applications

  3. Impact of the Primary Care Exception on Family Medicine Resident Coding.

    Science.gov (United States)

    Cawse-Lucas, Jeanne; Evans, David V; Ruiz, David R; Allcut, Elizabeth A; Andrilla, C Holly A; Thompson, Matthew; Norris, Thomas E

    2016-03-01

    The Medicare Primary Care Exception (PCE) allows residents to see and bill for less-complex patients independently in the primary care setting, requiring attending physicians only to see patients for higher-level visits and complete physical exams in order to bill for them as such. Primary care residencies apply the PCE in various ways. We investigated the impact of the PCE on resident coding practices. Family medicine residency directors in a five-state region completed a survey regarding interpretation and application of the PCE, including the number of established patient evaluation and management codes entered by residents and attending faculty at their institution. The percentage of high-level codes was compared between residencies using chi-square tests. We analyzed coding data for 125,016 visits from 337 residents and 172 faculty physicians in 15 of 18 eligible family medicine residencies. Among programs applying the PCE criteria to all patients, residents billed 86.7% low-mid complexity and 13.3% high-complexity visits. In programs that only applied the PCE to Medicare patients, residents billed 74.9% low-mid complexity visits and 25.2% high-complexity visits. Attending physicians coded more high-complexity visits at both types of programs. The estimated revenue loss over the 1,650 RRC-required outpatient visits was $2,558.66 per resident and $57,569.85 per year for the average residency in our sample. Residents at family medicine programs that apply the PCE to all patients bill significantly fewer high-complexity visits. This finding leads to compliance and regulatory concerns and suggests significant revenue loss. Further study is required to determine whether this discrepancy also reflects inaccuracy in coding.
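
    The between-program comparison above is a chi-square test on a contingency table of visit complexity by PCE policy. A sketch with SciPy; the cell counts are illustrative values chosen to match the reported percentages (13.3% versus 25.2% high-complexity), since the abstract does not give the raw counts.

        from scipy.stats import chi2_contingency

        # Rows: PCE applied to all patients / to Medicare patients only.
        # Columns: low-mid complexity visits, high-complexity visits.
        table = [[8670, 1330],   # 13.3% high-complexity
                 [7480, 2520]]   # 25.2% high-complexity
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")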

  4. Development of An Automatic Verification Program for Thermal-hydraulic System Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. Y.; Ahn, K. T.; Ko, S. H.; Kim, Y. S.; Kim, D. W. [Pusan National University, Busan (Korea, Republic of); Suh, J. S.; Cho, Y. S.; Jeong, J. J. [System Engineering and Technology Co., Daejeon (Korea, Republic of)

    2012-05-15

    As a project activity of the capstone design competitive exhibition, supported by the Education Center for Green Industry-friendly Fusion Technology (GIFT), we have developed a computer program which can automatically perform the non-regression testing that is needed repeatedly during the development of a thermal-hydraulic system code, such as the SPACE code. A non-regression test (NRT) is an approach to software testing. The purpose of non-regression testing is to verify that, after updating a given software application (in this case, the code), previous software functions have not been compromised. The goal is to prevent software regression, whereby adding new features results in software bugs. As the NRT is performed repeatedly, a lot of time and human resources are needed during the development period of a code, which may delay the development schedule. To reduce the cost and the human resources and to prevent wasting time, non-regression tests need to be automated. As a tool to develop an automatic verification program, we have used Visual Basic for Applications (VBA). VBA is an implementation of Microsoft's event-driven programming language Visual Basic 6 and its associated integrated development environment, which are built into most Microsoft Office applications (in this case, Excel)

  5. Development of An Automatic Verification Program for Thermal-hydraulic System Codes

    International Nuclear Information System (INIS)

    Lee, J. Y.; Ahn, K. T.; Ko, S. H.; Kim, Y. S.; Kim, D. W.; Suh, J. S.; Cho, Y. S.; Jeong, J. J.

    2012-01-01

    As a project activity of the capstone design competitive exhibition, supported by the Education Center for Green Industry-friendly Fusion Technology (GIFT), we have developed a computer program which can automatically perform the non-regression testing that is needed repeatedly during the development of a thermal-hydraulic system code, such as the SPACE code. A non-regression test (NRT) is an approach to software testing. The purpose of non-regression testing is to verify that, after updating a given software application (in this case, the code), previous software functions have not been compromised. The goal is to prevent software regression, whereby adding new features results in software bugs. As the NRT is performed repeatedly, a lot of time and human resources are needed during the development period of a code, which may delay the development schedule. To reduce the cost and the human resources and to prevent wasting time, non-regression tests need to be automated. As a tool to develop an automatic verification program, we have used Visual Basic for Applications (VBA). VBA is an implementation of Microsoft's event-driven programming language Visual Basic 6 and its associated integrated development environment, which are built into most Microsoft Office applications (in this case, Excel)
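
    The tool described in the two records above was written in VBA on Excel, but the non-regression check itself is language-agnostic. A minimal Python sketch of the idea, assuming each code version writes a plain-text results file of 'name value' pairs and that any difference beyond a relative tolerance counts as a regression; the file format is invented for illustration.

        def load_results(path):
            """Parse 'name value' lines from one code version's output (assumed format)."""
            out = {}
            with open(path) as f:
                for line in f:
                    name, value = line.split()
                    out[name] = float(value)
            return out

        def non_regression_test(old_path, new_path, rtol=1e-6):
            """Return the list of quantities that changed between versions."""
            old, new = load_results(old_path), load_results(new_path)
            regressions = []
            for name, v_old in old.items():
                v_new = new.get(name)
                if v_new is None:
                    regressions.append((name, "missing in new version"))
                elif abs(v_new - v_old) > rtol * max(abs(v_old), 1e-30):
                    regressions.append((name, f"{v_old} -> {v_new}"))
            return regressions

        # Usage: non_regression_test("space_v1.out", "space_v2.out")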

  6. Nursing students and teaching of codes of ethics: an empirical research study.

    Science.gov (United States)

    Numminen, O H; Leino-Kilpi, H; van der Arend, A; Katajisto, J

    2009-12-01

    To explore graduating nursing students' perception of nurse educators' teaching of codes of ethics in polytechnics providing basic nursing education in Finland. Codes of ethics are regarded as essential content in most nursing ethics curricula. However, little is known about how their teaching is implemented. A descriptive, cross-sectional design was used in this study. A total of 214 nursing students responded to a structured questionnaire with one open-ended question. The data were analysed statistically with SPSS and by content analysis. Students perceived teaching of the codes as fairly extensive. The emphasis was on the nurse-patient relationship. Less attention was paid to nursing in wider social contexts. Educators' use of teaching and evaluation methods was narrow. Students whose teaching had been integrated into clinical training perceived that teaching had been more extensive. However, students did not perceive integration into clinical training as a much used teaching format. Students assessed their own knowledge and ability to apply the codes as mediocre. Those educators, whose knowledge about the codes students had assessed as adequate, were also perceived to teach the codes more extensively. Regardless of the responding students' positive description of the teaching, the findings should be interpreted with caution, due to the students' limited interest in responding. In teaching ethics, particular attention should be paid to more versatile use of teaching and evaluation methods, organization of integrated teaching, educators' competence in ethics, and student outcomes so that the importance of ethics would come across to all nursing students.

  7. Taurine‑upregulated gene 1: A vital long non‑coding RNA associated with cancer in humans (Review).

    Science.gov (United States)

    Wang, Wen-Yu; Wang, Yan-Fen; Ma, Pei; Xu, Tong-Peng; Shu, Yong-Qian

    2017-11-01

    It is widely reported that long non‑coding RNAs (lncRNAs) are involved in regulating cell differentiation, proliferation, apoptosis and other biological processes. Certain lncRNAs have been found to be crucial in various types of tumor. Taurine‑upregulated gene 1 (TUG1) has been shown to be expressed in a tissue‑specific pattern and exert oncogenic or tumor suppressive functions in different types of cancer in humans. According to previous studies, TUG1 is predominantly located in the nucleus and may regulate gene expression at the transcriptional level. It mediates chromosomal remodeling and coordinates with polycomb repressive complex 2 (PRC2) to regulate gene expression. Although the mechanisms of how TUG1 affects the tumor genesis process remain to be fully elucidated, increasing studies have suggested that TUG1 offers potential as a diagnostic and prognostic biomarker, and as a therapeutic target in certain types of tumor. This review aims to summarize current evidence concerning the characteristics, mechanisms and associations with cancer of TUG1.

  8. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.
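
    The defining feature of a GLDPC code is that each generalized check node constrains its attached bits to form a codeword of a small local code, rather than merely to satisfy a single parity equation. The sketch below checks one such local constraint for a Hamming(7,4) component code; the parity-check matrix is a standard one, and the bit assignment is illustrative.

        import numpy as np

        # A standard parity-check matrix of the (7,4) Hamming code.
        H_local = np.array([[1, 0, 1, 0, 1, 0, 1],
                            [0, 1, 1, 0, 0, 1, 1],
                            [0, 0, 0, 1, 1, 1, 1]])

        def local_check(bits):
            """True if the bits attached to this generalized check node form
            a Hamming codeword, i.e. the syndrome over GF(2) is zero."""
            return not (H_local @ bits % 2).any()

        print(local_check(np.array([1, 0, 1, 1, 0, 1, 0])))  # True: valid codeword

    A full GLDPC decoder iterates message passing between such local decoders, which is where MAP decoding of the component codes (e.g. by the Ashikhmin-Lytsin algorithm mentioned above) enters.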

  9. Quantum chemistry interpretation of x-ray spectra

    International Nuclear Information System (INIS)

    Bocharov, D.; Kuzmin, A.

    2005-01-01

    Full text: In this study, we present the results of ab initio computer simulations on X-ray absorption spectra of some perovskite-type tungsten oxides. For our calculations, we combine a periodic bulk model with the Hartree-Fock method including a posteriori electron correlation corrections, as implemented in the CRYSTAL98 computer code [1]. First, we describe the results of calculations performed on some bulk properties of the tungsten oxides (lattice constant and bulk modulus), in order to achieve the necessary level of agreement with available experimental and theoretical data. Our calculations on the densities of one-electron energy states (DOS) for models of tungsten oxides are verified by experimental X-ray absorption spectra, which allows us to make a correct qualitative interpretation of the latter. The main difficulties appearing during our first principles simulations will be discussed. [1] V.R. Saunders, R. Dovesi, C. Roetti, M. Causi, N.M. Harrison, R. Orlando, and C.M. Zicovich-Wilson, CRYSTAL-98 User Manual, University of Turin, 1999

  10. The State Council's decision interpreting the provisions of the mining code about mine shutdowns

    International Nuclear Information System (INIS)

    Couderc, G.; Sanvee, S.

    2004-01-01

    In line with the special police powers granted by article 77 of the Mining Code, administrative authorities may issue orders to a mine operator to undertake measures for ensuring public safety and security and for reinforcing the solidity of public and private buildings. When major risks to the security of goods and persons crop up following warrants for executing such orders, administrative authorities can step in once again and either take, till the expiration of mining rights, new measures, or else order, when risks of cave-ins have been identified, an operator to set up and run the equipment necessary for supervising and preventing these risks until these duties are transferred to the state. However a decision by the State Council on 22 October 2003 formulates a reservation: if the administration has not used the procedure for definitively shutting down a mine to identify all risks and to order the operator the measures for ensuring security in line with all known risks, then the administration itself will carry the responsibility for implementing such measures. (authors)

  11. An expanding universe of the non-coding genome in cancer biology.

    Science.gov (United States)

    Xue, Bin; He, Lin

    2014-06-01

    Neoplastic transformation is caused by accumulation of genetic and epigenetic alterations that ultimately convert normal cells into tumor cells with uncontrolled proliferation and survival, unlimited replicative potential and invasive growth [Hanahan,D. et al. (2011) Hallmarks of cancer: the next generation. Cell, 144, 646-674]. Although the majority of the cancer studies have focused on the functions of protein-coding genes, emerging evidence has started to reveal the importance of the vast non-coding genome, which constitutes more than 98% of the human genome. A number of non-coding RNAs (ncRNAs) derived from the 'dark matter' of the human genome exhibit cancer-specific differential expression and/or genomic alterations, and it is increasingly clear that ncRNAs, including small ncRNAs and long ncRNAs (lncRNAs), play an important role in cancer development by regulating protein-coding gene expression through diverse mechanisms. In addition to ncRNAs, nearly half of the mammalian genomes consist of transposable elements, particularly retrotransposons. Once depicted as selfish genomic parasites that propagate at the expense of host fitness, retrotransposon elements could also confer regulatory complexity to the host genomes during development and disease. Reactivation of retrotransposons in cancer, while capable of causing insertional mutagenesis and genome rearrangements to promote oncogenesis, could also alter host gene expression networks to favor tumor development. Taken together, the functional significance of non-coding genome in tumorigenesis has been previously underestimated, and diverse transcripts derived from the non-coding genome could act as integral functional components of the oncogene and tumor suppressor network. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. A mathematical model for interpretable clinical decision support with applications in gynecology.

    Directory of Open Access Journals (Sweden)

    Vanya M C A Van Belle

    Full Text Available Over time, methods for the development of clinical decision support (CDS) systems have evolved from interpretable and easy-to-use scoring systems to very complex and non-interpretable mathematical models. In order to accomplish effective decision support, CDS systems should provide information on how the model arrives at a certain decision. To address the issue of incompatibility between performance, interpretability and applicability of CDS systems, this paper proposes an innovative model structure, automatically leading to interpretable and easily applicable models. The resulting models can be used to guide clinicians when deciding upon the appropriate treatment, estimating patient-specific risks and to improve communication with patients. We propose the interval coded scoring (ICS) system, which imposes that the effect of each variable on the estimated risk is constant within consecutive intervals. The number and position of the intervals are automatically obtained by solving an optimization problem, which additionally performs variable selection. The resulting model can be visualised by means of appealing scoring tables and color bars. ICS models can be used within software packages, in smartphone applications, or on paper, which is particularly useful for bedside medicine and home-monitoring. The ICS approach is illustrated on two gynecological problems: diagnosis of malignancy of ovarian tumors using a dataset containing 3,511 patients, and prediction of first trimester viability of pregnancies using a dataset of 1,435 women. Comparison of the performance of the ICS approach with a range of prediction models proposed in the literature illustrates the ability of ICS to combine optimal performance with the interpretability of simple scoring systems. The ICS approach can improve patient-clinician communication and will provide additional insights into the importance and influence of available variables. Future challenges include extensions of the
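
    The model structure proposed here is simple to apply: each variable contributes a constant number of points within each of its fitted intervals, and the points sum to a score. A sketch of applying such a scoring table; the interval edges and point values are invented for illustration, whereas the paper obtains them by solving an optimization problem that also performs variable selection.

        import numpy as np

        # One scoring table per variable: interval edges and the constant
        # points awarded in each resulting interval (illustrative values).
        score_tables = {
            "age":        {"edges": [30, 50, 70], "points": [0, 1, 2, 3]},
            "tumor_size": {"edges": [2.0, 5.0],   "points": [0, 2, 4]},
        }

        def ics_score(patient):
            total = 0
            for var, table in score_tables.items():
                idx = int(np.digitize(patient[var], table["edges"]))
                total += table["points"][idx]
            return total

        print(ics_score({"age": 63, "tumor_size": 3.1}))  # 2 + 2 = 4

    The total score is then mapped to a risk estimate, and the color bars described in the paper visualize each variable's contribution to it.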

  13. Interpreter-mediated dentistry.

    Science.gov (United States)

    Bridges, Susan; Drew, Paul; Zayts, Olga; McGrath, Colman; Yiu, Cynthia K Y; Wong, H M; Au, T K F

    2015-05-01

    The global movements of healthcare professionals and patient populations have increased the complexities of medical interactions at the point of service. This study examines interpreter mediated talk in cross-cultural general dentistry in Hong Kong where assisting para-professionals, in this case bilingual or multilingual Dental Surgery Assistants (DSAs), perform the dual capabilities of clinical assistant and interpreter. An initial language use survey was conducted with Polyclinic DSAs (n = 41) using a logbook approach to provide self-report data on language use in clinics. Frequencies of mean scores using a 10-point visual analogue scale (VAS) indicated that the majority of DSAs spoke mainly Cantonese in clinics and interpreted for postgraduates and professors. Conversation Analysis (CA) examined recipient design across a corpus (n = 23) of video-recorded review consultations between non-Cantonese speaking expatriate dentists and their Cantonese L1 patients. Three patterns of mediated interpreting indicated were: dentist designated expansions; dentist initiated interpretations; and assistant initiated interpretations to both the dentist and patient. The third, rather than being perceived as negative, was found to be framed either in response to patient difficulties or within the specific task routines of general dentistry. The findings illustrate trends in dentistry towards personalized care and patient empowerment as a reaction to product delivery approaches to patient management. Implications are indicated for both treatment adherence and the education of dental professionals. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. [Learning virtual routes: what does verbal coding do in working memory?].

    Science.gov (United States)

    Gyselinck, Valérie; Grison, Élise; Gras, Doriane

    2015-03-01

    Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In experiment 1 a dual task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge as assessed by the visual recognition of landmarks was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone decision task as a nonsemantic auditory task. Results show that these new concurrent tasks impaired landmark knowledge and route knowledge differently. Results can be interpreted as showing that the coding of route knowledge could be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  15. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L × L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  16. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  17. Lineament interpretation. Short review and methodology

    Energy Technology Data Exchange (ETDEWEB)

    Tiren, Sven (GEOSIGMA AB (Sweden))

    2010-11-15

    interpretation, and the skill of the interpreter. Images and digital terrain models that display the relief of the studied area should, if possible, be illuminated in at least four directions to reduce biases regarding the orientation of structures. The resolution in the source data should be fully used and extrapolation of structures avoided in the primary interpretation of the source data. The interpretation of lineaments should be made in steps: a. Interpretation of each data set/image/terrain model is conducted separately; b. Compilation of all interpretations in a base lineament map and classification of the lineaments; and c. Construction of thematical maps, e.g. structural maps, rock block maps, and statistic presentation of lineaments. Generalisations and extrapolations of lineaments/structures may be made when producing the thematical maps. The construction of thematical maps should be supported by auxiliary information (geological and geomorphologic data and information on human impact in the area). Inferred tectonic structures should be controlled in field

  18. Lineament interpretation. Short review and methodology

    International Nuclear Information System (INIS)

    Tiren, Sven

    2010-11-01

    the interpreter. Images and digital terrain models that display the relief of the studied area should, if possible, be illuminated in at least four directions to reduce biases regarding the orientation of structures. The resolution in the source data should be fully used and extrapolation of structures avoided in the primary interpretation of the source data. The interpretation of lineaments should be made in steps: a. Interpretation of each data set/image/terrain model is conducted separately; b. Compilation of all interpretations in a base lineament map and classification of the lineaments; and c. Construction of thematical maps, e.g. structural maps, rock block maps, and statistic presentation of lineaments. Generalisations and extrapolations of lineaments/structures may be made when producing the thematical maps. The construction of thematical maps should be supported by auxiliary information (geological and geomorphologic data and information on human impact in the area). Inferred tectonic structures should be controlled in field

  19. European Equivalencies in Legal Interpreting and Translation

    DEFF Research Database (Denmark)

    Corsellis, Ann; Hertog, Erik; Martinsen, Bodil

    2002-01-01

    Within Europe there is increasing freedom of movement between countries and increasing inward migration. As a result, equivalent standards of legal interpreting and translation are required to allow reliable communication for judicial cooperation between member states, for criminal and civil matters which cross national borders and for the needs of multilingual populations. The European Convention on Human Rights (article 6, paragraph 3) is one of the main planks of relevant legislation. This international, two-year project has been funded by the EU Grotius programme to set out what is required...

  20. Statistics and Data Interpretation for Social Work

    CERN Document Server

    Rosenthal, James

    2011-01-01

    "Without question, this text will be the most authoritative source of information on statistics in the human services. From my point of view, it is a definitive work that combines a rigorous pedagogy with a down to earth (commonsense) exploration of the complex and difficult issues in data analysis (statistics) and interpretation. I welcome its publication.". -Praise for the First Edition. Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes

  1. A Comparative Study on Diagnostic Accuracy of Colour Coded Digital Images, Direct Digital Images and Conventional Radiographs for Periapical Lesions – An In Vitro Study

    Science.gov (United States)

    Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir

    2014-01-01

    Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to study the feasibility and diagnostic accuracy of colour coded digital radiographs in terms of the presence and size of lesions and to compare the diagnostic accuracy of colour coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of round burs, a conventional, RVG and colour coded image was taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver (kappa > 0.61) agreement for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur. Both were equally diagnostic for lesions made with larger bur sizes. The colour coding method was the least accurate of all the techniques. Conclusion: Conventional radiography traditionally forms the backbone in the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiography was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously with the emphasis on safety of patients and diagnostic quality of images. PMID:25584318
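
    The interobserver agreement quoted above (kappa > 0.61, conventionally read as 'substantial' agreement) is Cohen's kappa. A sketch for one observer pair using scikit-learn; the lesion ratings are invented placeholders (1 = lesion detected, 0 = not detected).

        from sklearn.metrics import cohen_kappa_score

        observer_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        observer_2 = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
        print(f"kappa = {cohen_kappa_score(observer_1, observer_2):.2f}")

    Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over simple accuracy in rating studies such as this one.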

  2. HUMAN COMMUNICATION AS MEDIATING THE UNITS OF PARAMETERISED ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Josip Stepanic

    2004-06-01

    Full Text Available Human communication is predominantly a mediated process. Mediators are units of the environment that are attributed functions within the local value set. They are utilised in such a way as to optimise the change of human states. In this article, a mediator-centred interpretation of human communication is given. The interpretation closely follows the concept of mediated interaction developed within physics. It is conjectured that the collection of mediators which humans use has a well-defined average. The averaged collection permits reliable interpretation as a human communication spectrum. The relation of the intensity of a spectral component to different senses, and to the intensity of interaction, is discussed.

  3. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.

  4. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  5. Locating protein-coding sequences under selection for additional, overlapping functions in 29 mammalian genomes

    DEFF Research Database (Denmark)

    Lin, Michael F; Kheradpour, Pouya; Washietl, Stefan

    2011-01-01

    conservation compared to typical protein-coding genes—especially at synonymous sites. In this study, we use genome alignments of 29 placental mammals to systematically locate short regions within human ORFs that show conspicuously low estimated rates of synonymous substitution across these species. The 29-species alignment provides statistical power to locate more than 10,000 such regions with resolution down to nine-codon windows, which are found within more than a quarter of all human protein-coding genes and contain ~2% of their synonymous sites. We collect numerous lines of evidence that the observed synonymous constraint in these regions reflects selection on overlapping functional elements including splicing regulatory elements, dual-coding genes, RNA secondary structures, microRNA target sites, and developmental enhancers. Our results show that overlapping functional elements are common in mammalian...

  6. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  7. cDNA sequence of human transforming gene hst and identification of the coding sequence required for transforming activity

    International Nuclear Information System (INIS)

    Taira, M.; Yoshida, T.; Miyagawa, K.; Sakamoto, H.; Terada, M.; Sugimura, T.

    1987-01-01

    The hst gene was originally identified as a transforming gene in DNAs from human stomach cancers and from a noncancerous portion of stomach mucosa by DNA-mediated transfection assay using NIH3T3 cells. cDNA clones of hst were isolated from the cDNA library constructed from poly(A)+ RNA of a secondary transformant induced by the DNA from a stomach cancer. The sequence analysis of the hst cDNA revealed the presence of two open reading frames. When this cDNA was inserted into an expression vector containing the simian virus 40 promoter, it efficiently induced the transformation of NIH3T3 cells upon transfection. It was found that one of the reading frames, which coded for 206 amino acids, was responsible for the transforming activity

  8. Coding Variation in ANGPTL4, LPL, and SVEP1 and the Risk of Coronary Disease

    NARCIS (Netherlands)

    Stitziel, Nathan O; Stirrups, Kathleen E; Masca, Nicholas G D; Erdmann, Jeanette; Ferrario, Paola G; König, Inke R; Weeke, Peter E; Webb, Thomas R; Auer, Paul L; Schick, Ursula M; Lu, Yingchang; Zhang, He; Dube, Marie-Pierre; Goel, Anuj; Farrall, Martin; Peloso, Gina M; Won, Hong-Hee; Do, Ron; van Iperen, Erik; Kanoni, Stavroula; Kruppa, Jochen; Mahajan, Anubha; Scott, Robert A; Willenberg, Christina; Braund, Peter S; van Capelleveen, Julian C; Doney, Alex S F; Donnelly, Louise A; Asselta, Rosanna; Merlini, Piera A; Duga, Stefano; Marziliano, Nicola; Denny, Josh C; Shaffer, Christian M; El-Mokhtari, Nour Eddine; Franke, Andre; Gottesman, Omri; Heilmann, Stefanie; Hengstenberg, Christian; Hoffman, Per; Holmen, Oddgeir L; Hveem, Kristian; Jansson, Jan-Håkan; Jöckel, Karl-Heinz; Kessler, Thorsten; Kriebel, Jennifer; Laugwitz, Karl L; Marouli, Eirini; Martinelli, Nicola; McCarthy, Mark I; Van Zuydam, Natalie R; Meisinger, Christa; Esko, Tõnu; Mihailov, Evelin; Escher, Stefan A; Alvar, Maris; Moebus, Susanne; Morris, Andrew D; Müller-Nurasyid, Martina; Nikpay, Majid; Olivieri, Oliviero; Lemieux Perreault, Louis-Philippe; AlQarawi, Alaa; Robertson, Neil R; Akinsanya, Karen O; Reilly, Dermot F; Vogt, Thomas F; Yin, Wu; Asselbergs, Folkert W; Kooperberg, Charles; Jackson, Rebecca D; Stahl, Eli; Strauch, Konstantin; Varga, Tibor V; Waldenberger, Melanie; Zeng, Lingyao; Kraja, Aldi T; Liu, Chunyu; Ehret, George B; Newton-Cheh, Christopher; Chasman, Daniel I; Chowdhury, Rajiv; Ferrario, Marco; Ford, Ian; Jukema, J Wouter; Kee, Frank; Kuulasmaa, Kari; Nordestgaard, Børge G; Perola, Markus; Saleheen, Danish; Sattar, Naveed; Surendran, Praveen; Tregouet, David; Young, Robin; Howson, Joanna M M; Butterworth, Adam S; Danesh, John; Ardissino, Diego; Bottinger, Erwin P; Erbel, Raimund; Franks, Paul W; Girelli, Domenico; Hall, Alistair S; Hovingh, G Kees; Kastrati, Adnan; Lieb, Wolfgang; Meitinger, Thomas; Kraus, William E; Shah, Svati H; McPherson, Ruth; Orho-Melander, Marju; Melander, Olle; Metspalu, Andres; Palmer, Colin N A; Peters, Annette; Rader, Daniel; Reilly, Muredach P; Loos, Ruth J F; Reiner, Alex P; Roden, Dan M; Tardif, Jean-Claude; Thompson, John R; Wareham, Nicholas J; Watkins, Hugh; Willer, Cristen J; Kathiresan, Sekkar; Deloukas, Panos; Samani, Nilesh J; Schunkert, Heribert

    BACKGROUND: The discovery of low-frequency coding variants affecting the risk of coronary artery disease has facilitated the identification of therapeutic targets. METHODS: Through DNA genotyping, we tested 54,003 coding-sequence variants covering 13,715 human genes in up to 72,868 patients with

  9. Taming Human Genetic Variability: Transcriptomic Meta-Analysis Guides the Experimental Design and Interpretation of iPSC-Based Disease Modeling

    Directory of Open Access Journals (Sweden)

    Pierre-Luc Germain

    2017-06-01

    Full Text Available Both the promises and pitfalls of the cell reprogramming research platform rest on human genetic variation, making the measurement of its impact one of the most urgent issues in the field. Harnessing large transcriptomics datasets of induced pluripotent stem cells (iPSC), we investigate the implications of this variability for iPSC-based disease modeling. In particular, we show that the widespread use of more than one clone per individual in combination with current analytical practices is detrimental to the robustness of the findings. We then proceed to identify methods to address this challenge and leverage multiple clones per individual. Finally, we evaluate the specificity and sensitivity of different sample sizes and experimental designs, presenting computational tools for power analysis. These findings and tools reframe the nature of replicates used in disease modeling and provide important resources for the design, analysis, and interpretation of iPSC-based studies.

  10. Laser Capture and Deep Sequencing Reveals the Transcriptomic Programmes Regulating the Onset of Pancreas and Liver Differentiation in Human Embryos

    Directory of Open Access Journals (Sweden)

    Rachel E. Jennings

    2017-11-01

    Full Text Available To interrogate the alternative fates of pancreas and liver in the earliest stages of human organogenesis, we developed laser capture, RNA amplification, and computational analysis of deep sequencing. Pancreas-enriched gene expression was less conserved between human and mouse than for liver. The dorsal pancreatic bud was enriched for components of Notch, Wnt, BMP, and FGF signaling, almost all genes known to cause pancreatic agenesis or hypoplasia, and over 30 unexplored transcription factors. SOX9 and RORA were imputed as key regulators in pancreas compared with EP300, HNF4A, and FOXA family members in liver. Analyses implied that current in vitro human stem cell differentiation follows a dorsal rather than a ventral pancreatic program and pointed to additional factors for hepatic differentiation. In summary, we provide the transcriptional codes regulating the start of human liver and pancreas development to facilitate stem cell research and clinical interpretation without inter-species extrapolation.

  11. Development of Screening Tools for the Interpretation of Chemical Biomonitoring Data

    Directory of Open Access Journals (Sweden)

    Richard A. Becker

    2012-01-01

    Full Text Available Evaluation of a larger number of chemicals in commerce from the perspective of potential human health risk has become a focus of attention in North America and Europe. Screening-level chemical risk assessment evaluations consider both exposure and hazard. Exposures are increasingly being evaluated through biomonitoring studies in humans. Interpreting human biomonitoring results requires comparison to toxicity guidance values. However, conventional chemical-specific risk assessments result in identification of toxicity-based exposure guidance values such as tolerable daily intakes (TDIs) as applied doses that cannot directly be used to evaluate exposure information provided by biomonitoring data in a health risk context. This paper describes a variety of approaches for development of screening-level exposure guidance values with translation from an external dose to a biomarker concentration framework for interpreting biomonitoring data in a risk context. Applications of tools and concepts including biomonitoring equivalents (BEs), the threshold of toxicologic concern (TTC), and generic toxicokinetic and physiologically based toxicokinetic models are described. These approaches employ varying levels of existing chemical-specific data, chemical class-specific assessments, and generic modeling tools in response to varying levels of available data in order to allow assessment and prioritization of chemical exposures for refined assessment in a risk management context.

  12. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
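
    The subband-adaptive scheme in both versions of this record amounts to picking, per subband, the densest of the five constellations whose error rate at the current subband SNR stays under the BER target. A sketch of that selection step; the switching thresholds are invented for illustration, whereas in practice they are read off the BER curves of the coded modulations and differ between the 10^-2 and 10^-4 targets.

        # (modulation, bits per symbol, minimum subband SNR in dB) - illustrative.
        SCHEMES = [("64QAM", 6, 24.0), ("16QAM", 4, 18.0), ("8AMPM", 3, 14.0),
                   ("QPSK", 2, 9.0), ("BPSK", 1, 4.0)]

        def pick_modulation(snr_db):
            """Densest constellation whose threshold is met; (None, 0) = no transmission."""
            for name, bits, threshold in SCHEMES:
                if snr_db >= threshold:
                    return name, bits
            return None, 0

        subband_snrs = [26.3, 12.7, 5.1, 20.4]
        plan = [pick_modulation(s) for s in subband_snrs]
        print(plan, "bits per OFDM symbol:", sum(b for _, b in plan))

    The turbo code rate (1/2 or 1/3) is chosen in the same threshold-driven way, either per subband or, in the single turbo code system, once across all subbands.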

  13. Quantum Codes From Cyclic Codes Over The Ring R_2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R_2 denote the ring F_2 + μF_2 + υF_2 + μυF_2 + wF_2 + μwF_2 + υwF_2 + μυwF_2. In this study, we construct quantum codes from cyclic codes over the ring R_2, for arbitrary length n, with the restrictions μ^2 = 0, υ^2 = 0, w^2 = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for cyclic codes over R_2 to contain their duals. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R_2 and we give an example of quantum error-correcting codes formed from cyclic codes over R_2. (paper)

  14. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may go unnoticed, with potentially disastrous consequences later on if a reactor is built to such a faulty design. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file, which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  15. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may go unnoticed, with potentially disastrous consequences later on if a reactor is built to such a faulty design. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file, which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
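
    The translate-and-audit idea described in records 14 and 15 can be sketched generically: read each variable from the old-format model, map it through a table of format rules, write the new-format entries, and log every name, value, and meaning for later verification. The field names and the simple "NAME = VALUE" layout below are invented placeholders; real VSOP input decks are much more complex fixed-format files.

      # Generic sketch of a semi-automatic input-model translator that also
      # emits a verification log. All field names and the file layout are
      # invented placeholders, not actual VSOP input syntax.

      RULES = {
          # old name   -> (new name,       meaning recorded in the log)
          "NCORE":       ("CORE_REGIONS",  "number of core regions"),
          "POWER_MW":    ("THERMAL_POWER", "reactor thermal power [MW]"),
          "FUEL_TYPE":   ("FUEL_MODEL",    "fuel element model id"),
      }

      def translate(old_lines, log_path="translation.log"):
          new_lines, log = [], []
          for line in old_lines:
              name, _, value = (p.strip() for p in line.partition("="))
              if name in RULES:
                  new_name, meaning = RULES[name]
                  new_lines.append(f"{new_name} = {value}")
                  log.append(f"{name} -> {new_name} : {value}  # {meaning}")
              else:
                  log.append(f"{name} : UNMAPPED, copied verbatim")
                  new_lines.append(line)
          with open(log_path, "w") as fh:   # permanent audit trail for review
              fh.write("\n".join(log) + "\n")
          return new_lines

      old_model = ["NCORE = 12", "POWER_MW = 400", "FUEL_TYPE = 3"]
      print(translate(old_model))
      # ['CORE_REGIONS = 12', 'THERMAL_POWER = 400', 'FUEL_MODEL = 3']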

  16. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as the “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it patches subsequences with “0s”, leading to lower power consumption. The proposed code has an improved cross-correlation, resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with that of other prime codes. The results demonstrate an improved performance, and a BER floor of 10−9 was achieved.
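
    For background, the sketch below generates the classic prime-code family that constructions such as this one extend: for a prime p, each of the p codewords has length p² and places one “1” chip in each of its p blocks according to a linear map over GF(p). It illustrates the baseline family only; the grouping, extension, and zero-patching steps of the proposed code are not reproduced here.

      # Sketch of the classic prime-code construction over GF(p). For each
      # a in {0, ..., p-1}, place a single '1' chip in each of the p blocks
      # of length p, at offset (a * i) mod p in block i.

      def prime_code(p: int):
          """Return the p binary codewords (each of length p*p) for prime p."""
          codewords = []
          for a in range(p):
              word = [0] * (p * p)
              for i in range(p):
                  word[i * p + (a * i) % p] = 1    # one chip per block
              codewords.append(word)
          return codewords

      def cross_correlation(x, y):
          """In-phase cross-correlation: number of overlapping '1' chips."""
          return sum(xi & yi for xi, yi in zip(x, y))

      codes = prime_code(5)                          # 5 codewords, length 25, weight 5
      print(codes[2])
      print(cross_correlation(codes[1], codes[3]))   # low overlap between users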

  17. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    Science.gov (United States)

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  18. Legislative virus in the system of the Criminal Code of Ukraine: definition and actualization of the problem on the example of Article 368-2 of the Criminal Code of Ukraine

    Directory of Open Access Journals (Sweden)

    Василь Миколайович Киричко

    2016-06-01

    This article is the first to substantiate the need for scientific use of the term «legislative virus in the system of the Criminal Code of Ukraine» and to provide its definition. This virus is proposed to be understood as a legislative provision which, after its inclusion in the Criminal Code of Ukraine and through interaction with other elements of the system, shapes the content of the criminal law so as to legalize the possibility of arbitrary violations of human rights when it is applied in practice. As an example, the presence of the virus in Article 368-2 of the Criminal Code of Ukraine, «Illegal enrichment», is analyzed in detail: the history of the virus's emergence is traced, along with its impact on the content of this Article and how it causes human rights violations when the Article is applied in practice. The search for such viruses in the system of the Criminal Code of Ukraine is recognized as an urgent task for the science of criminal law. Fighting these viruses is one of the areas of implementation of the rule-of-law principle in the criminal law of Ukraine.

  19. Caring about dying persons and their families: Interpretation, practice and emotional labour.

    Science.gov (United States)

    Funk, Laura M; Peters, Sheryl; Roger, Kerstin Stieber

    2018-02-20

    The importance of emotional support for dying persons and their families has been well established, yet we know less about how care workers understand emotional processes related to death and dying, or how these understandings are connected to care practices and emotional labour at the end of life. The aim of this study was to explore how healthcare workers interpret and respond to emotional needs of dying persons and their families. Qualitative data were collected between 2013 and 2014 through in-depth, in-person interviews with 14 nurses and 12 healthcare aides in one Western Canadian city. Transcripts were analysed using an inductive, interpretive thematic coding approach and the analytic lens of emotional labour. Dominant interpretive frames of a "good death" informed participants' emotionally supportive practice. This included guiding patients and families to "open up" about their emotions to activate the grief process. There was concern that incomplete grieving would result in anger being directed towards care staff. The goal of promoting emotional sharing informed the work of "caring about." Although palliative philosophies opened up moral and professional space for "caring about" in the context of organisational norms which often discouraged these practices, the tension between the two, and the lack of time for this work, may encourage surface expressions rather than authentic emotional care. © 2018 John Wiley & Sons Ltd.

  20. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    International Nuclear Information System (INIS)

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system, RETRAN/MASTER, has been developed for best-estimate simulations of the interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating the 3-D reactor core kinetics analysis code MASTER into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem, developed by the OECD/NEA to verify the performance of coupled kinetics and system transient codes.
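
    The essence of such a coupling is an exchange loop: at each time step the system code passes fuel and coolant conditions to the kinetics code, which returns an updated power distribution as feedback. The sketch below shows this explicit coupling loop reduced to a single node with invented feedback coefficients; it is a schematic of the data exchange only, not of RETRAN or MASTER themselves.

      # Schematic of a coupled thermal-hydraulics / kinetics time loop,
      # reduced to one node with invented coefficients. Real coupled codes
      # exchange full 3-D power and property fields through an interface.

      ALPHA_FUEL = -2.0e-5   # reactivity per K of fuel temperature (hypothetical)
      T_REF      = 900.0     # reference fuel temperature [K]

      def kinetics_step(fuel_temp, power):
          """Stand-in for the kinetics code: power responds to feedback."""
          reactivity = ALPHA_FUEL * (fuel_temp - T_REF)
          return power * (1.0 + 50.0 * reactivity)   # crude point-kinetics gain

      def thermal_hydraulics_step(fuel_temp, power, dt=0.1):
          """Stand-in for the system code: fuel heats with power, cools to 800 K."""
          heatup = 1.0e-3 * power                    # K/s per MW (toy value)
          cooldown = 0.05 * (fuel_temp - 800.0)      # K/s toward coolant
          return fuel_temp + dt * (heatup - cooldown)

      power, fuel_temp = 3000.0, 900.0               # MW, K
      for step in range(100):                        # explicit coupling loop
          power = kinetics_step(fuel_temp, power)            # T/H state -> kinetics
          fuel_temp = thermal_hydraulics_step(fuel_temp, power)  # power -> T/H
      print(f"after 10 s: power = {power:.0f} MW, fuel T = {fuel_temp:.0f} K")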