WorldWideScience

Sample records for multiple point principle

  1. Coexistence of different vacua in the effective quantum field theory and multiple point principle

    International Nuclear Information System (INIS)

    Volovik, G.E.

    2004-01-01

    According to the multiple point principle, our Universe is on the coexistence curve of two or more phases of the quantum vacuum. The coexistence of different quantum vacua can be regulated by the exchange of global fermionic charges between the vacua. If the coexistence is regulated by the baryonic charge, all the coexisting vacua exhibit baryonic asymmetry. Due to the exchange of the baryonic charge between the vacuum and matter, which occurs above the electroweak transition, the baryonic asymmetry of the vacuum induces the baryonic asymmetry of matter in our Standard-Model phase of the quantum vacuum.

  2. PRINCIPLE OF POINT MAKING OF MUTUALLY ACCEPTABLE MULTIPROJECTION DECISION

    Directory of Open Access Journals (Sweden)

    Olga N. Lapaeva

    2015-01-01

    Full Text Available. The article sets forth the principle of point making of a mutually acceptable multi-projection decision in economics. Under this principle, each stakeholder searches for its best variant, and the joint result is formed by intersecting the individual sets.

  3. 41 CFR Appendix A to Subpart C of... - 3-Key Points and Principles

    Science.gov (United States)

    2010-07-01

    ... Principles A Appendix A to Subpart C of Part 102 Public Contracts and Property Management Federal Property... 102-3—Key Points and Principles This appendix provides additional guidance in the form of answers to frequently asked questions and identifies key points and principles that may be applied to situations not...

  4. 41 CFR Appendix A to Subpart D of... - 3-Key Points and Principles

    Science.gov (United States)

    2010-07-01

    ... Principles A Appendix A to Subpart D of Part 102 Public Contracts and Property Management Federal Property... Subpart D of Part 102-3—Key Points and Principles This appendix provides additional guidance in the form of answers to frequently asked questions and identifies key points and principles that may be applied...

  5. Set Partitions and the Multiplication Principle

    Science.gov (United States)

    Lockwood, Elise; Caughman, John S., IV

    2016-01-01

    To further understand student thinking in the context of combinatorial enumeration, we examine student work on a problem involving set partitions. In this context, we note some key features of the multiplication principle that were often not attended to by students. We also share a productive way of thinking that emerged for several students who…

  6. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.

  7. Ten Anchor Points for Teaching Principles of Marketing

    Science.gov (United States)

    Tomkovick, Chuck

    2004-01-01

    Effective marketing instructors commonly share a love for their students, an affinity for the subject matter, and a devotion to continuous quality improvement. The purpose of this article is to highlight 10 anchor points for teaching Principles of Marketing, which are designed to better engage students in the learning process. These anchor…

  8. 41 CFR Appendix A to Subpart B of... - 3-Key Points and Principles

    Science.gov (United States)

    2010-07-01

    ... Principles A Appendix A to Subpart B of Part 102 Public Contracts and Property Management Federal Property.... B, App. A Appendix A to Subpart B of Part 102-3—Key Points and Principles This appendix provides... principles that may be applied to situations not covered elsewhere in this subpart. The guidance follows: Key...

  9. 41 CFR Appendix A to Subpart A of... - 3-Key Points and Principles

    Science.gov (United States)

    2010-07-01

    ... Principles A Appendix A to Subpart A of Part 102 Public Contracts and Property Management Federal Property..., Subpt. A, App. A Appendix A to Subpart A of Part 102-3—Key Points and Principles This appendix provides... principles that may be applied to situations not covered elsewhere in this subpart. The guidance follows: Key...

  10. Multiple point statistical simulation using uncertain (soft) conditional data

    Science.gov (United States)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have moved from covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, in which more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data and hence provide a computationally attractive approach for integrating information about a reservoir model.
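
    As a rough Python illustration of the preferential-path idea (not the authors' implementation; the soft-probability array and the entropy-based ranking are assumptions), the simulation path can be ordered so that nodes with the most informative soft data are visited first:

        import numpy as np

        def preferential_path(soft_probs, rng=None):
            """Order grid nodes so that nodes with the most informative
            (lowest-entropy) soft probabilities are simulated first; ties are
            broken randomly.

            soft_probs: array of shape (n_nodes, n_facies) with per-node facies
            probabilities (uniform rows correspond to uninformed nodes).
            """
            rng = np.random.default_rng() if rng is None else rng
            p = np.clip(soft_probs, 1e-12, 1.0)
            entropy = -(p * np.log(p)).sum(axis=1)     # low entropy = informative
            jitter = rng.random(len(entropy)) * 1e-9   # random tie-breaking
            return np.argsort(entropy + jitter)        # informative nodes first

        probs = np.array([[0.5, 0.5],    # uninformed
                          [0.95, 0.05],  # informative
                          [0.6, 0.4]])
        print(preferential_path(probs))  # -> [1 2 0]: the confident node is visited first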

  11. Turning challenges into design principles: Telemonitoring systems for patients with multiple chronic conditions.

    Science.gov (United States)

    Sultan, Mehwish; Kuluski, Kerry; McIsaac, Warren J; Cafazzo, Joseph A; Seto, Emily

    2018-01-01

    People with multiple chronic conditions often struggle with managing their health. The purpose of this research was to identify the specific challenges of patients with multiple chronic conditions and to use the findings to form design principles for a telemonitoring system tailored for these patients. Semi-structured interviews with 15 patients with multiple chronic conditions and 10 clinicians were conducted to gain an understanding of their needs and preferences for a smartphone-based telemonitoring system. The interviews were analyzed using a conventional content analysis technique, resulting in six themes. Design principles developed from the themes included that the system must be modular to accommodate various combinations of conditions, reinforce a routine, consolidate record keeping, and provide actionable feedback to patients. Designing an application for multiple chronic conditions is complex due to variability in patient conditions; the design principles developed in this study can therefore help guide future innovations aimed at helping to manage this population.

  12. Supporting Multiple Pointing Devices in Microsoft Windows

    DEFF Research Database (Denmark)

    Westergaard, Michael

    2002-01-01

    In this paper the implementation of a Microsoft Windows driver, including APIs supporting multiple pointing devices, is presented. Microsoft Windows does not natively support multiple pointing devices controlling independent cursors, and a number of solutions to this have been implemented by us and others. Here we motivate and describe a general solution and show how user applications can use it by means of a framework. The device driver and the supporting APIs will be made available free of charge. Interested parties can contact the author for more information.

  13. History Matching Through a Smooth Formulation of Multiple-Point Statistics

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Zunino, Andrea; Lange, Katrine

    2014-01-01

    We propose a smooth formulation of multiple-point statistics that enables us to solve inverse problems using gradient-based optimization techniques. We introduce a differentiable function that quantifies the mismatch between the multiple-point statistics of a training image and of a given model. We show that, by minimizing this function, any continuous image can be gradually transformed into an image that honors the multiple-point statistics of the discrete training image. The solution to an inverse problem is then found by minimizing the sum of two mismatches: the mismatch with data and the mismatch with multiple-point statistics. As a result, in the framework of the Bayesian approach, such a solution belongs to a high posterior region. The methodology, while applicable to any inverse problem with a training-image-based prior, is especially beneficial for problems which require expensive...
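
    A schematic form of the resulting objective (J, f_data, f_MPS and the weight \lambda are generic placeholders; the authors' actual differentiable pattern-mismatch function is not reproduced here) is

        J(\mathbf{m}) \;=\; f_{\mathrm{data}}(\mathbf{m}) \;+\; \lambda\, f_{\mathrm{MPS}}(\mathbf{m}), \qquad \mathbf{m}^{*} \;=\; \arg\min_{\mathbf{m}} J(\mathbf{m}),

    where f_data is the data mismatch, f_MPS is the differentiable mismatch between the multiple-point statistics of the model \mathbf{m} and of the training image, and both terms must be differentiable in \mathbf{m} for gradient-based optimization to apply.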

  14. Multi-lane detection based on multiple vanishing points detection

    Science.gov (United States)

    Li, Chuanxiang; Nie, Yiming; Dai, Bin; Wu, Tao

    2015-03-01

    Lane detection plays a significant role in Advanced Driver Assistance Systems (ADAS) for intelligent vehicles. In this paper we present a multi-lane detection method based on the detection of multiple vanishing points. A new multi-lane model assumes that a single lane, which has two approximately parallel boundaries, may not be parallel to the other lanes on the road plane. Non-parallel lanes are associated with different vanishing points. A biologically plausible model is used to detect multiple vanishing points and to fit the lane model. Experimental results show that the proposed method can detect both parallel and non-parallel lanes.
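
    The paper's biologically plausible detector is not reproduced here; as a generic illustration of the underlying geometry, a vanishing point for one group of lane boundaries can be estimated in Python as the least-squares intersection of their image lines (the line parameters and the normal-equation solve below are illustrative assumptions):

        import numpy as np

        def vanishing_point(lines):
            """Least-squares intersection of image lines a*x + b*y + c = 0.

            lines: array of shape (n, 3); each row (a, b, c) is normalized so
            that a**2 + b**2 == 1. Returns the point minimizing the sum of
            squared perpendicular distances to all lines.
            """
            A = lines[:, :2]                 # line normals (a, b)
            c = lines[:, 2]
            # Normal equations: (A^T A) [x, y]^T = -A^T c
            return np.linalg.solve(A.T @ A, -A.T @ c)

        # Two lane boundaries converging near (2, 1) (toy numbers):
        l1 = np.array([1.0, -1.0, -1.0]) / np.sqrt(2)   # x - y - 1 = 0
        l2 = np.array([1.0,  1.0, -3.0]) / np.sqrt(2)   # x + y - 3 = 0
        print(vanishing_point(np.vstack([l1, l2])))      # approx. [2., 1.]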

  15. Evaluation of multiple emission point facilities

    International Nuclear Information System (INIS)

    Miltenberger, R.P.; Hull, A.P.; Strachan, S.; Tichler, J.

    1988-01-01

    In 1970, the New York State Department of Environmental Conservation (NYSDEC) assumed responsibility for the environmental aspect of the state's regulatory program for by-product, source, and special nuclear material. The major objective of this study was to provide consultation to NYSDEC and the US NRC to assist NYSDEC in determining if broad-based licensed facilities with multiple emission points were in compliance with NYCRR Part 380. Under this contract, BNL would evaluate a multiple emission point facility, identified by NYSDEC, as a case study. The review would be a nonbinding evaluation of the facility to determine likely dispersion characteristics, compliance with specified release limits, and implementation of the ALARA philosophy regarding effluent release practices. From the data collected, guidance as to areas of future investigation and the impact of new federal regulations were to be developed. Reported here is the case study for the University of Rochester, Strong Memorial Medical Center and Riverside Campus

  16. Le Chatelier's principle with multiple relaxation channels

    Science.gov (United States)

    Gilmore, R.; Levine, R. D.

    1986-05-01

    Le Chatelier's principle is discussed within the constrained variational approach to thermodynamics. The formulation is general enough to encompass systems not in thermal (or chemical) equilibrium. Particular attention is given to systems with multiple constraints which can be relaxed. The moderation of the initial perturbation increases as additional constraints are removed. This result is studied in particular when the (coupled) relaxation channels have widely different time scales. A series of inequalities is derived which describes the successive moderation as each successive relaxation channel opens up. These inequalities are interpreted within the metric-geometry representation of thermodynamics.

  17. First-principles study of point-defect production in Si and SiC

    International Nuclear Information System (INIS)

    Windl, W.; Lenosky, T.J.; Kress, J.D.; Voter, A.F.

    1998-03-01

    The authors have calculated the displacement-threshold energy E_d for point-defect production in Si and SiC using empirical potentials, tight-binding, and first-principles methods. They show that, depending on the knock-on direction, 64-atom simulation cells can be sufficient to allow a nearly finite-size-effect-free calculation, thus making the use of first-principles methods possible. They use molecular dynamics (MD) techniques and propose the use of a sudden approximation which agrees reasonably well with the MD results for selected directions and which allows estimates of E_d without employing an MD simulation and the use of computationally demanding first-principles methods. Comparing the results with experiment, the authors find the full self-consistent first-principles method in conjunction with the sudden approximation to be a reliable and easy method to predict E_d. Furthermore, they have examined the temperature dependence of E_d for C in SiC and found it to be negligible.

  18. A MOSUM procedure for the estimation of multiple random change points

    OpenAIRE

    Eichinger, Birte; Kirch, Claudia

    2018-01-01

    In this work, we investigate statistical properties of change point estimators based on moving sum statistics. We extend results for testing in a classical situation with multiple deterministic change points by allowing for random exogenous change points that arise in Hidden Markov or regime switching models among others. To this end, we consider a multiple mean change model with possible time series errors and prove that the number and location of change points are estimated consistently by ...
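
    As a rough Python sketch of the moving-sum idea for a mean-change model (the bandwidth G, the threshold and the toy data below are illustrative choices, not the authors' estimator or its asymptotic calibration):

        import numpy as np

        def mosum_statistic(x, G):
            """Moving-sum (MOSUM) statistic for mean changes: difference of sums
            over the G observations after and before each candidate time k."""
            x = np.asarray(x, dtype=float)
            T = np.zeros(len(x))
            for k in range(G, len(x) - G):
                T[k] = abs(x[k:k + G].sum() - x[k - G:k].sum()) / np.sqrt(2 * G)
            return T

        def estimate_change_points(x, G, threshold):
            """Change points are estimated as local maxima of the MOSUM statistic
            that exceed the threshold, separated by at least G observations."""
            T = mosum_statistic(x, G)
            cps, k = [], G
            while k < len(x) - G:
                if T[k] > threshold and T[k] == T[max(0, k - G):k + G + 1].max():
                    cps.append(k)
                    k += G          # skip the environment of a detected change
                else:
                    k += 1
            return cps

        # Toy example: mean shifts at t = 100 and t = 200.
        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100), rng.normal(0, 1, 100)])
        print(estimate_change_points(x, G=30, threshold=4.0))   # two estimates, near 100 and 200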

  19. Tripled Fixed Point in Ordered Multiplicative Metric Spaces

    Directory of Open Access Journals (Sweden)

    Laishram Shanjit

    2017-06-01

    Full Text Available. In this paper, we present some tripled fixed point theorems in partially ordered multiplicative metric spaces that depend on another function. Our results generalise the results of [6] and [5].

  20. Insight into point defects and impurities in titanium from first principles

    Science.gov (United States)

    Nayak, Sanjeev K.; Hung, Cain J.; Sharma, Vinit; Alpay, S. Pamir; Dongare, Avinash M.; Brindley, William J.; Hebert, Rainer J.

    2018-03-01

    Titanium alloys find extensive use in the aerospace and biomedical industries due to a unique combination of strength, density, and corrosion resistance. Decades of mostly experimental research have led to a large body of knowledge of the processing-microstructure-properties linkages, but much of the existing understanding of the point defects that play a significant role in the mechanical properties of titanium is based on semi-empirical rules. In this work, we present the results of a detailed self-consistent first-principles study developed to determine the formation energies of intrinsic point defects, including vacancies and self-interstitials, and of extrinsic point defects such as interstitial and substitutional impurities/dopants. We find that most elements, regardless of size, prefer substitutional positions, but highly electronegative elements, such as C, N, O, F, S, and Cl, some of which are common impurities in Ti, occupy interstitial positions.
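
    For reference, the defect formation energies discussed above are commonly computed from supercell total energies as (a standard first-principles expression, not a formula quoted from the abstract; the chemical potentials depend on the chosen reservoirs)

        E_f[X] \;=\; E_{\mathrm{tot}}[X] \;-\; E_{\mathrm{tot}}[\mathrm{bulk}] \;-\; \sum_i n_i \mu_i ,

    where E_tot[X] is the total energy of the supercell containing defect X, E_tot[bulk] that of the perfect supercell, n_i the number of atoms of species i added (n_i > 0) or removed (n_i < 0), and \mu_i the corresponding chemical potential.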

  1. Reduction of bias in neutron multiplicity assay using a weighted point model

    Energy Technology Data Exchange (ETDEWEB)

    Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)

    2004-01-01

    Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions, such as uniform multiplication, in the point model used to derive the multiplicity equations cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations in the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and on the amount and energy distribution of the (α,n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample and underestimates the plutonium mass. The weighted point model potentially can provide assay accuracy of ≈2% (1σ) for cylindrical plutonium metal samples < 4 kg with α < 1 without knowing the exact shape of the samples, provided that the (α,n) source is uniformly distributed throughout the...

  2. Regulatory issues with multiplicity in drug approval: Principles and controversies in a changing landscape.

    Science.gov (United States)

    Benda, Norbert; Brandt, Andreas

    2018-01-01

    Recently, new draft guidelines on multiplicity issues in clinical trials have been issued by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA), respectively. Multiplicity is an issue in clinical trials if the probability of a false-positive decision is increased by insufficiently accounting for the testing of multiple hypotheses. We outline the regulatory principles related to multiplicity issues in confirmatory clinical trials intended to support a marketing authorization application in the EU, describe the reasons for the increasing complexity of multiple hypothesis testing, and discuss the specific multiplicity issues that emerge within the regulatory context and are relevant for drug approval.
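
    The error-rate inflation that motivates such adjustments can be made explicit; for m independent tests, each performed at level \alpha when all null hypotheses are true, the family-wise error rate is (an illustrative textbook calculation, not part of the cited abstract)

        \mathrm{FWER} \;=\; 1 - (1 - \alpha)^{m},

    so that \alpha = 0.05 and m = 10 already give roughly a 40% chance of at least one false-positive finding if no adjustment is applied.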

  3. An evolutionary reduction principle for mutation rates at multiple Loci.

    Science.gov (United States)

    Altenberg, Lee

    2011-06-01

    A model of mutation rate evolution for multiple loci under arbitrary selection is analyzed. Results are obtained using techniques from Karlin (Evolutionary Biology, vol. 14, pp. 61-204, 1982) that overcome the weak selection constraints needed for tractability in prior studies of multilocus event models. A multivariate form of the reduction principle is found: reduction results at individual loci combine topologically to produce a surface of mutation rate alterations that are neutral for a new modifier allele. New mutation rates survive if and only if they fall below this surface, a generalization of the hyperplane found by Zhivotovsky et al. (Proc. Natl. Acad. Sci. USA 91, 1079-1083, 1994) for a multilocus recombination modifier. Increases in mutation rates at some loci may evolve if compensated for by decreases at other loci. The strength of selection on the modifier scales in proportion to the number of germline cell divisions, and increases with the number of loci affected. Loci that do not make a difference to marginal fitnesses at equilibrium are not subject to the reduction principle, and under fine tuning of mutation rates would be expected to have higher mutation rates than loci in mutation-selection balance. Other results include the nonexistence of 'viability analogous, Hardy-Weinberg' modifier polymorphisms under multiplicative mutation, and the sufficiency of average transmission rates to encapsulate the effect of modifier polymorphisms on the transmission of loci under selection. A conjecture is offered regarding situations, like recombination in the presence of mutation, that exhibit departures from the reduction principle. Constraints for tractability are: tight linkage of all loci, initial fixation at the modifier locus, and mutation distributions comprising transition probabilities of reversible Markov chains.

  4. Point specificity in acupuncture

    Directory of Open Access Journals (Sweden)

    Choi Emma M

    2012-02-01

    Full Text Available. The existence of point specificity in acupuncture is controversial, because many acupuncture studies using this principle to select control points have found that sham acupoints have similar effects to those of verum acupoints. Furthermore, the results of pain-related studies based on visual analogue scales have not supported the concept of point specificity. In contrast, hemodynamic, functional magnetic resonance imaging and neurophysiological studies evaluating the responses to stimulation of multiple points on the body surface have shown that point-specific actions are present. This review article focuses on clinical and laboratory studies supporting the existence of point specificity in acupuncture and also addresses studies that do not support this concept. Further research is needed to elucidate the point-specific actions of acupuncture.

  5. A large deviation principle in Hölder norm for multiple fractional integrals

    OpenAIRE

    Sanz-Solé, Marta; Torrecilla-Tarantino, Iván

    2007-01-01

    For a fractional Brownian motion $B^H$ with Hurst parameter $H \in \,]1/4,1/2[\, \cup \,]1/2,1[$, multiple indefinite integrals on a simplex are constructed and the regularity of their sample paths is studied. Then, it is proved that the family of probability laws of the processes obtained by replacing $B^H$ by $\epsilon^{1/2} B^H$ satisfies a large deviation principle in Hölder norm. The definition of the multiple integrals relies upon a representation of the fractional Brownian motion in t...

  6. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    Directory of Open Access Journals (Sweden)

    Md Selim Hossain

    Full Text Available. In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports Koblitz and random curves for the key sizes 233 and 163 bits. For the group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
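
    As generic background for the double-and-add structure that a combined point-doubling/point-addition (PDPA) unit accelerates, here is a Python sketch of scalar point multiplication on a toy short-Weierstrass curve over a small prime field (the paper's hardware targets binary-field NIST curves in Jacobian projective coordinates; the curve, modulus and helper names below are illustrative assumptions):

        P_MOD = 97            # toy prime modulus, not a NIST parameter
        A, B = 2, 3           # toy curve y^2 = x^3 + 2x + 3 over F_97

        def inv_mod(x, p=P_MOD):
            return pow(x, p - 2, p)            # Fermat inverse (p prime)

        def point_add(P, Q):
            """Affine point addition; None represents the point at infinity."""
            if P is None: return Q
            if Q is None: return P
            (x1, y1), (x2, y2) = P, Q
            if x1 == x2 and (y1 + y2) % P_MOD == 0:
                return None                    # P + (-P) = infinity
            if P == Q:
                lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD   # tangent slope
            else:
                lam = (y2 - y1) * inv_mod(x2 - x1) % P_MOD          # chord slope
            x3 = (lam * lam - x1 - x2) % P_MOD
            y3 = (lam * (x1 - x3) - y1) % P_MOD
            return (x3, y3)

        def point_mul(k, P):
            """Left-to-right double-and-add: the doubling and the conditional
            addition are the two operations a PDPA unit fuses per key bit."""
            R = None
            for bit in bin(k)[2:]:
                R = point_add(R, R)            # point doubling
                if bit == '1':
                    R = point_add(R, P)        # point addition
            return R

        G = (3, 6)                             # lies on the toy curve
        print(point_mul(2, G))                 # (80, 10)
        print(point_mul(5, G))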

  7. Exact multiple scattering theory of two-nucleus collisions including the Pauli principle

    International Nuclear Information System (INIS)

    Gurvitz, S.A.

    1981-01-01

    Exact equations for two-nucleus scattering are derived in which the effects of the Pauli principle are fully included. Our method exploits a modified equation for the scattering of two identical nucleons, which is obtained at the beginning. Considering proton-nucleus scattering we found that the resulting amplitude has two components, one resembling a multiple scattering series for distinguishable particles, and the other a distorted (A-1) nucleon cluster exchange. For elastic pA scattering the multiple scattering amplitude is found in the form of an optical potential expansion. We show that the Kerman-McManus-Thaler theory of the optical potential could be easily modified to include the effects of antisymmetrization of the projectile with the target nucleons. Nucleus-nucleus scattering is studied first for distinguishable target and beam nucleus. Afterwards the Pauli principle is included, where only the case of deuteron-nucleus scattering is discussed in detail. The resulting amplitude has four components. Two of them correspond to modified multiple scattering expansions and the others are distorted (A-1)- and (A-2)- nucleon cluster exchange. The result for d-A scattering is extended to the general case of nucleus-nucleus scattering. The equations are simple to use and as such constitute an improvement over existing schemes

  8. Point defects in thorium nitride: A first-principles study

    Energy Technology Data Exchange (ETDEWEB)

    Pérez Daroca, D., E-mail: pdaroca@tandar.cnea.gov.ar [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas (Argentina); Llois, A.M. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas (Argentina); Mosca, H.O. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica (Argentina); Instituto de Tecnología Jorge A. Sabato, UNSAM-CNEA (Argentina)

    2016-11-15

    Thorium and its compounds (carbides and nitrides) are being investigated as possible materials to be used as nuclear fuels for Generation-IV reactors. As a first step in the research of these materials under irradiation, we study the formation energies and stability of point defects in thorium nitride by means of first-principles calculations within the framework of density functional theory. We focus on vacancies, interstitials, Frenkel pairs and Schottky defects. We found that N and Th vacancies have almost the same formation energy and that the most energetically favorable defects of all studied in this work are N interstitials. Such results for ThN, to the best of the authors' knowledge, have not been obtained previously, either experimentally or theoretically.

  9. Point defects in thorium nitride: A first-principles study

    International Nuclear Information System (INIS)

    Pérez Daroca, D.; Llois, A.M.; Mosca, H.O.

    2016-01-01

    Thorium and its compounds (carbides and nitrides) are being investigated as possible materials to be used as nuclear fuels for Generation-IV reactors. As a first step in the research of these materials under irradiation, we study the formation energies and stability of point defects in thorium nitride by means of first-principles calculations within the framework of density functional theory. We focus on vacancies, interstitials, Frenkel pairs and Schottky defects. We found that N and Th vacancies have almost the same formation energy and that the most energetically favorable defects of all studied in this work are N interstitials. Such results for ThN, to the best of the authors' knowledge, have not been obtained previously, either experimentally or theoretically.

  10. First-principles study of point defects in thorium carbide

    International Nuclear Information System (INIS)

    Pérez Daroca, D.; Jaroszewicz, S.; Llois, A.M.; Mosca, H.O.

    2014-01-01

    Thorium-based materials are currently being investigated in relation with their potential utilization in Generation-IV reactors as nuclear fuels. One of the most important issues to be studied is their behavior under irradiation. A first approach to this goal is the study of point defects. By means of first-principles calculations within the framework of density functional theory, we study the stability and formation energies of vacancies, interstitials and Frenkel pairs in thorium carbide. We find that C isolated vacancies are the most likely defects, while C interstitials are energetically favored as compared to Th ones. Such results for ThC, to the best of the authors' knowledge, have not been obtained previously, either experimentally or theoretically. For this reason, we compare with results on other compounds with the same NaCl-type structure.

  11. First-principles study of point defects in thorium carbide

    Energy Technology Data Exchange (ETDEWEB)

    Pérez Daroca, D., E-mail: pdaroca@tandar.cnea.gov.ar [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica, Av. General Paz 1499, (1650) San Martin, Buenos Aires (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, (1033) Buenos Aires (Argentina); Jaroszewicz, S. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica, Av. General Paz 1499, (1650) San Martin, Buenos Aires (Argentina); Instituto de Tecnología Jorge A. Sabato, UNSAM-CNEA, Av. General Paz 1499, (1650) San Martin, Buenos Aires (Argentina); Llois, A.M. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica, Av. General Paz 1499, (1650) San Martin, Buenos Aires (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, (1033) Buenos Aires (Argentina); Mosca, H.O. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica, Av. General Paz 1499, (1650) San Martin, Buenos Aires (Argentina); Instituto de Tecnología Jorge A. Sabato, UNSAM-CNEA, Av. General Paz 1499, (1650) San Martin, Buenos Aires (Argentina)

    2014-11-15

    Thorium-based materials are currently being investigated in relation with their potential utilization in Generation-IV reactors as nuclear fuels. One of the most important issues to be studied is their behavior under irradiation. A first approach to this goal is the study of point defects. By means of first-principles calculations within the framework of density functional theory, we study the stability and formation energies of vacancies, interstitials and Frenkel pairs in thorium carbide. We find that C isolated vacancies are the most likely defects, while C interstitials are energetically favored as compared to Th ones. Such results for ThC, to the best of the authors' knowledge, have not been obtained previously, either experimentally or theoretically. For this reason, we compare with results on other compounds with the same NaCl-type structure.

  12. Point defect thermodynamics and diffusion in Fe3C: A first-principles study

    International Nuclear Information System (INIS)

    Chao Jiang; Uberuaga, B.P.; Srinivasan, S.G.

    2008-01-01

    The point defect structure of cementite (Fe3C) is investigated using a combination of the statistical mechanical Wagner-Schottky model and first-principles calculations within the generalized gradient approximation. Large 128-atom supercells are employed to obtain fully converged point defect formation energies. The present study unambiguously shows that carbon vacancies and octahedral carbon interstitials are the structural defects in C-depleted and C-rich cementite, respectively. The dominant thermal defects in C-depleted and stoichiometric cementite are found to be carbon Frenkel pairs. In C-rich cementite, however, the primary thermal excitations are strongly temperature-dependent: interbranch, Schottky and Frenkel defects dominate successively with increasing temperature. Using the nudged elastic band technique, the migration barriers of the major point defects in cementite are also determined and compared with available experiments in the literature.

  13. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place the pilot points, a score map is generated based on three sources of information: (i) the uncertainty in the facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then used, along with the available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
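
    A minimal Python sketch of the score-map construction described above (the normalization, equal weights and top-k selection are illustrative assumptions, not the authors' exact recipe):

        import numpy as np

        def normalize(a):
            """Scale an array to [0, 1] so heterogeneous score terms can be combined."""
            a = np.asarray(a, dtype=float)
            return (a - a.min()) / (a.max() - a.min() + 1e-12)

        def pilot_point_scores(facies_uncertainty, sensitivity, data_mismatch,
                               weights=(1.0, 1.0, 1.0)):
            """Combine (i) facies uncertainty, (ii) model-response sensitivity and
            (iii) local data mismatch into a single score per grid cell."""
            w1, w2, w3 = weights
            return (w1 * normalize(facies_uncertainty)
                    + w2 * normalize(sensitivity)
                    + w3 * normalize(data_mismatch))

        def select_pilot_points(scores, k):
            """Pick the k grid cells with the highest combined score."""
            return np.argsort(scores)[::-1][:k]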

  14. Teaching Structure-Property Relationships: Investigating Molecular Structure and Boiling Point

    Science.gov (United States)

    Murphy, Peter M.

    2007-01-01

    A concise, well-organized table of the boiling points of 392 organic compounds has facilitated inquiry-based instruction in multiple scientific principles. Many individual or group learning activities can be derived from the tabulated data of molecular structure and boiling point based on the instructor's education objectives and the students'…

  15. Operating principle of Soft Open Points for electrical distribution network operation

    International Nuclear Information System (INIS)

    Cao, Wanyu; Wu, Jianzhong; Jenkins, Nick; Wang, Chengshan; Green, Timothy

    2016-01-01

    Highlights: • Two control modes were developed for a back-to-back-VSC-based SOP. • The SOP operating principle was investigated under various network conditions. • The performance of the SOP using the two control modes was analyzed. - Abstract: Soft Open Points (SOPs) are power electronic devices installed in place of normally-open points in electrical power distribution networks. They are able to provide active power flow control, reactive power compensation and voltage regulation under normal network operating conditions, as well as fast fault isolation and supply restoration under abnormal conditions. Two control modes were developed for the operation of an SOP, using back-to-back voltage-source converters (VSCs). A power flow control mode with current control provides independent control of real and reactive power. A supply restoration mode with a voltage controller enables power supply to loads isolated by network faults. The operating principle of the back-to-back-VSC-based SOP was investigated under both normal and abnormal network operating conditions. Studies on a two-feeder medium-voltage distribution network showed the performance of the SOP under different network operating conditions: normal, during a fault, and post-fault supply restoration. During changes of network operating conditions, a mode switch method based on the phase-locked-loop controller was used to achieve transitions between the two control modes. Hard transitions caused by direct mode switching were found to be unfavourable, but seamless transitions were obtained by deploying a soft cold-load pickup and voltage synchronization process.

  16. The traveltime holographic principle

    Science.gov (United States)

    Huang, Yunsong; Schuster, Gerard T.

    2015-01-01

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the `traveltime holographic principle', by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.
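
    One compact statement of Fermat's interferometric principle for the interior times (written here in a generic form; the sign and max/min convention may differ from the paper, and it assumes that for each interior pair p, q some boundary source s emits a ray that passes through p and continues to q) is

        \tau_{pq} \;=\; \max_{s \in B} \left( \tau_{sq} - \tau_{sp} \right),

    which follows from the triangle inequality \tau_{sq} \le \tau_{sp} + \tau_{pq}, with equality at the stationary boundary source whose ray to q passes through p.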

  17. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    Science.gov (United States)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.

  18. Multiplicity: discussion points from the Statisticians in the Pharmaceutical Industry multiplicity expert group.

    Science.gov (United States)

    Phillips, Alan; Fletcher, Chrissie; Atkinson, Gary; Channon, Eddie; Douiri, Abdel; Jaki, Thomas; Maca, Jeff; Morgan, David; Roger, James Henry; Terrill, Paul

    2013-01-01

    In May 2012, the Committee for Medicinal Products for Human Use issued a concept paper on the need to review the points to consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought needs to be given during the planning phase to multiplicity adjustments for secondary endpoints, given that these are increasingly important in differentiating products in the marketplace. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed, as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible. Copyright © 2013 John Wiley & Sons, Ltd.
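
    As a small, self-contained illustration of one classical multiplicity adjustment of the kind debated at the meeting (the Holm step-down procedure; the p-values below are made up):

        def holm_rejections(p_values, alpha=0.05):
            """Holm step-down procedure: controls the family-wise error rate at alpha.

            Sort p-values ascending and compare the i-th smallest (0-based) against
            alpha / (m - i); stop at the first non-rejection.
            """
            m = len(p_values)
            order = sorted(range(m), key=lambda i: p_values[i])
            rejected = [False] * m
            for rank, idx in enumerate(order):
                if p_values[idx] <= alpha / (m - rank):
                    rejected[idx] = True
                else:
                    break              # all larger p-values are also retained
            return rejected

        # Example with made-up p-values for four endpoints:
        print(holm_rejections([0.001, 0.04, 0.012, 0.30]))   # [True, False, True, False]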

  19. Common Fixed Points of Generalized Rational Type Cocyclic Mappings in Multiplicative Metric Spaces

    Directory of Open Access Journals (Sweden)

    Mujahid Abbas

    2015-01-01

    Full Text Available. The aim of this paper is to present a fixed point result for mappings satisfying a generalized rational contractive condition in the setup of multiplicative metric spaces. As an application, we obtain a common fixed point of a pair of weakly compatible mappings. Some common fixed point results for pairs of rational contractive type mappings involved in the cocyclic representation of a nonempty subset of a multiplicative metric space are also obtained. Some examples are presented to support the results proved herein. Our results generalize and extend various results in the existing literature.

  20. [Introduction of hazard analysis and critical control points (HACCP) principles at the flight catering food production plant].

    Science.gov (United States)

    Popova, A Yu; Trukhina, G M; Mikailova, O M

    The article considers the quality control and safety system implemented in one of the largest flight catering food production plants serving airline passengers and flight crews. The control system was based on Hazard Analysis and Critical Control Points (HACCP) principles together with developed hygienic and anti-epidemic measures. The identification of hazard factors at each stage of the technological process is considered. Results of the analysis of monitoring data for six critical control points over a five-year period are presented. The quality control and safety system reduces the risk of food contamination during the acceptance, preparation and supply of in-flight meals. The efficiency of the implemented system was demonstrated, and further ways of harmonizing and implementing HACCP principles at the plant are determined.

  1. Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2014-01-01

    Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...
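
    A schematic Python sketch of the pruning step described above (the training-image scan, the data-event representation and the drop-the-farthest-node rule are illustrative assumptions, not the snesim source code):

        from collections import Counter

        def conditional_distribution(training_events, data_event):
            """Empirical facies distribution at the simulated node, given the
            conditioning data event, estimated by scanning training-image replicates.

            training_events: list of (replicate_event, center_facies), where
                replicate_event is a dict {offset: facies} around a training node.
            data_event: list of (distance, offset, facies) conditioning nodes.
            """
            counts = Counter(
                center for replicate, center in training_events
                if all(replicate.get(offset) == facies for _, offset, facies in data_event)
            )
            total = sum(counts.values())
            return {facies: n / total for facies, n in counts.items()} if total else None

        def pruned_conditional(training_events, data_event):
            """snesim-style pruning: if the full data event never occurs in the
            training image, drop the farthest conditioning node and try again."""
            event = sorted(data_event, key=lambda node: node[0])   # nearest first
            while event:
                dist = conditional_distribution(training_events, event)
                if dist is not None:
                    return dist
                event = event[:-1]      # prune the farthest node
            return None                 # in practice, fall back to marginal proportions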

  2. The positive impact of simultaneous implementation of the BD FocalPoint GS Imaging System and lean principles on the operation of gynecologic cytology.

    Science.gov (United States)

    Wong, Rebecca; Levi, Angelique W; Harigopal, Malini; Schofield, Kevin; Chhieng, David C

    2012-02-01

    Our cytology laboratory, like many others, is under pressure to improve quality and provide test results faster while decreasing costs. We sought to address these issues by introducing new technology and lean principles. To determine the combined impact of the FocalPoint Guided Screener (GS) Imaging System (BD Diagnostics-TriPath, Burlington, North Carolina) and lean manufacturing principles on the turnaround time (TAT) and productivity of the gynecologic cytology operation, we established a baseline measure of the TAT for Papanicolaou tests. We then compared that to the performance after implementing the FocalPoint GS Imaging System and lean principles. The latter included value-stream mapping, workflow modification, and a first in-first out policy. The mean (SD) TAT for Papanicolaou tests before and after the implementation of the FocalPoint GS Imaging System and lean principles was 4.38 (1.28) days and 3.20 (1.32) days, respectively. This represented a 27% improvement in the average TAT, which was statistically significant. Implementation of the FocalPoint GS Imaging System in conjunction with lean principles resulted in a significant decrease in the average TAT for Papanicolaou tests and a substantial increase in the productivity of cytotechnologists while maintaining the diagnostic quality of gynecologic cytology.

  3. The traveltime holographic principle

    KAUST Repository

    Huang, Y.; Schuster, Gerard T.

    2014-01-01

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.

  4. The traveltime holographic principle

    KAUST Repository

    Huang, Y.

    2014-11-06

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.

  5. Fermat principles in general relativity and the existence of light rays on Lorentzian manifolds

    International Nuclear Information System (INIS)

    Fortunato, D.; Masiello, A.

    1995-01-01

    In this paper we review some results on the existence and multiplicity of null geodesics (light rays) joining a point with a timelike curve on a Lorentzian manifold. Moreover, a Morse theory for such geodesics is presented. A variational principle, which is a variant of the classical Fermat principle in optics, allows one to characterize the null geodesics joining a point with a timelike curve as the critical points of a functional on an infinite dimensional manifold. Global variational methods are used to obtain the existence results and the Morse theory. Such results cover a class of Lorentzian manifolds including the Schwarzschild, Reissner-Nordström and Kerr space-times. (author)

  6. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    Science.gov (United States)

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.
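
    A very rough Python sketch of the filter-then-combine strategy the abstract describes (the ΔΔG cutoff, conservation filter, distance-based additivity check and all field names are assumptions for illustration, not FireProt's actual pipeline):

        def select_stabilizing_mutations(candidates, ddg_cutoff=-1.0,
                                         min_conservation=0.5, min_separation=8.0):
            """Pick predicted-stabilizing, evolutionarily tolerated single-point
            mutations and greedily combine those far enough apart to be treated
            as approximately additive.

            candidates: list of dicts like
                {"name": "A123V", "position": (x, y, z), "ddg": -1.8, "conservation": 0.7}
            ddg is the predicted stability change (negative = stabilizing, kcal/mol).
            """
            # 1) energy- and evolution-based filtering of single-point mutations
            kept = [m for m in candidates
                    if m["ddg"] <= ddg_cutoff and m["conservation"] >= min_conservation]
            kept.sort(key=lambda m: m["ddg"])          # most stabilizing first

            # 2) greedy combination into a multiple-point mutant, avoiding close pairs
            def far_apart(a, b):
                return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5 >= min_separation

            combined = []
            for m in kept:
                if all(far_apart(m["position"], c["position"]) for c in combined):
                    combined.append(m)
            return [m["name"] for m in combined]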

  7. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    Directory of Open Access Journals (Sweden)

    David Bednar

    2015-11-01

    Full Text Available. There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  8. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
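
    The end-versus-middle effect can be seen directly from the ordinary-least-squares slope variance, \mathrm{Var}(\hat{b}) = \sigma^2 / \sum_i (x_i - \bar{x})^2: points added at the ends of the scanned range increase the spread term more than points added near the middle. A small Python check with made-up scan values (not data from the letter):

        import numpy as np

        def slope_standard_error(x, sigma=1.0):
            """Standard error of the OLS slope for design points x and
            homoscedastic noise of standard deviation sigma."""
            x = np.asarray(x, dtype=float)
            return sigma / np.sqrt(((x - x.mean()) ** 2).sum())

        base = [0.0, 1.0]                    # a two-point scan
        ends = base + [0.0, 1.0]             # extra points at the ends
        middle = base + [0.4, 0.6]           # extra points near the middle

        for name, x in [("two-point", base), ("added at ends", ends), ("added in middle", middle)]:
            print(f"{name:16s} slope SE = {slope_standard_error(x):.3f}")   # 1.414, 1.000, 1.387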

  9. Mirrored pyramidal wells for simultaneous multiple vantage point microscopy.

    Science.gov (United States)

    Seale, K T; Reiserer, R S; Markov, D A; Ges, I A; Wright, C; Janetopoulos, C; Wikswo, J P

    2008-10-01

    We report a novel method for obtaining simultaneous images from multiple vantage points of a microscopic specimen using size-matched microscopic mirrors created from anisotropically etched silicon. The resulting pyramidal wells enable bright-field and fluorescent side-view images, and when combined with z-sectioning, provide additional information for 3D reconstructions of the specimen. We have demonstrated the 3D localization and tracking over time of the centrosome of a live Dictyostelium discoideum. The simultaneous acquisition of images from multiple perspectives also provides a five-fold increase in the theoretical collection efficiency of emitted photons, a property which may be useful for low-light imaging modalities such as bioluminescence, or low abundance surface-marker labelling.

  10. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  11. 75 FR 8239 - School Food Safety Program Based on Hazard Analysis and Critical Control Point Principles (HACCP...

    Science.gov (United States)

    2010-02-24

    ... (HACCP); Approval of Information Collection Request AGENCY: Food and Nutrition Service, USDA. ACTION... Safety Program Based on Hazard Analysis and Critical Control Point Principles (HACCP) was published on... must be based on the (HACCP) system established by the Secretary of Agriculture. The food safety...

  12. Departing from PowerPoint default mode: Applying Mayer's multimedia principles for enhanced learning of parasitology.

    Science.gov (United States)

    Nagmoti, Jyoti Mahantesh

    2017-01-01

    PowerPoint (PPT™) presentation has become an integral part of day-to-day teaching in medicine. Most often, PPT™ is used in its default mode, which is in fact known to cause boredom and ineffective learning. Research has shown improved short-term memory when multimedia principles are applied to the design and delivery of lectures. However, such evidence in medical education is scarce. Therefore, we attempted to evaluate the effect of multimedia principles on enhanced learning of parasitology. Second-year medical students received a series of lectures, half of which used traditionally designed PPT™ slides and the rest slides designed according to Mayer's multimedia principles. Students answered pre- and post-tests at the end of each lecture (test I) and an essay test after six months (test II), which assessed their short- and long-term knowledge retention, respectively. Students' feedback on the quality and content of the lectures was collected. A statistically significant difference was found between the post-test scores of the traditional and modified lectures (P = 0.019), indicating improved short-term memory after the modified lectures. Similarly, students scored better in test II on the content learnt through the modified lectures, indicating enhanced comprehension and improved long-term memory. Students favoured learning through multimedia-designed PPT™ and suggested their continued use. It is time to depart from the default PPT™ mode and adopt multimedia principles to enhance comprehension and improve short- and long-term knowledge retention. Further, medical educators may be trained and encouraged to apply multimedia principles when designing and delivering lectures.

  13. Focal Points Revisited: Team Reasoning, the Principle of Insufficient Reason and Cognitive Hierarchy Theory

    NARCIS (Netherlands)

    Bardsley, N.; Ule, A.

    It is well-established that people can coordinate their behaviour on focal points in games with multiple equilibria, but it is not firmly established how. Much coordination game data might be explained by team reasoning, a departure from individualistic choice theory. However, a less exotic

  14. Determination of shell correction energies at saddle point using pre-scission neutron multiplicities

    International Nuclear Information System (INIS)

    Golda, K.S.; Saxena, A.; Mittal, V.K.; Mahata, K.; Sugathan, P.; Jhingan, A.; Singh, V.; Sandal, R.; Goyal, S.; Gehlot, J.; Dhal, A.; Behera, B.R.; Bhowmik, R.K.; Kailas, S.

    2013-01-01

    Pre-scission neutron multiplicities have been measured for the {sup 12}C + {sup 194,198}Pt systems at matching excitation energies in the near-Coulomb-barrier region. A statistical model analysis with a modified fission barrier and level density prescription has been carried out to fit the measured pre-scission neutron multiplicities and the available evaporation residue and fission cross sections simultaneously, in order to constrain the statistical model parameters. Simultaneous fitting of the pre-scission neutron multiplicities and the cross-section data requires a shell correction at the saddle point.

  15. Multi-valued logic gates based on ballistic transport in quantum point contacts.

    Science.gov (United States)

    Seo, M; Hong, C; Lee, S-Y; Choi, H K; Kim, N; Chung, Y; Umansky, V; Mahalu, D

    2014-01-22

    Multi-valued logic gates, which can handle quaternary numbers as inputs, are developed by exploiting the ballistic transport properties of quantum point contacts in series. The principle of a logic gate that finds the minimum of two quaternary number inputs is demonstrated. The device is scalable to allow multiple inputs, which makes it possible to find the minimum of multiple inputs in a single gate operation. Also, the principle of a half-adder for quaternary number inputs is demonstrated. First, an adder that adds up two quaternary numbers and outputs the sum of inputs is demonstrated. Second, a device to express the sum of the adder into two quaternary digits [Carry (first digit) and Sum (second digit)] is demonstrated. All the logic gates presented in this paper can in principle be extended to allow decimal number inputs with high quality QPCs.
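
    As a software illustration of the logic that the device realizes physically (the actual gates are conductance states of quantum point contacts in series, not code), the sketch below computes the quaternary MIN of two inputs and the Carry/Sum digits of the quaternary half-adder described above.

    def q_min(a: int, b: int) -> int:
        """MIN gate for two quaternary (base-4) digits."""
        assert a in range(4) and b in range(4)
        return min(a, b)

    def q_half_adder(a: int, b: int) -> tuple[int, int]:
        """Half-adder for two quaternary digits: returns (Carry, Sum)."""
        assert a in range(4) and b in range(4)
        total = a + b
        return total // 4, total % 4

    print(q_min(2, 3))         # -> 2
    print(q_half_adder(3, 2))  # -> (1, 1), i.e. 3 + 2 = 11 in base 4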

  16. Multi-Valued Logic Gates based on Ballistic Transport in Quantum Point Contacts

    Science.gov (United States)

    Seo, M.; Hong, C.; Lee, S.-Y.; Choi, H. K.; Kim, N.; Chung, Y.; Umansky, V.; Mahalu, D.

    2014-01-01

    Multi-valued logic gates, which can handle quaternary numbers as inputs, are developed by exploiting the ballistic transport properties of quantum point contacts in series. The principle of a logic gate that finds the minimum of two quaternary number inputs is demonstrated. The device is scalable to allow multiple inputs, which makes it possible to find the minimum of multiple inputs in a single gate operation. Also, the principle of a half-adder for quaternary number inputs is demonstrated. First, an adder that adds up two quaternary numbers and outputs the sum of inputs is demonstrated. Second, a device to express the sum of the adder into two quaternary digits [Carry (first digit) and Sum (second digit)] is demonstrated. All the logic gates presented in this paper can in principle be extended to allow decimal number inputs with high quality QPCs.

  17. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  18. [Powdered infant formulae preparation guide for hospitals based on Hazard Analysis and Critical Control Points (HACCP) principles].

    Science.gov (United States)

    Vargas-Leguás, H; Rodríguez Garrido, V; Lorite Cuenca, R; Pérez-Portabella, C; Redecillas Ferreiro, S; Campins Martí, M

    2009-06-01

    This guide for the preparation of powdered infant formulae in hospital environments is a collaborative work between several hospital services and is based on national and European regulations, international experts meetings and the recommendations of scientific societies. This guide also uses the Hazard Analysis and Critical Control Point principles proposed by Codex Alimentarius and emphasises effective verifying measures, microbiological controls of the process and the corrective actions when monitoring indicates that a critical control point is not under control. It is a dynamic guide and specifies the evaluation procedures that allow it to be constantly adapted.

  19. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori ...

  20. First-Principles Prediction of Spin-Polarized Multiple Dirac Rings in Manganese Fluoride

    Science.gov (United States)

    Jiao, Yalong; Ma, Fengxian; Zhang, Chunmei; Bell, John; Sanvito, Stefano; Du, Aijun

    2017-07-01

    Spin-polarized materials with Dirac features have sparked great scientific interest due to their potential applications in spintronics. But such a type of structure is very rare and none has been fabricated. Here, we investigate the already experimentally synthesized manganese fluoride (MnF3 ) as a novel spin-polarized Dirac material by using first-principles calculations. MnF3 exhibits multiple Dirac cones in one spin orientation, while it behaves like a large gap semiconductor in the other spin channel. The estimated Fermi velocity for each cone is of the same order of magnitude as that in graphene. The 3D band structure further reveals that MnF3 possesses rings of Dirac nodes in the Brillouin zone. Such a spin-polarized multiple Dirac ring feature is reported for the first time in an experimentally realized material. Moreover, similar band dispersions can be also found in other transition metal fluorides (e.g., CoF3 , CrF3 , and FeF3 ). Our results highlight a new interesting single-spin Dirac material with promising applications in spintronics and information technologies.

  1. First-Principles Prediction of Spin-Polarized Multiple Dirac Rings in Manganese Fluoride.

    Science.gov (United States)

    Jiao, Yalong; Ma, Fengxian; Zhang, Chunmei; Bell, John; Sanvito, Stefano; Du, Aijun

    2017-07-07

    Spin-polarized materials with Dirac features have sparked great scientific interest due to their potential applications in spintronics. But such a type of structure is very rare and none has been fabricated. Here, we investigate the already experimentally synthesized manganese fluoride (MnF_{3}) as a novel spin-polarized Dirac material by using first-principles calculations. MnF_{3} exhibits multiple Dirac cones in one spin orientation, while it behaves like a large gap semiconductor in the other spin channel. The estimated Fermi velocity for each cone is of the same order of magnitude as that in graphene. The 3D band structure further reveals that MnF_{3} possesses rings of Dirac nodes in the Brillouin zone. Such a spin-polarized multiple Dirac ring feature is reported for the first time in an experimentally realized material. Moreover, similar band dispersions can be also found in other transition metal fluorides (e.g., CoF_{3}, CrF_{3}, and FeF_{3}). Our results highlight a new interesting single-spin Dirac material with promising applications in spintronics and information technologies.

  2. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem

    2017-01-01

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating

  3. Development of a Whole-Body Haptic Sensor with Multiple Supporting Points and Its Application to a Manipulator

    Science.gov (United States)

    Hanyu, Ryosuke; Tsuji, Toshiaki

    This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single supporting point structure has previously been developed by the present authors. However, the system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points was proposed and its performance was verified. In this paper, the basic concept of the mechanism is first introduced. Next, an evaluation of the proposed method, performed by conducting some experiments, is presented.

  4. MULTIPLE ACCESS POINTS WITHIN THE ONLINE CLASSROOM: WHERE STUDENTS LOOK FOR INFORMATION

    Directory of Open Access Journals (Sweden)

    John STEELE

    2017-01-01

    The purpose of this study is to examine the impact of information placement within the confines of the online classroom architecture. Also reviewed was the impact of other variables such as course design, teaching presence and student patterns in looking for information. The sample population included students from a major online university in their first year course sequence. Students were tasked with completing a survey at the end of the course, indicating their preference for accessing information within the online classroom. The qualitative data indicated that student preference is to receive information from multiple access points and sources within the online classroom architecture. Students also expressed a desire to have information delivered through the usage of technology such as email and text messaging. In addition to receiving information from multiple sources, the qualitative data indicated students were satisfied overall, with the current ways in which they received and accessed information within the online classroom setting. Major findings suggest that instructors teaching within the online classroom should have multiple data access points within the classroom architecture. Furthermore, instructors should use a variety of communication venues to enhance the ability for students to access and receive information pertinent to the course.

  5. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    The traditional methods of diagnosing dam service status are only suitable for a single measuring point. These methods also reflect the local status of dams without merging multisource data effectively, which makes them unsuitable for diagnosing the overall service status. This study proposes a new method involving multiple points to diagnose dam service status based on a joint distribution function. The function, which incorporates the monitoring data of multiple points, can be established with the t-copula function. The possibility, which is an important fusion value for different measuring combinations, can therefore be calculated, and the corresponding diagnosing criterion is established with typical small probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early warning method for engineering safety.
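
    A hedged sketch of the joint small-probability idea follows. The paper builds the joint distribution with a t-copula; here a multivariate Student-t model of standardized residuals at three measuring points stands in as a simplification, with all parameters (correlations, degrees of freedom, warning threshold) chosen purely for illustration, and the joint exceedance probability is estimated by Monte Carlo sampling.

    import numpy as np
    from scipy.stats import multivariate_t

    # Assumed (illustrative) joint model of standardized residuals at 3 measuring points.
    corr = np.array([[1.0, 0.7, 0.5],
                     [0.7, 1.0, 0.6],
                     [0.5, 0.6, 1.0]])
    model = multivariate_t(loc=np.zeros(3), shape=corr, df=5)

    observed = np.array([2.5, 2.1, 2.8])      # today's standardized residuals (synthetic)

    # Monte Carlo estimate of the probability that all points are at least this extreme.
    samples = model.rvs(size=200_000, random_state=0)
    p_joint = np.mean(np.all(samples >= observed, axis=1))
    print(f"joint exceedance probability ~ {p_joint:.4f}")
    if p_joint < 0.01:                        # typical small-probability warning criterion (assumed)
        print("abnormal service status flagged")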

  6. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    Science.gov (United States)

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  7. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Science.gov (United States)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  8. Architectural Principles for Orchestration of Cross-Organizational Service Delivery: Case Studies from the Netherlands

    Science.gov (United States)

    van Veenstra, Anne Fleur; Janssen, Marijn

    One of the main challenges for e-government is to create coherent services for citizens and businesses. Realizing Integrated Service Delivery (ISD) requires government agencies to collaborate across their organizational boundaries. The coordination of processes across multiple organizations to realize ISD is called orchestration. One way of achieving orchestration is to formalize processes using architecture. In this chapter we identify architectural principles for orchestration by looking at three case studies of cross-organizational service delivery chain formation in the Netherlands. In total, six generic principles were formulated and subsequently validated in two workshops with experts. These principles are: (i) build an intelligent front office, (ii) give processes a clear starting point and end, (iii) build a central workflow application keeping track of the process, (iv) differentiate between simple and complex processes, (v) ensure that the decision-making responsibility and the overview of the process are not performed by the same process role, and (vi) create a central point where risk profiles are maintained. Further research should focus on how organizations can adapt these principles to their own situation.

  9. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    Science.gov (United States)

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the process of downscaling, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiment shows that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.

  10. The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE

    Directory of Open Access Journals (Sweden)

    Man Kam Kwong

    2006-06-01

    Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
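
    A minimal sketch of the shooting idea for a second-order two-point problem is given below. The boundary value problem (the classical Bratu problem y'' + exp(y) = 0, y(0) = y(1) = 0) is chosen only as an illustration and is not taken from the paper; the initial slope s = y'(0) is scanned, and each sign change of F(s) = y(1; s) is refined to a root, with each root corresponding to a distinct solution of the problem.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(t, y):
        # y[0] = y, y[1] = y'; Bratu equation y'' = -exp(y)
        return [y[1], -np.exp(y[0])]

    def shoot(s):
        """Integrate the IVP with y(0) = 0, y'(0) = s and return y(1)."""
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], rtol=1e-9, atol=1e-11)
        return sol.y[0, -1]

    slopes = np.linspace(0.0, 20.0, 81)          # scan the initial slope
    values = [shoot(s) for s in slopes]
    roots = [brentq(shoot, a, b)                 # refine every bracketed sign change
             for a, b, fa, fb in zip(slopes[:-1], slopes[1:], values[:-1], values[1:])
             if fa * fb < 0]
    print("initial slopes of the two solutions:", np.round(roots, 3))
    # expected approximately [0.55, 10.85] for this particular illustrative problem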

  11. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    Science.gov (United States)

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
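
    For orientation, the sketch below works the standard Boltzmann-Gibbs-Shannon case that the paper generalizes: Shannon entropy is maximized numerically under a normalization and a mean-energy constraint, and the result is compared with the analytic Gibbs form p_i proportional to exp(-beta*E_i). The five-level system and the prescribed mean are illustrative assumptions; none of the paper's (c,d)-entropy machinery is reproduced here.

    import numpy as np
    from scipy.optimize import minimize, brentq

    E = np.arange(5, dtype=float)        # energies of a 5-state system (illustrative)
    E_mean = 1.2                         # prescribed average energy (constraint)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return np.sum(p * np.log(p))     # minus the Shannon entropy

    constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
                   {"type": "eq", "fun": lambda p: np.dot(p, E) - E_mean}]
    res = minimize(neg_entropy, x0=np.full(5, 0.2), bounds=[(0.0, 1.0)] * 5,
                   constraints=constraints, method="SLSQP")

    # Analytic check: Gibbs weights with beta chosen to match the same mean energy.
    def gibbs_mean(beta):
        w = np.exp(-beta * E)
        return np.dot(w, E) / w.sum()
    beta = brentq(lambda b: gibbs_mean(b) - E_mean, 1e-6, 10.0)
    p_gibbs = np.exp(-beta * E) / np.exp(-beta * E).sum()

    print("numerical MEP :", np.round(res.x, 4))
    print("analytic Gibbs:", np.round(p_gibbs, 4))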

  12. Multiplicative point process as a model of trading activity

    Science.gov (United States)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting a power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ~ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
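
    The sketch below is an illustrative toy, not the authors' exact parameterization: the interevent time performs a clipped multiplicative (geometric) random walk, events are accumulated, the counts are binned in roughly unit-time intervals, and the periodogram of those counts can then be inspected for the low-frequency power-law rise discussed above. All numerical values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n_events = 50_000
    tau = np.empty(n_events)
    tau[0] = 1.0
    for k in range(n_events - 1):
        # multiplicative (geometric) random walk of the interevent time, kept in a finite range
        tau[k + 1] = np.clip(tau[k] * np.exp(0.01 * rng.standard_normal()), 0.1, 10.0)

    events = np.cumsum(tau)                                   # event (trade) times
    counts, _ = np.histogram(events, bins=int(events[-1]))    # events per (roughly) unit time
    spec = np.abs(np.fft.rfft(counts - counts.mean()))**2 / counts.size
    freq = np.fft.rfftfreq(counts.size, d=1.0)
    # plot spec against freq on log-log axes to examine the low-frequency scaling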

  13. A min-max variational principle

    International Nuclear Information System (INIS)

    Georgiev, P.G.

    1995-11-01

    In this paper a variational principle for min-max problems is proved that is in the same spirit as the Deville-Godefroy-Zizler variational principle for minimization problems. A localization theorem is presented in which the min-max points of the perturbed function are localized with respect to a given ε-min-max point. 3 refs

  14. Multiple contacts with diversion at the point of arrest.

    Science.gov (United States)

    Riordan, Sharon; Wix, Stuart; Haque, M Sayeed; Humphreys, Martin

    2003-04-01

    A diversion at the point of arrest (DAPA) scheme was set up in five police stations in South Birmingham in 1992. In a study of all referrals made over a four-year period, a subgroup of multiple-contact individuals was identified. During that time four hundred and ninety-two contacts were recorded in total, of which 130 were made by 58 individuals. The latter group was generally no different from the single-contact group but did tend to be younger. This research highlights the need for a re-evaluation of service provision and associated education of police officers and relevant mental health care professionals.

  15. NEWTONIAN IMPERIALIST COMPETITVE APPROACH TO OPTIMIZING OBSERVATION OF MULTIPLE TARGET POINTS IN MULTISENSOR SURVEILLANCE SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Afghan-Toloee

    2013-09-01

    The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been studied extensively in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved Imperialist Competitive Algorithm (ICA), is used to reduce the time needed to find an adequate solution compared with other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, allowing x, y, and z computations in the observation algorithm. A model structure for the multi-sensor placement problem is proposed: the problem is formulated as an optimization problem with the objective of minimizing the cost while covering all multiple target points subject to a given probability-of-observation tolerance.

  16. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    International Nuclear Information System (INIS)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-01-01

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  17. Point defects in hexagonal germanium carbide monolayer: A first-principles calculation

    International Nuclear Information System (INIS)

    Ersan, Fatih; Gökçe, Aytaç Gürhan; Aktürk, Ethem

    2016-01-01

    Highlights: • Semiconductor GeC turns into metal by introducing a carbon vacancy. • Semiconductor GeC becomes half-metal by a single Ge vacancy. • Band gap value of GeC system can be tuned in the range of 0.308–1.738 eV by antisite or Stone–Wales defects. - Abstract: On the basis of first-principles plane-wave calculations, we investigated the electronic and magnetic properties of various point defects including single Ge and C vacancies, Ge + C divacancy, Ge↔C antisites and the Stone–Wales (SW) defects in a GeC monolayer. We found that various periodic vacancy defects in GeC single layer give rise to crucial effects on the electronic and magnetic properties. The band gaps of GeC monolayer vary significantly from 0.308 eV to 1.738 eV due to the presence of antisites and Stone–Wales defects. While nonmagnetic ground state of semiconducting GeC turns into metal by introducing a carbon vacancy, it becomes half-metal by a single Ge vacancy with high magnetization (4 μ_B) value per supercell. All the vacancy types have zero net magnetic moments, except single Ge vacancy.

  18. Point defects in hexagonal germanium carbide monolayer: A first-principles calculation

    Energy Technology Data Exchange (ETDEWEB)

    Ersan, Fatih [Department of Physics, Adnan Menderes University, 09100 Aydın (Turkey); Gökçe, Aytaç Gürhan [Department of Physics, Adnan Menderes University, 09100 Aydın (Turkey); Department of Physics, Dokuz Eylül University, 35160 İzmir (Turkey); Aktürk, Ethem, E-mail: ethem.akturk@adu.edu.tr [Department of Physics, Adnan Menderes University, 09100 Aydın (Turkey); Nanotechnology Application and Research Center, Adnan Menderes University, 09100 Aydın (Turkey)

    2016-12-15

    Highlights: • Semiconductor GeC turns into metal by introducing a carbon vacancy. • Semiconductor GeC becomes half-metal by a single Ge vacancy. • Band gap value of GeC system can be tuned in the range of 0.308–1.738 eV by antisite or Stone–Wales defects. - Abstract: On the basis of first-principles plane-wave calculations, we investigated the electronic and magnetic properties of various point defects including single Ge and C vacancies, Ge + C divacancy, Ge↔C antisites and the Stone–Wales (SW) defects in a GeC monolayer. We found that various periodic vacancy defects in GeC single layer give rise to crucial effects on the electronic and magnetic properties. The band gaps of GeC monolayer vary significantly from 0.308 eV to 1.738 eV due to the presence of antisites and Stone–Wales defects. While nonmagnetic ground state of semiconducting GeC turns into metal by introducing a carbon vacancy, it becomes half-metal by a single Ge vacancy with high magnetization (4 μ{sub B}) value per supercell. All the vacancy types have zero net magnetic moments, except single Ge vacancy.

  19. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    DEFF Research Database (Denmark)

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    ... of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities (TPROGS) of alternating geological facies. The second method, multiple-point statistics, uses training images to estimate the conditional probability of sand-lenses at a certain location. Both methods respect field observations such as local stratigraphy; however, only the multiple-point statistics can ...

  20. Methods of fast, multiple-point in vivo T1 determination

    International Nuclear Information System (INIS)

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two sub-sequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
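
    The exponential-fit step of the first method can be sketched as follows. The standard inversion-recovery signal model S(TI) = A(1 - 2 exp(-TI/T1)) is assumed here for illustration (the paper's exact signal model and sampling scheme may differ), and T1 is recovered from noisy multi-point data with a nonlinear least-squares fit; all numbers are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def ir_signal(ti, a, t1):
        # assumed inversion-recovery model: S(TI) = A * (1 - 2*exp(-TI/T1))
        return a * (1.0 - 2.0 * np.exp(-ti / t1))

    rng = np.random.default_rng(3)
    t1_true, a_true = 0.85, 100.0                            # seconds, arbitrary units
    ti = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])      # inversion times (s)
    signal = ir_signal(ti, a_true, t1_true) + rng.normal(0.0, 1.0, ti.size)

    (a_fit, t1_fit), _ = curve_fit(ir_signal, ti, signal, p0=[90.0, 0.5])
    print(f"fitted T1 = {t1_fit:.3f} s, i.e. 1/T1 = {1.0 / t1_fit:.3f} s^-1")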

  1. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    Science.gov (United States)

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
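
    The variance test described above can be illustrated with a few lines of NumPy (the paper performs this step in parallel on a GPU; the array shapes, synthetic data and threshold below are assumptions for illustration only): each reconstructed pixel at a given depth is backed by one sample per elemental image, and low variance across those samples marks a focus point while high variance marks an off-focus point.

    import numpy as np

    def classify_focus(samples, threshold):
        """samples: (n_elemental_images, H, W) intensities gathered for one depth plane."""
        variance = samples.var(axis=0)        # per-pixel variance across elemental images
        return variance < threshold           # True where the pixel is a focus point

    rng = np.random.default_rng(0)
    n_img, H, W = 25, 64, 64
    samples = rng.uniform(0.0, 1.0, (n_img, H, W))            # off-focus: uncorrelated samples
    samples[:, 16:48, 16:48] = 0.6 + 0.02 * rng.standard_normal((n_img, 32, 32))  # in-focus patch
    focus_mask = classify_focus(samples, threshold=0.01)
    print("fraction of focus pixels:", float(focus_mask.mean()))   # about 0.25 for this toy scene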

  2. Robust set-point regulation for ecological models with multiple management goals.

    Science.gov (United States)

    Guiver, Chris; Mueller, Markus; Hodgson, Dave; Townley, Stuart

    2016-05-01

    Population managers will often have to deal with problems of meeting multiple goals, for example, keeping at specific levels both the total population and population abundances in given stage-classes of a stratified population. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal.
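
    A toy version of the PI set-point idea is sketched below for a two-stage population with a single management input (e.g. stocking of juveniles). The projection matrix, gains, saturation limit and target are illustrative assumptions, and the anti-windup and multi-objective elements of the paper are omitted; the point is only to show integral action driving total abundance to the set-point despite the positivity and saturation constraints on the input.

    import numpy as np

    A = np.array([[0.1, 0.9],        # assumed stage-structured projection matrix (declining population)
                  [0.4, 0.5]])
    b = np.array([1.0, 0.0])         # management input adds juveniles only
    target = 100.0                   # desired total abundance (set-point)
    kp, ki = 0.2, 0.05               # proportional and integral gains (assumed)
    u_max = 30.0                     # saturation limit on the management input

    x = np.array([10.0, 5.0])        # initial stage abundances
    integral = 0.0
    for t in range(61):
        error = target - x.sum()
        integral += error
        u = np.clip(kp * error + ki * integral, 0.0, u_max)   # saturated PI control
        x = A @ x + b * u
        if t % 10 == 0:
            print(f"t={t:2d}  total={x.sum():7.2f}  u={u:5.2f}")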

  3. On the Bourbaki-Witt principle in toposes

    Science.gov (United States)

    Bauer, Andrej; Lumsdaine, Peter Lefanu

    2013-07-01

    The Bourbaki-Witt principle states that any progressive map on a chain-complete poset has a fixed point above every point. It is provable classically, but not intuitionistically. We study this and related principles in an intuitionistic setting. Among other things, we show that Bourbaki-Witt fails exactly when the trichotomous ordinals form a set, but does not imply that fixed points can always be found by transfinite iteration. Meanwhile, on the side of models, we see that the principle fails in realisability toposes, and does not hold in the free topos, but does hold in all cocomplete toposes.

  4. Gyro precession and Mach's principle

    International Nuclear Information System (INIS)

    Eby, P.

    1979-01-01

    The precession of a gyroscope is calculated in a nonrelativistic theory due to Barbour which satisfies Mach's principle. It is shown that the theory predicts both the geodetic and motional precession of general relativity to within factors of order 1. The significance of the gyro experiment is discussed from the point of view of metric theories of gravity and this is contrasted with its significance from the point of view of Mach's principle. (author)

  5. Derivation of the blackbody radiation spectrum from the equivalence principle in classical physics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1984-01-01

    A derivation of Planck's spectrum including zero-point radiation is given within classical physics from recent results involving the thermal effects of acceleration through classical electromagnetic zero-point radiation. A harmonic electric-dipole oscillator undergoing a uniform acceleration a through classical electromagnetic zero-point radiation responds as would the same oscillator in an inertial frame when not in zero-point radiation but in a different spectrum of random classical radiation. Since the equivalence principle tells us that the oscillator supported in a gravitational field g = -a will respond in the same way, we see that in a gravitational field we can construct a perpetual-motion machine based on this different spectrum unless the different spectrum corresponds to that of thermal equilibrium at a finite temperature. Therefore, assuming the absence of perpetual-motion machines of the first kind in a gravitational field, we conclude that the response of an oscillator accelerating through classical zero-point radiation must be that of a thermal system. This then determines the blackbody radiation spectrum in an inertial frame which turns out to be exactly Planck's spectrum including zero-point radiation

  6. Registration of Aerial Optical Images with LiDAR Data Using the Closest Point Principle and Collinearity Equations.

    Science.gov (United States)

    Huang, Rongyong; Zheng, Shunyi; Hu, Kun

    2018-06-01

    Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the LiDAR data surface. In addition to the satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results also show that the unit-weighted root mean square (RMS) of the image points is able to reach a sub-pixel level (0.45 to 0.62 pixel), and that the actual horizontal and vertical accuracy can be greatly improved to a high level of 1/4-1/2 (0.17-0.27 m) and 1/8-1/4 (0.10-0.15 m) of the average LiDAR point distance, respectively. Finally, the method proves to be more accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.
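
    A tiny illustration of the closest-point idea behind this registration is sketched below. Only a rigid 3D translation is estimated (the paper adjusts image orientation parameters through the collinearity equations, which is omitted here), by minimizing nearest-neighbour distances from synthetic photogrammetric points to a synthetic LiDAR cloud with a KD-tree and a general-purpose optimizer; all data and settings are chosen purely for illustration.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    lidar = rng.uniform(0.0, 100.0, (5000, 3))                 # synthetic LiDAR point cloud
    true_shift = np.array([1.8, -0.9, 0.4])                    # misalignment to be recovered
    photo = lidar[rng.choice(5000, 400, replace=False)] + true_shift   # photogrammetric points

    tree = cKDTree(lidar)

    def cost(shift):
        d, _ = tree.query(photo - shift)                       # distance to the closest LiDAR point
        return np.mean(d**2)

    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    print("estimated misalignment:", np.round(res.x, 2))       # should approach true_shift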

  7. Performance analysis of commercial multiple-input-multiple-output access point in distributed antenna system.

    Science.gov (United States)

    Fan, Yuting; Aighobahi, Anthony E; Gomes, Nathan J; Xu, Kun; Li, Jianqiang

    2015-03-23

    In this paper, we experimentally investigate the throughput of IEEE 802.11n 2x2 multiple-input-multiple-output (MIMO) signals in a radio-over-fiber-based distributed antenna system (DAS) with different fiber lengths and power imbalance. Both a MIMO-supported access point (AP) and a spatial-diversity-supported AP were separately employed in the experiments. Throughput measurements were carried out with wireless users at different locations in a typical office environment. For the different fiber length effect, the results indicate that MIMO signals can maintain high throughput when the fiber length difference between the two remote antenna units (RAUs) is under 100 m and falls quickly when the length difference is greater. For the spatial diversity signals, high throughput can be maintained even when the difference is 150 m. On the other hand, the separation of the MIMO antennas allows additional freedom in placing the antennas in strategic locations for overall improved system performance, although it may also lead to received power imbalance problems. The results show that the throughput performance drops in specific positions when the received power imbalance is above around 13 dB. Hence, there is a trade-off between the extent of the wireless coverage for moderate bit-rates and the area over which peak bit-rates can be achieved.

  8. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles, especially if, as maximum and minimum principles, these can provide bounds and hence estimates of accuracy. The non-symmetric (and hence stationary rather than extremum principles) are seen however to play a significant role in optimisation theory. (Orig./A.B.)

  9. Photonic crystals possessing multiple Weyl points and the experimental observation of robust surface states

    Science.gov (United States)

    Chen, Wen-Jie; Xiao, Meng; Chan, C. T.

    2016-01-01

    Weyl points, as monopoles of Berry curvature in momentum space, have captured much attention recently in various branches of physics. Realizing topological materials that exhibit such nodal points is challenging and indeed, Weyl points have been found experimentally in transition metal arsenide and phosphide and gyroid photonic crystal whose structure is complex. If realizing even the simplest type of single Weyl nodes with a topological charge of 1 is difficult, then making a real crystal carrying higher topological charges may seem more challenging. Here we design, and fabricate using planar fabrication technology, a photonic crystal possessing single Weyl points (including type-II nodes) and multiple Weyl points with topological charges of 2 and 3. We characterize this photonic crystal and find nontrivial 2D bulk band gaps for a fixed kz and the associated surface modes. The robustness of these surface states against kz-preserving scattering is experimentally observed for the first time. PMID:27703140

  10. Molecular dynamics of polarizable point dipole models for molten NaI. Comparison with first principles simulations

    Directory of Open Access Journals (Sweden)

    Trullàs J.

    2011-05-01

    Molecular dynamics simulations of molten NaI at 995 K have been carried out using polarizable ion models based on rigid ion pair potentials to which the anion induced dipole polarization is added. The polarization is added in such a way that point dipoles are induced on the anions by both local electric field and deformation short-range damping interactions that oppose the electrically induced dipole moments. The structure and self-diffusion results are compared with those obtained by Galamba and Costa Cabral using first principles Hellmann-Feynman molecular dynamics simulations and using classical molecular dynamics of a shell model which allows only the iodide polarization.

  11. Multiple-point statistical prediction on fracture networks at Yucca Mountain

    International Nuclear Information System (INIS)

    Liu, X.Y; Zhang, C.Y.; Liu, Q.S.; Birkholzer, J.T.

    2009-01-01

    In many underground nuclear waste repository systems, such as at Yucca Mountain, the water flow rate and the amount of water seepage into the waste emplacement drifts are mainly determined by the hydrological properties of the fracture network in the surrounding rock mass. A natural fracture network system is not easy to describe, especially with respect to its connectivity, which is critically important for simulating the water flow field. In this paper, we introduce a new method for fracture network description and prediction, termed multiple-point statistics (MPS). The MPS method records multiple-point statistics concerning the connectivity patterns of a fracture network from a known fracture map, and reproduces multiple-scale training fracture patterns in a stochastic manner, implicitly and directly. It is applied to fracture data to study flow field behavior at the Yucca Mountain waste repository system. First, the MPS method is used to create a fracture network from an original fracture training image from the Yucca Mountain dataset. After adopting a harmonic and arithmetic average method to upscale the permeability to a coarse grid, a THM (thermal-hydrological-mechanical) simulation is carried out to study near-field water flow around the waste emplacement drifts. Our study shows that the connectivity or patterns of fracture networks can be grasped and reconstructed by MPS methods. In theory, this will lead to better prediction of fracture system characteristics and flow behavior. Meanwhile, we can obtain the variance of the flow field, which gives us a way to quantify model uncertainty even in complicated coupled THM simulations. This indicates that MPS can potentially characterize and reconstruct natural fracture networks in a fractured rock mass, with the advantage of quantifying the connectivity of the fracture system and its simulation uncertainty simultaneously.

  12. Effect of point defects on the electronic density states of SnC nanosheets: First-principles calculations

    Directory of Open Access Journals (Sweden)

    Soleyman Majidi

    In this work, we investigated the electronic and structural properties of various defects, including single Sn and C vacancies, the double vacancy of the Sn and C atoms, anti-sites, position exchange and the Stone–Wales (SW) defect, in SnC nanosheets by using density-functional theory (DFT). We found that various vacancy defects in the SnC monolayer can change the electronic and structural properties. Our results show that SnC is an indirect band gap compound, with a band gap of 2.10 eV. The system turns into a metal for both the single Sn and the single C vacancy structures. However, for the double vacancy containing Sn and C atoms, the structure remains a semiconductor with a direct band gap of 0.37 eV at the Γ point. We also found that for anti-site defects the structure remains a semiconductor, and for the exchange defect the structure becomes an indirect semiconductor with a K–Γ band gap of 0.74 eV. Finally, the structure with the SW defect remains a semiconductor with a direct band gap of 0.54 eV at the K point. Keywords: SnC nanosheets, Density-functional theory, First-principles calculations, Electronic density of states, Band gap

  13. Channel capacity of TDD-OFDM-MIMO for multiple access points in a wireless single-frequency-network

    DEFF Research Database (Denmark)

    Takatori, Y.; Fitzek, Frank; Tsunekawa, K.

    2005-01-01

    The multiple-input-multiple-output (MIMO) technique is the most attractive candidate to improve the spectrum efficiency in the next generation wireless communication systems. However, the efficiency of MIMO techniques reduces in the line of sight (LOS) environments. In this paper, we propose a new MIMO data transmission scheme, which combines Single-Frequency-Network (SFN) with TDD-OFDM-MIMO applied for wireless LAN networks. In our proposal, we advocate to use SFN for multiple access points (MAP) MIMO data transmission. The goal of this approach is to achieve very high channel capacity in both ...

  14. Point spread function due to multiple scattering of light in the atmosphere

    International Nuclear Information System (INIS)

    Pękala, J.; Wilczyński, H.

    2013-01-01

    The atmospheric scattering of light has a significant influence on the results of optical observations of air showers. It causes attenuation of direct light from the shower, but also contributes a delayed signal to the observed light. The scattering of light therefore should be accounted for, both in simulations of air shower detection and reconstruction of observed events. In this work a Monte Carlo simulation of multiple scattering of light has been used to determine the contribution of the scattered light in observations of a point source of light. Results of the simulations and a parameterization of the angular distribution of the scattered light contribution to the observed signal (the point spread function) are presented. -- Author-Highlights: •Analysis of atmospheric scattering of light from an isotropic point source. •Different geometries and atmospheric conditions were investigated. •A parameterization of scattered light distribution has been developed. •The parameterization allows one to easily account for the light scattering in air. •The results will be useful in analyses of observations of extensive air shower

  15. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    Science.gov (United States)

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.

  16. Variogram based and Multiple - Point Statistical simulation of shallow aquifer structures in the Upper Salzach valley, Austria

    Science.gov (United States)

    Jandrisevits, Carmen; Marschallinger, Robert

    2014-05-01

    Quaternary sediments in overdeepened alpine valleys and basins in the Eastern Alps bear substantial groundwater resources. The associated aquifer systems are generally geometrically complex with highly variable hydraulic properties. 3D geological models provide predictions of both geometry and properties of the subsurface required for subsequent modelling of groundwater flow and transport. In hydrology, geostatistical Kriging and Kriging based conditional simulations are widely used to predict the spatial distribution of hydrofacies. In the course of investigating the shallow aquifer structures in the Zell basin in the Upper Salzach valley (Salzburg, Austria), a benchmark of available geostatistical modelling and simulation methods was performed: traditional variogram based geostatistical methods, i.e. Indicator Kriging, Sequential Indicator Simulation and Sequential Indicator Co-Simulation, were used as well as Multiple Point Statistics. The ~6 km² investigation area is sampled by 56 drillings with depths of 5 to 50 m; in addition, there are 2 geophysical sections with lengths of 2 km and depths of 50 m. Due to clustered drilling sites, Indicator Kriging models failed to consistently model the spatial variability of hydrofacies. Using classical variogram based geostatistical simulation (SIS), equally probable realizations were generated, with differences among the realizations providing an uncertainty measure. The yielded models are unstructured from a geological point of view - they do not portray the shapes and lateral extents of the associated sedimentary units. Since variograms consider only two-point spatial correlations, they are unable to capture the spatial variability of complex geological structures. The Multiple Point Statistics approach overcomes these limitations of two-point statistics as it uses a training image instead of variograms. The 3D training image can be seen as a reference facies model where geological knowledge about depositional ...

  17. A Comparison of Combustion Dynamics for Multiple 7-Point Lean Direct Injection Combustor Configurations

    Science.gov (United States)

    Tacina, K. M.; Hicks, Y. R.

    2017-01-01

    The combustion dynamics of multiple 7-point lean direct injection (LDI) combustor configurations are compared. LDI is a fuel-lean combustor concept for aero gas turbine engines in which multiple small fuel-air mixers replace one traditionally-sized fuel-air mixer. This 7-point LDI configuration has a circular cross section, with a center (pilot) fuel-air mixer surrounded by six outer (main) fuel-air mixers. Each fuel-air mixer consists of an axial air swirler followed by a converging-diverging venturi. A simplex fuel injector is inserted through the center of the air swirler, with the fuel injector tip located near the venturi throat. All 7 fuel-air mixers are identical except for the swirler blade angle, which varies with the configuration. Testing was done in a 5-atm flame tube with inlet air temperatures from 600 to 800 F and equivalence ratios from 0.4 to 0.7. Combustion dynamics were measured using a cooled PCB pressure transducer flush-mounted in the wall of the combustor test section.

  18. First principles calculations of interstitial and lamellar rhenium nitrides

    Energy Technology Data Exchange (ETDEWEB)

    Soto, G., E-mail: gerardo@cnyn.unam.mx [Universidad Nacional Autonoma de Mexico, Centro de Nanociencias y Nanotecnologia, Km 107 Carretera Tijuana-Ensenada, Ensenada Baja California (Mexico); Tiznado, H.; Reyes, A.; Cruz, W. de la [Universidad Nacional Autonoma de Mexico, Centro de Nanociencias y Nanotecnologia, Km 107 Carretera Tijuana-Ensenada, Ensenada Baja California (Mexico)

    2012-02-15

    Highlights: • The possible structures of rhenium nitride as a function of composition are analyzed. • The alloying energy is favorable for rhenium nitride in lamellar arrangements. • The structures produced by magnetron sputtering are metastable variations. • The structures produced by high-pressure high-temperature synthesis are stable configurations. • The lamellar structures are a new category of interstitial dissolutions. - Abstract: We report here a systematic first principles study of two classes of variable-composition rhenium nitride: (i) interstitial rhenium nitride as a solid solution and (ii) rhenium nitride in lamellar structures. The compounds in class (i) are cubic and hexagonal close-packed rhenium phases, with nitrogen in the octahedral and tetrahedral interstices of the metal, and they are formed without changes to the structure, except for slight distortions of the unit cells. In the compounds in class (ii), by contrast, the nitrogen inclusion provokes stacking faults in the parent metal structure. These faults create trigonal-prismatic sites where the nitrogen residence is energetically favored. This second class of compounds produces lamellar structures, where the nitrogen lamellas are inserted among multiple rhenium layers. The Re₃N and Re₂N phases produced recently by high-temperature and high-pressure synthesis belong to this class. The ratio of the nitrogen layers to the rhenium layers is given by the composition. While the first principles calculations point to higher stability for the lamellar structures as opposed to the interstitial phases, the experimental evidence presented here demonstrates that the interstitial classes are synthesizable by plasma methods. We conclude that rhenium nitrides possess polymorphism and that the two-dimensional lamellar structures might represent an emerging class of materials

  19. 77 FR 34211 - Modification of Multiple Compulsory Reporting Points; Continental United States, Alaska and Hawaii

    Science.gov (United States)

    2012-06-11

    ... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 71 [Docket No. FAA-2012-0130; Airspace Docket No. 12-AWA-2] RIN 2120-AA66 Modification of Multiple Compulsory Reporting Points; Continental United States, Alaska and Hawaii AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final...

  20. How Many Principles for Public Health Ethics?

    OpenAIRE

    Coughlin, Steven S.

    2008-01-01

    General moral (ethical) principles play a prominent role in certain methods of moral reasoning and ethical decision-making in bioethics and public health. Examples include the principles of respect for autonomy, beneficence, nonmaleficence, and justice. Some accounts of ethics in public health have pointed to additional principles related to social and environmental concerns, such as the precautionary principle and principles of solidarity or social cohesion. This article provides an overview...

  1. Consolidated principles for screening based on a systematic review and consensus process.

    Science.gov (United States)

    Dobrow, Mark J; Hagens, Victoria; Chafe, Roger; Sullivan, Terrence; Rabeneck, Linda

    2018-04-09

    In 1968, Wilson and Jungner published 10 principles of screening that often represent the de facto starting point for screening decisions today; 50 years on, are these principles still the right ones? Our objectives were to review published work that presents principles for population-based screening decisions since Wilson and Jungner's seminal publication, and to conduct a Delphi consensus process to assess the review results. We conducted a systematic review and modified Delphi consensus process. We searched multiple databases for articles published in English in 1968 or later that were intended to guide population-based screening decisions, described development and modification of principles, and presented principles as a set or list. Identified sets were compared for basic characteristics (e.g., number, categorization), a citation analysis was conducted, and principles were iteratively synthesized and consolidated into categories to assess evolution. Participants in the consensus process assessed the level of agreement with the importance and interpretability of the consolidated screening principles. We identified 41 sets and 367 unique principles. Each unique principle was coded to 12 consolidated decision principles that were further categorized as disease/condition, test/intervention or program/system principles. Program or system issues were the focus of 3 of Wilson and Jungner's 10 principles, but comprised almost half of all unique principles identified in the review. The 12 consolidated principles were assessed through 2 rounds of the consensus process, leading to specific refinements to improve their relevance and interpretability. No gaps or missing principles were identified. Wilson and Jungner's principles are remarkably enduring, but increasingly reflect a truncated version of contemporary thinking on screening that does not fully capture subsequent focus on program or system principles. Ultimately, this review and consensus process provides a

  2. Consolidated principles for screening based on a systematic review and consensus process

    Science.gov (United States)

    Hagens, Victoria; Chafe, Roger; Sullivan, Terrence; Rabeneck, Linda

    2018-01-01

    BACKGROUND: In 1968, Wilson and Jungner published 10 principles of screening that often represent the de facto starting point for screening decisions today; 50 years on, are these principles still the right ones? Our objectives were to review published work that presents principles for population-based screening decisions since Wilson and Jungner’s seminal publication, and to conduct a Delphi consensus process to assess the review results. METHODS: We conducted a systematic review and modified Delphi consensus process. We searched multiple databases for articles published in English in 1968 or later that were intended to guide population-based screening decisions, described development and modification of principles, and presented principles as a set or list. Identified sets were compared for basic characteristics (e.g., number, categorization), a citation analysis was conducted, and principles were iteratively synthesized and consolidated into categories to assess evolution. Participants in the consensus process assessed the level of agreement with the importance and interpretability of the consolidated screening principles. RESULTS: We identified 41 sets and 367 unique principles. Each unique principle was coded to 12 consolidated decision principles that were further categorized as disease/condition, test/intervention or program/system principles. Program or system issues were the focus of 3 of Wilson and Jungner’s 10 principles, but comprised almost half of all unique principles identified in the review. The 12 consolidated principles were assessed through 2 rounds of the consensus process, leading to specific refinements to improve their relevance and interpretability. No gaps or missing principles were identified. INTERPRETATION: Wilson and Jungner’s principles are remarkably enduring, but increasingly reflect a truncated version of contemporary thinking on screening that does not fully capture subsequent focus on program or system principles

  3. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    This article surveys variational principles, which play a significant role in mathematical theory, with emphasis on their physical aspects. Such principles have two principal uses: to represent the equations of a system in a succinct way, and to enable a particular computation in the system to be carried out with greater accuracy. The survey ranges widely, from the starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize in particular the advantage of basing finite element methods on variational principles. (A.B.)

  4. Existence and Multiplicity Results for Nonlinear Differential Equations Depending on a Parameter in Semipositone Case

    Directory of Open Access Journals (Sweden)

    Hailong Zhu

    2012-01-01

    Full Text Available The existence and multiplicity of solutions for second-order differential equations with a parameter are discussed in this paper. We are mainly concerned with the semipositone case. The analysis relies on the nonlinear alternative principle of Leray-Schauder and Krasnosel'skii's fixed point theorem in cones.

  5. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the ℓp-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted ℓ1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space
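
    For orientation only, one plausible reading of the cost function described above (the "sum of the costs of all the observations of source blocks"), written in our own notation rather than the paper's, is sketched below; the partition of the source vector x into blocks x_b and of the leadfield A into column blocks A_b is assumed.

```latex
\[
  \min_{x}\; \sum_{b} \bigl\| A_b\, x_b \bigr\|_2
  \quad\text{subject to}\quad \sum_{b} A_b\, x_b = y ,
\]
```

    that is, an ℓ1-type sum over blocks of the ℓ2 cost of each block's contribution to the observation, which is consistent with the block-wise sparsity of the solutions noted above.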

  6. Framework for assessing causality in disease management programs: principles.

    Science.gov (United States)

    Wilson, Thomas; MacDowell, Martin

    2003-01-01

    To credibly state that a disease management (DM) program "caused" a specific outcome it is required that metrics observed in the DM population be compared with metrics that would have been expected in the absence of a DM intervention. That requirement can be very difficult to achieve, and epidemiologists and others have developed guiding principles of causality by which credible estimates of DM impact can be made. This paper introduces those key principles. First, DM program metrics must be compared with metrics from a "reference population." This population should be "equivalent" to the DM intervention population on all factors that could independently impact the outcome. In addition, the metrics used in both groups should use the same defining criteria (ie, they must be "comparable" to each other). The degree to which these populations fulfill the "equivalent" assumption and metrics fulfill the "comparability" assumption should be stated. Second, when "equivalence" or "comparability" is not achieved, the DM managers should acknowledge this fact and, where possible, "control" for those factors that may impact the outcome(s). Finally, it is highly unlikely that one study will provide definitive proof of any specific DM program value for all time; thus, we strongly recommend that studies be ongoing, at multiple points in time, and at multiple sites, and, when observational study designs are employed, that more than one type of study design be utilized. Methodologically sophisticated studies that follow these "principles of causality" will greatly enhance the reputation of the important and growing efforts in DM.

  7. Achieving Integration in Mixed Methods Designs—Principles and Practices

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  8. Mechanical engineering principles

    CERN Document Server

    Bird, John

    2014-01-01

    A student-friendly introduction to core engineering topics. This book introduces mechanical principles and technology through examples and applications, enabling students to develop a sound understanding of both engineering principles and their use in practice. These theoretical concepts are supported by 400 fully worked problems, 700 further problems with answers, and 300 multiple-choice questions, all of which add up to give the reader a firm grounding on each topic. The new edition is up to date with the latest BTEC National specifications and can also be used on undergraduate courses in mecha

  9. Conservation and balance principles approach in NPP decommissioning

    International Nuclear Information System (INIS)

    Anton, V.

    1997-01-01

    In this work, some principles of conservation of mass, energy and activity level are formulated. When conditioning or treatment procedures for High Level Waste (HLW) are applied, the corresponding balance principles operate. It is important to note that the AECL computing code DECOM gives an analysis of different decommissioning options based on cost considerations. Our approach points out many possibilities which are to be taken into account in NPP decommissioning, besides the minimum-cost principle. We also noted other circumstances pointing to further conservation principles and to the corresponding balance principles. In our opinion, this is the first approach of this kind in the international literature. With the progress expected in decommissioning techniques, some of the considerations presented in this work will have to be developed and detailed. (author)

  10. The Independence of Markov's Principle in Type Theory

    DEFF Research Database (Denmark)

    Coquand, Thierry; Mannaa, Bassel

    2017-01-01

    In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.

  11. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  12. Quantum principles in field interactions

    International Nuclear Information System (INIS)

    Shirkov, D.V.

    1986-01-01

    The concept of a quantum principle is introduced as a principle whose formulation is based on specific quantum ideas and notions. We consider three such principles, viz. those of quantizability, local gauge symmetry, and supersymmetry, and their role in the development of quantum field theory (QFT). Concerning the first of these, we analyze the formal aspects and physical content of the renormalization procedure in QFT and its relation to ultraviolet divergences and the renormalization group. The quantizability principle is formulated as an existence condition for a self-consistent quantum version with a given mechanism of the field interaction. It is shown that the consecutive (from a historical point of view) use of these quantum principles places ever larger limitations on the possible forms of field interactions

  13. Variational principles in physics

    CERN Document Server

    Basdevant, Jean-Louis

    2007-01-01

    Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...

  14. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    Directory of Open Access Journals (Sweden)

    Barash Danny

    2008-04-01

    Full Text Available Abstract Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary
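
    As a small side illustration of the combinatorial growth mentioned above (this is not part of RNAmute itself), the snippet below enumerates all exactly-m-point mutants of a toy RNA sequence; their number is C(n, m)·3^m, i.e. O(n^m).

```python
from itertools import combinations, product
from math import comb

ALPHABET = "ACGU"

def m_point_mutants(seq, m):
    """Yield all sequences differing from `seq` at exactly m positions."""
    n = len(seq)
    for positions in combinations(range(n), m):
        alternatives = [[b for b in ALPHABET if b != seq[p]] for p in positions]
        for replacement in product(*alternatives):
            mutant = list(seq)
            for p, b in zip(positions, replacement):
                mutant[p] = b
            yield "".join(mutant)

seq = "GCGCUUCGCCGCGC"                       # toy 14-nt sequence
for m in (1, 2, 3):
    expected = comb(len(seq), m) * 3 ** m    # C(n, m) * 3^m mutants, i.e. O(n^m)
    counted = sum(1 for _ in m_point_mutants(seq, m))
    print(m, expected, counted)
```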

  15. X-ray diffraction imaging with the Multiple Inverse Fan Beam topology: Principles, performance and potential for security screening

    Energy Technology Data Exchange (ETDEWEB)

    Harding, G., E-mail: Geoffrey.Harding@Morphodetection.com [Morpho Detection Germany GmbH, Heselstuecken 3, 22453 Hamburg (Germany); Fleckenstein, H.; Kosciesza, D.; Olesinski, S.; Strecker, H.; Theedt, T.; Zienert, G. [Morpho Detection Germany GmbH, Heselstuecken 3, 22453 Hamburg (Germany)

    2012-07-15

    The steadily increasing number of explosive threat classes, including home-made explosives (HMEs), liquids, amorphous and gels (LAGs), is forcing up the false-alarm rates of security screening equipment. This development can best be countered by increasing the number of features available for classification. X-ray diffraction intrinsically offers multiple features for both solid and LAGs explosive detection, and is thus becoming increasingly important for false-alarm and cost reduction in both carry-on and checked baggage security screening. Following a brief introduction to X-ray diffraction imaging (XDI), which synthesizes in a single modality the image-forming and material-analysis capabilities of X-rays, the Multiple Inverse Fan Beam (MIFB) XDI topology is described. Physical relationships obtaining in such MIFB XDI components as the radiation source, collimators and room-temperature detectors are presented with experimental performances that have been achieved. Representative X-ray diffraction profiles of threat substances measured with a laboratory MIFB XDI system are displayed. The performance of Next-Generation (MIFB) XDI relative to that of the 2nd Generation XRD 3500™ screener (Morpho Detection Germany GmbH) is assessed. The potential of MIFB XDI, both for reducing the exorbitant cost of false alarms in hold baggage screening (HBS), as well as for combining 'in situ' liquid and solid explosive detection in carry-on luggage screening is outlined. - Highlights: • X-ray diffraction imaging (XDI) synthesizes analysis and imaging in one x-ray modality. • A novel XDI beam topology comprising multiple inverse fan-beams (MIFB) is described. • The MIFB topology is technically easy to realize and has high photon collection efficiency. • Applications are envisaged in checkpoint, hold baggage and cargo screening.

  16. Dynamic analysis of multiple nuclear-coupled boiling channels based on a multi-point reactor model

    International Nuclear Information System (INIS)

    Lee, J.D.; Pan Chin

    2005-01-01

    This work investigates the non-linear dynamics and stabilities of a multiple nuclear-coupled boiling channel system based on a multi-point reactor model using the Galerkin nodal approximation method. The nodal approximation method for the multiple boiling channels developed by Lee and Pan [Lee, J.D., Pan, C., 1999. Dynamics of multiple parallel boiling channel systems with forced flows. Nucl. Eng. Des. 192, 31-44] is extended to address the two-phase flow dynamics in the present study. The multi-point reactor model, modified from Uehiro et al. [Uehiro, M., Rao, Y.F., Fukuda, K., 1996. Linear stability analysis on instabilities of in-phase and out-of-phase modes in boiling water reactors. J. Nucl. Sci. Technol. 33, 628-635], is employed to study a multiple-channel system with unequal steady-state neutron density distribution. Stability maps, non-linear dynamics and effects of major parameters on the multiple nuclear-coupled boiling channel system subject to a constant total flow rate are examined. This study finds that the void-reactivity feedback and neutron interactions among subcores are coupled and their competing effects may influence the system stability under different operating conditions. For those cases with strong neutron interaction conditions, by strengthening the void-reactivity feedback, the nuclear-coupled effect on the non-linear dynamics may induce two unstable oscillation modes, the supercritical Hopf bifurcation and the subcritical Hopf bifurcation. Moreover, for those cases with weak neutron interactions, by quadrupling the void-reactivity feedback coefficient, period-doubling and complex chaotic oscillations may appear in a three-channel system under some specific operating conditions. A unique type of complex chaotic attractor may evolve from the Rossler attractor because of the coupled channel-to-channel thermal-hydraulic and subcore-to-subcore neutron interactions. Such a complex chaotic attractor has the imbedding dimension of 5 and the

  17. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
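
    For readers unfamiliar with the setting, the following is a schematic statement of the problem in our own notation (a scalar system is used for simplicity; the paper treats general linear fractional-order systems):

```latex
\[
  D^{\alpha} x(t) = a\,x(t) + u(t), \qquad
  u(t) = \sum_{i=1}^{N} A_i\,\delta(t - t_i),
\]
```

    Multiplying the equation by modulating functions and integrating by parts yields algebraic relations in the unknowns (nonlinear in the locations t_i, linear in the amplitudes A_i), consistent with the two systems of equations described above.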

  18. The uncertainty principle

    International Nuclear Information System (INIS)

    Martens, Hans.

    1991-01-01

    The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is not only discussed but also analyzed. For the discussion of the different aspects of the UP, the formalism of Davies and Ludwig is used instead of the more commonly used formalism of Neumann and Dirac. (author). 214 refs.; 23 figs

  19. Achieving integration in mixed methods designs-principles and practices.

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  20. Multiple Positive Solutions of a Nonlinear Four-Point Singular Boundary Value Problem with a p-Laplacian Operator on Time Scales

    Directory of Open Access Journals (Sweden)

    Shihuang Hong

    2009-01-01

    Full Text Available We present sufficient conditions for the existence of at least twin or triple positive solutions of a nonlinear four-point singular boundary value problem with a p-Laplacian dynamic equation on a time scale. Our results are obtained via some new multiple fixed point theorems.

  1. Search for Pauli exclusion principle violating atomic transitions and electron decay with a p-type point contact germanium detector

    Energy Technology Data Exchange (ETDEWEB)

    Abgrall, N.; Bradley, A.W.; Chan, Y.D.; Mertens, S.; Poon, A.W.P. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Arnquist, I.J.; Hoppe, E.W.; Kouzes, R.T.; LaFerriere, B.D.; Orrell, J.L. [Pacific Northwest National Laboratory, Richland, WA (United States); Avignone, F.T. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); Barabash, A.S.; Konovalov, S.I.; Yumatov, V. [National Research Center ' ' Kurchatov Institute' ' Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Bertrand, F.E.; Galindo-Uribarri, A.; Radford, D.C.; Varner, R.L.; White, B.R.; Yu, C.H. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Brudanin, V.; Shirchenko, M.; Vasilyev, S.; Yakushev, E.; Zhitnikov, I. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Busch, M. [Duke University, Department of Physics, Durham, NC (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); Buuck, M.; Cuesta, C.; Detwiler, J.A.; Gruszko, J.; Guinn, I.S.; Leon, J.; Robertson, R.G.H. [University of Washington, Department of Physics, Center for Experimental Nuclear Physics and Astrophysics, Seattle, WA (United States); Caldwell, A.S.; Christofferson, C.D.; Dunagan, C.; Howard, S.; Suriano, A.M. [South Dakota School of Mines and Technology, Rapid City, SD (United States); Chu, P.H.; Elliott, S.R.; Goett, J.; Massarczyk, R.; Rielage, K. [Los Alamos National Laboratory, Los Alamos, NM (United States); Efremenko, Yu. [University of Tennessee, Department of Physics and Astronomy, Knoxville, TN (United States); Ejiri, H. [Osaka University, Research Center for Nuclear Physics, Ibaraki, Osaka (Japan); Finnerty, P.S.; Gilliss, T.; Giovanetti, G.K.; Henning, R.; Howe, M.A.; MacMullin, J.; Meijer, S.J.; O' Shaughnessy, C.; Rager, J.; Shanks, B.; Trimble, J.E.; Vorren, K.; Xu, W. [Triangle Universities Nuclear Laboratory, Durham, NC (United States); University of North Carolina, Department of Physics and Astronomy, Chapel Hill, NC (United States); Green, M.P. [North Carolina State University, Department of Physics, Raleigh, NC (United States); Oak Ridge National Laboratory, Oak Ridge, TN (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); Guiseppe, V.E.; Tedeschi, D.; Wiseman, C. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); Jasinski, B.R. [University of South Dakota, Department of Physics, Vermillion, SD (United States); Keeter, K.J. [Black Hills State University, Department of Physics, Spearfish, SD (United States); Kidd, M.F. [Tennessee Tech University, Cookeville, TN (United States); Martin, R.D. [Queen' s University, Department of Physics, Engineering Physics and Astronomy, Kingston, ON (Canada); Romero-Romero, E. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); University of Tennessee, Department of Physics and Astronomy, Knoxville, TN (United States); Vetter, K. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); University of California, Department of Nuclear Engineering, Berkeley, CA (United States); Wilkerson, J.F. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); University of North Carolina, Department of Physics and Astronomy, Chapel Hill, NC (United States)

    2016-11-15

    A search for Pauli-exclusion-principle-violating Kα electron transitions was performed using 89.5 kg-d of data collected with a p-type point contact high-purity germanium detector operated at the Kimballton Underground Research Facility. A lower limit on the transition lifetime of 5.8 × 10³⁰ s at 90% C.L. was set by looking for a peak at 10.6 keV resulting from the X-ray and Auger electrons present following the transition. A similar analysis was done to look for the decay of atomic K-shell electrons into neutrinos, resulting in a lower limit of 6.8 × 10³⁰ s at 90% C.L. It is estimated that the Majorana Demonstrator, a 44 kg array of p-type point contact detectors that will search for the neutrinoless double-beta decay of ⁷⁶Ge, could improve upon these exclusion limits by an order of magnitude after three years of operation. (orig.)

  2. Radiation chemistry; principles and applications

    International Nuclear Information System (INIS)

    Aziz, F.; Rodgers, M.A.J.

    1994-01-01

    The book attempts to present those fields which depend on the principles of radiation chemistry. The first four chapters are a prelude on radiation chemistry principles: how ionizing radiation interacts with matter, the primary results of these interactions, the kinetic laws these primary interactions follow, and the equipment necessary for qualitative studies. The following chapters cover the principal fields of radiation chemistry. The last six chapters discuss the principles of radiation chemistry from the physical and chemical points of view. In this connection, the fundamental effects of radiation on biological systems are emphasised: on the one hand, their importance for hygiene and safety as well as for neoplasm therapy is discussed; on the other hand, the industrial importance of radiation is presented.

  3. Determine point-to-point networking interactions using regular expressions

    Directory of Open Access Journals (Sweden)

    Konstantin S. Deev

    2015-06-01

    Full Text Available As the Internet grows and becomes more popular, the number of concurrent data flows increases, and so does the requested bandwidth. Providers and corporate customers need the ability to identify point-to-point interactions. The best approach is to use special software and hardware implementations that distribute the load internally, using the principles and approaches described, in particular, in this paper. This paper presents the principles of building a system that searches for regular expression matches using computation on the graphics adapter of a server station. The significant computing power and parallel execution capability of modern graphics processors allow large amounts of data to be inspected against sets of rules. Using these characteristics can increase computing power by a factor of 30 to 40 compared to the same setup on the central processing unit. The potential increase in bandwidth capacity could be used in systems that provide packet analysis, firewalls and network anomaly detectors.

  4. Principles of Eliminating Access Control Lists within a Domain

    Directory of Open Access Journals (Sweden)

    Vic Grout

    2012-04-01

    Full Text Available The infrastructure of large networks is broken down into areas that have a common security policy, called domains. Security within a domain is commonly implemented at all nodes. However, this can have a negative effect on performance, since it introduces a delay associated with packet filtering. When Access Control Lists (ACLs) are used within a router for this purpose, a significant overhead is introduced by this processing. It is likely that identical checks are made at multiple points within a domain before a packet reaches its destination. Therefore, by eliminating ACLs within a domain and giving the ingress/egress points equivalent functionality, an improvement in overall performance can be obtained. This paper considers the effect of the delays when using router operating systems offering different levels of functionality. It considers the factors which contribute to the delay, particularly those due to ACLs, and a model is created using theoretical principles modified by practical calculation. Additionally, this paper provides an example of an optimized solution which reduces the delay through network routers by distributing the security rules to the ingress/egress points of the domain without affecting the security policy.
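
    A toy sketch of the consolidation idea (ours, not the paper's model): collect the filtering rules configured on interior routers and install their union at the domain's ingress/egress points, so each packet is checked once at the edge instead of at every hop. Real deployments would also need path-coverage and rule-ordering analysis, which this deliberately ignores.

```python
# Toy illustration only: push interior filtering rules to the domain edge.
# Rules are simplified to (src, dst, dst_port) deny entries; ordering,
# wildcards and path coverage are not modelled.

def consolidate_to_ingress(interior_acls, ingress_points):
    """interior_acls: {router_name: set of (src, dst, dst_port) deny rules}.
    Returns the union of all interior rules, replicated at each ingress point."""
    merged = set()
    for rules in interior_acls.values():
        merged |= rules
    return {ingress: set(merged) for ingress in ingress_points}

interior = {
    "r1": {("10.0.0.0/8", "192.168.1.10", 23)},
    "r2": {("0.0.0.0/0", "192.168.2.20", 445),
           ("10.0.0.0/8", "192.168.1.10", 23)},
}
edge_acls = consolidate_to_ingress(interior, ["edge-a", "edge-b"])
print(edge_acls["edge-a"])   # both unique rules now appear once at the edge
```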

  5. Electrical principles 3 checkbook

    CERN Document Server

    Bird, J O

    2013-01-01

    Electrical Principles 3 Checkbook aims to introduce students to the basic electrical principles needed by technicians in electrical engineering, electronics, and telecommunications.The book first tackles circuit theorems, single-phase series A.C. circuits, and single-phase parallel A.C. circuits. Discussions focus on worked problems on parallel A.C. circuits, worked problems on series A.C. circuits, main points concerned with D.C. circuit analysis, worked problems on circuit theorems, and further problems on circuit theorems. The manuscript then examines three-phase systems and D.C. transients

  6. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    A laser point cloud contains only intensity information, so it is necessary for visual interpretation to obtain color information from another sensor. Cameras can provide texture, color, and other information about the corresponding objects. Points carrying the color of their corresponding image pixels can be used to generate a color point-cloud, which aids the visualization, classification and modeling of point-clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS); the principles and processes for generating a color point-cloud therefore differ between systems. The most prominent feature of panoramic images is their 360-degree field of view in the horizontal direction, capturing as much image information around the camera as possible. In this paper, we introduce a method to generate a color point-cloud from a panoramic image and a laser point-cloud, and derive the equation for the correspondence between points in panoramic images and laser point-clouds. The fusion of panoramic image and laser point-cloud is based on the collinearity of three points: the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point. The experimental results show that the proposed algorithm and formulae in this paper are correct
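
    A minimal sketch of the collinearity idea for an ideal equirectangular panorama is given below (our simplification, not the paper's exact equations): the vector from the panorama centre to the object point is converted to spherical angles and then to pixel coordinates; the camera pose is assumed to be aligned with the world axes.

```python
import numpy as np

def point_to_panorama_pixel(point, cam_center, img_width, img_height):
    """Map a 3D point to (column, row) on an ideal equirectangular panorama
    centred at cam_center. Simplified illustration: world and camera axes
    are assumed aligned, so no rotation is applied."""
    v = np.asarray(point, float) - np.asarray(cam_center, float)
    x, y, z = v
    r = np.linalg.norm(v)
    azimuth = np.arctan2(y, x)              # -pi .. pi, around the vertical axis
    elevation = np.arcsin(z / r)            # -pi/2 .. pi/2
    col = (azimuth + np.pi) / (2 * np.pi) * img_width
    row = (np.pi / 2 - elevation) / np.pi * img_height
    return col, row

def colorize(points, cam_center, image):
    """Attach the RGB of the corresponding panorama pixel to each point."""
    h, w, _ = image.shape
    colors = []
    for p in points:
        col, row = point_to_panorama_pixel(p, cam_center, w, h)
        colors.append(image[min(int(row), h - 1), min(int(col), w - 1)])
    return np.array(colors)
```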

  7. The application of pragmatic principles in competitive business writing

    Directory of Open Access Journals (Sweden)

    Wu Haihong

    2016-01-01

    Full Text Available Business English writing, as an important means of communication, plays a vital role in international business communication. And pragmatic principle, as the universal principle, exists in all communication situations. This paper gives a brief introduction to the pragmatic principles and business English writing principles and illustrates the high consistency between these principles. By analyzing samples, it also points out the instructive significance of pragmatic principles in competitive business English writing. To a certain extent, this provides a theoretical support for the research of business English writing.

  8. First principles calculation of point defects and mobility degradation in bulk AlSb for radiation detection application

    International Nuclear Information System (INIS)

    Lordi, V; Aberg, D; Erhart, P; Wu, K J

    2007-01-01

    The development of high resolution, room temperature semiconductor radiation detectors requires the introduction of materials with increased carrier mobility-lifetime (μτ) product, while having a band gap in the 1.4-2.2 eV range. AlSb is a promising material for this application. However, systematic improvements in the material quality are necessary to achieve an adequate μτ product. We are using a combination of simulation and experiment to develop a fundamental understanding of the factors which affect detector material quality. First principles calculations are used to study the microscopic mechanisms of mobility degradation from point defects and to calculate the intrinsic limit of mobility from phonon scattering. We use density functional theory (DFT) to calculate the formation energies of native and impurity point defects, to determine their equilibrium concentrations as a function of temperature and charge state. Perturbation theory via the Born approximation is coupled with Boltzmann transport theory to calculate the contribution toward mobility degradation of each type of point defect, using DFT-computed carrier scattering rates. A comparison is made to measured carrier concentrations and mobilities from AlSb crystals grown in our lab. We find our predictions in good quantitative agreement with experiment, allowing optimized annealing conditions to be deduced. A major result is the determination of oxygen impurity as a severe mobility killer, despite the ability of oxygen to compensation dope AlSb and reduce the net carrier concentration. In this case, increased resistivity is not a good indicator of improved material performance, due to the concomitant sharp reduction in μτ
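
    For context, the defect formation energy and equilibrium concentration in such DFT studies are typically written as follows (standard formalism; the exact conventions and correction terms used in the work above may differ):

```latex
\[
  E^{f}\!\left[X^{q}\right]
  = E_{\mathrm{tot}}\!\left[X^{q}\right] - E_{\mathrm{tot}}\!\left[\mathrm{bulk}\right]
    - \sum_{i} n_i \mu_i + q\left(E_{\mathrm{F}} + E_{\mathrm{VBM}}\right) + E_{\mathrm{corr}},
  \qquad
  c \simeq N_{\mathrm{sites}}\, g\, e^{-E^{f}/k_{\mathrm{B}}T}.
\]
```

    Here n_i atoms with chemical potential μ_i are added (n_i > 0) or removed (n_i < 0), q is the defect charge state, E_F is the Fermi level referenced to the valence-band maximum E_VBM, E_corr is a finite-size correction, and g is a degeneracy factor.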

  9. Does the relativity principle violate?

    International Nuclear Information System (INIS)

    Barashenkov, V.S.

    1994-01-01

    Theoretical and experimental data concerning the possible existence in Nature of a preferred reference frame, with a violation of the principle of relativity, are considered. The points of view of Einstein and Lorentz are compared. Although some experiments are known which, in the opinion of their authors, indicate a violation of the relativity principle, persuasive evidence supporting this conclusion is absent at present. Proposals for new experiments in this area, particularly with electron spin precession, are discussed. 55 refs., 4 figs

  10. Multiple types of motives don't multiply the motivation of West Point cadets.

    Science.gov (United States)

    Wrzesniewski, Amy; Schwartz, Barry; Cong, Xiangyu; Kane, Michael; Omar, Audrey; Kolditz, Thomas

    2014-07-29

    Although people often assume that multiple motives for doing something will be more powerful and effective than a single motive, research suggests that different types of motives for the same action sometimes compete. More specifically, research suggests that instrumental motives, which are extrinsic to the activities at hand, can weaken internal motives, which are intrinsic to the activities at hand. We tested whether holding both instrumental and internal motives yields negative outcomes in a field context in which various motives occur naturally and long-term educational and career outcomes are at stake. We assessed the impact of the motives of over 10,000 West Point cadets over the period of a decade on whether they would become commissioned officers, extend their officer service beyond the minimum required period, and be selected for early career promotions. For each outcome, motivation internal to military service itself predicted positive outcomes; a relationship that was negatively affected when instrumental motives were also in evidence. These results suggest that holding multiple motives damages persistence and performance in educational and occupational contexts over long periods of time.

  11. Existence, Multiplicity, and Stability of Positive Solutions of a Predator-Prey Model with Dinosaur Functional Response

    Directory of Open Access Journals (Sweden)

    Xiaozhou Feng

    2017-01-01

    Full Text Available We investigate the property of positive solutions of a predator-prey model with Dinosaur functional response under Dirichlet boundary conditions. Firstly, using the comparison principle and fixed point index theory, the sufficient conditions and necessary conditions on coexistence of positive solutions of a predator-prey model with Dinosaur functional response are established. Secondly, by virtue of bifurcation theory, perturbation theory of eigenvalues, and the fixed point index theory, we establish the bifurcation of positive solutions of the model and obtain the stability and multiplicity of the positive solution under certain conditions. Furthermore, the local uniqueness result is studied when b and d are small enough. Finally, we investigate the multiplicity, uniqueness, and stability of positive solutions when k>0 is sufficiently large.

  12. Bridges between multiple-point geostatistics and texture synthesis: Review and guidelines for future research

    Science.gov (United States)

    Mariethoz, Gregoire; Lefebvre, Sylvain

    2014-05-01

    Multiple-Point Simulations (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similarly to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. Both domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historic development and share similar concepts. These disciplines have however remained separated, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms present drastically increased computational efficiency, patterns reproduction and user control. At the same time, MPS developed ways to condition models to spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.

  13. A micro dew point sensor with a thermal detection principle

    Science.gov (United States)

    Kunze, M.; Merz, J.; Hummel, W.-J.; Glosch, H.; Messner, S.; Zengerle, R.

    2012-01-01

    We present a dew point temperature sensor with thermal detection of condensed water on a thin membrane, fabricated by silicon micromachining. The membrane (600 × 600 × ~1 µm³) is part of a silicon chip and contains a heating element as well as a thermopile for temperature measurement. By dynamically heating the membrane and simultaneously analyzing the transient increase of its temperature, it is detected whether condensed water is on the membrane or not. To cool the membrane down, a Peltier cooler is used and electronically controlled in such a way that the temperature of the membrane is constantly held at the value where condensation of water begins. This temperature is measured and output as the dew point temperature. The sensor system works over a wide range of dew point temperatures, from 1 K to 44 K below air temperature. Experimental investigations proved that the deviation of the measured dew point temperatures from reference values is below ±0.2 K in an air temperature range of 22 to 70 °C. At low dew point temperatures of -20 °C (air temperature = 22 °C) the deviation increases to nearly -1 K.
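
    The rough Python sketch below shows one way the detection-and-control idea described above could be organized (all callbacks, thresholds and step sizes are hypothetical; the actual sensor electronics are certainly implemented differently): a heat pulse whose initial temperature rise is damped by the extra thermal load of condensed water signals "wet", and the Peltier setpoint is nudged so the membrane hovers at the onset of condensation.

```python
# Sketch of the control idea only (all functions and thresholds hypothetical).

def detect_condensation(measure_transient, rise_threshold):
    """Heat pulse + thermopile readout: a small initial rise suggests water."""
    initial_rise = measure_transient()       # e.g. dT over the first few ms
    return initial_rise < rise_threshold

def dew_point_controller(measure_transient, set_peltier, read_membrane_temp,
                         rise_threshold, step=0.05, n_cycles=1000):
    t_set = read_membrane_temp()
    for _ in range(n_cycles):
        if detect_condensation(measure_transient, rise_threshold):
            t_set += step        # water present: warm slightly to re-evaporate
        else:
            t_set -= step        # dry membrane: cool slightly toward condensation
        set_peltier(t_set)
    return read_membrane_temp()  # oscillates around the dew point temperature
```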

  14. A micro dew point sensor with a thermal detection principle

    International Nuclear Information System (INIS)

    Kunze, M; Merz, J; Glosch, H; Messner, S; Zengerle, R; Hummel, W-J

    2012-01-01

    We present a dew point temperature sensor with thermal detection of condensed water on a thin membrane, fabricated by silicon micromachining. The membrane (600 × 600 × ∼1 µm³) is part of a silicon chip and contains a heating element as well as a thermopile for temperature measurement. By dynamically heating the membrane and simultaneously analyzing the transient increase of its temperature, it is detected whether condensed water is on the membrane or not. To cool the membrane down, a Peltier cooler is used and electronically controlled in such a way that the temperature of the membrane is constantly held at the value where condensation of water begins. This temperature is measured and output as the dew point temperature. The sensor system works over a wide range of dew point temperatures, from 1 K to 44 K below air temperature. Experimental investigations proved that the deviation of the measured dew point temperatures from reference values is below ±0.2 K in an air temperature range of 22 to 70 °C. At low dew point temperatures of −20 °C (air temperature = 22 °C) the deviation increases to nearly −1 K

  15. Assisting People with Multiple Disabilities by Improving Their Computer Pointing Efficiency with an Automatic Target Acquisition Program

    Science.gov (United States)

    Shih, Ching-Hsiang; Shih, Ching-Tien; Peng, Chin-Ling

    2011-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance through an Automatic Target Acquisition Program (ATAP) and a newly developed mouse driver (i.e. a new mouse driver replaces standard mouse driver, and is able to monitor mouse movement and intercept click action). Initially, both…

  16. Higher moments of net kaon multiplicity distributions at RHIC energies for the search of QCD Critical Point at STAR

    Directory of Open Access Journals (Sweden)

    Sarkar Amal

    2013-11-01

    Full Text Available In this paper we report measurements of the moments mean (M), standard deviation (σ), skewness (S) and kurtosis (κ) of the net-kaon multiplicity distribution at midrapidity from Au+Au collisions at √sNN = 7.7 to 200 GeV in the STAR experiment at RHIC, in an effort to locate the critical point in the QCD phase diagram. These moments and their products are related to the thermodynamic susceptibilities of conserved quantities such as net baryon number, net charge, and net strangeness, as well as to the correlation length of the system. A non-monotonic behavior of these variables indicates the presence of the critical point. In this work we also present the moment products Sσ and κσ² of the net-kaon multiplicity distribution as a function of collision centrality and energy. The energy and centrality dependence of the higher moments of net kaons and their products have been compared with the Poisson expectation and with simulations from AMPT, which does not include a critical point. From the measurements at all seven available beam energies, we find no evidence for a critical point in the QCD phase diagram for √sNN below 200 GeV.
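
    As an illustration of the quantities involved (using a toy Skellam-like sample rather than STAR data), the snippet below computes M, σ, S, κ and the products Sσ and κσ² from event-by-event net-kaon numbers; for a Skellam baseline, Sσ = (μ1 - μ2)/(μ1 + μ2) and κσ² = 1.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)

# Toy event-by-event net-kaon numbers: difference of two Poisson counts
# (Skellam-like baseline), standing in for real data.
n_kplus = rng.poisson(5.0, size=100_000)
n_kminus = rng.poisson(4.0, size=100_000)
net_k = n_kplus - n_kminus

M = net_k.mean()
sigma = net_k.std(ddof=0)
S = skew(net_k)
kappa = kurtosis(net_k)          # excess kurtosis, as used for kappa * sigma^2

print(f"M = {M:.3f}, sigma = {sigma:.3f}, S = {S:.3f}, kappa = {kappa:.3f}")
print(f"S*sigma       = {S * sigma:.3f}   (Skellam baseline: {1/9:.3f})")
print(f"kappa*sigma^2 = {kappa * sigma**2:.3f}   (Skellam baseline: 1)")
```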

  17. Microhydrodynamics principles and selected applications

    CERN Document Server

    Kim, Sangtae; Brenner, Howard

    1991-01-01

    Microhydrodynamics: Principles and Selected Applications presents analytical and numerical methods for describing motion of small particles suspended in viscous fluids. The text first covers the fundamental principles of low-Reynolds-number flow, including the governing equations and fundamental theorems; the dynamics of a single particle in a flow field; and hydrodynamic interactions between suspended particles. Next, the book deals with the advances in the mathematical and computational aspects of viscous particulate flows that point to innovations for large-scale simulations on parallel co

  18. The fundamental principles of the physical protection, the group of six point of view

    International Nuclear Information System (INIS)

    Claeys, M.; Carnas, L.; Robeyns, G.; Rommevaux, G.; Venot, R.; Hagemann, A.; Fontaneda Gonzalez, A.; Gimenez Gonzalez, S.; Isaksson, S.G.; Wager, K.; Price, C.

    2001-01-01

    This paper presents the joint experience of the Group of Six in the field of physical protection against the theft or unauthorized removal of nuclear material and against the sabotage of nuclear material and nuclear facilities, which emerged from the joint discussion. Several fundamental principles stem from this experience. Of course the particular terms and conditions of the implementation of these principles are specific to each country. (authors)

  19. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    Science.gov (United States)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar
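
    A minimal numpy sketch of the channel assignment described above is given below (the study itself used an OPALS batch script; the percentile-based scaling and the 16-bit range typical of LAS RGB fields are our assumptions):

```python
import numpy as np

def scale_to_16bit(values, low_pct=2, high_pct=98):
    """Robustly rescale an attribute to the 0..65535 range (16-bit LAS RGB)."""
    lo, hi = np.percentile(values, [low_pct, high_pct])
    clipped = np.clip(values, lo, hi)
    return ((clipped - lo) / max(hi - lo, 1e-12) * 65535).astype(np.uint16)

def attributes_to_rgb(amplitude, echo_width, height_above_dtm):
    """Colour mapping described above: amplitude -> R, echo width -> G, nDSM -> B."""
    return (scale_to_16bit(amplitude),
            scale_to_16bit(echo_width),
            scale_to_16bit(height_above_dtm))

# Toy attribute arrays standing in for real ALS point attributes
n = 1000
rng = np.random.default_rng(0)
r, g, b = attributes_to_rgb(rng.gamma(2.0, 50.0, n),      # echo amplitude
                            rng.normal(4.0, 1.0, n),      # echo width [ns]
                            rng.uniform(0.0, 30.0, n))    # height above DTM [m]
print(r[:5], g[:5], b[:5])
```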

  20. Progress in classical and quantum variational principles

    International Nuclear Information System (INIS)

    Gray, C G; Karl, G; Novikov, V A

    2004-01-01

    We review the development and practical uses of a generalized Maupertuis least action principle in classical mechanics in which the action is varied under the constraint of fixed mean energy for the trial trajectory. The original Maupertuis (Euler-Lagrange) principle constrains the energy at every point along the trajectory. The generalized Maupertuis principle is equivalent to Hamilton's principle. Reciprocal principles are also derived for both the generalized Maupertuis and the Hamilton principles. The reciprocal Maupertuis principle is the classical limit of Schroedinger's variational principle of wave mechanics and is also very useful to solve practical problems in both classical and semiclassical mechanics, in complete analogy with the quantum Rayleigh-Ritz method. Classical, semiclassical and quantum variational calculations are carried out for a number of systems, and the results are compared. Pedagogical as well as research problems are used as examples, which include nonconservative as well as relativistic systems. '... the most beautiful and important discovery of Mechanics.' Lagrange to Maupertuis (November 1756)
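
    A hedged LaTeX sketch of the constraint structure described in the abstract, using assumed notation (W for the abbreviated action, S for Hamilton's action, H for the Hamiltonian along the trial trajectory, and a bar for the time average):

    ```latex
    % Original Maupertuis (Euler-Lagrange) principle: vary the abbreviated action W
    % with the energy E held fixed at every point of the trial trajectory
    \delta W = 0, \qquad W \equiv \int \mathbf{p}\cdot d\mathbf{q}, \qquad E = \text{const along the path}

    % Generalized Maupertuis principle: only the mean energy of the trial trajectory
    % is constrained; this version is stated to be equivalent to Hamilton's principle
    % \delta S = 0 with S = \int L\, dt over a fixed time interval T
    \delta W = 0 \quad \text{subject to} \quad
    \bar{E} \equiv \frac{1}{T}\int_{0}^{T} H\, dt = \text{const}
    ```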

  1. Ten guiding principles for youth mental health services.

    Science.gov (United States)

    Hughes, Frank; Hebel, Lisa; Badcock, Paul; Parker, Alexandra G

    2018-06-01

    Guiding principles are arguably central to the development of any health service. The aim of this article is to report on the outcomes of a youth mental health (YMH) community of practice (CoP), which identified a range of guiding principles that provide a clear point of comparison for the only other set of principles for YMH service delivery proposed to date. A YMH CoP was established in 2010 as part of the Victorian State Government approach to improving YMH care. An initial literature search was undertaken to locate articles on YMH service delivery. A number of common themes were identified, which the YMH community of practice (YMHCoP) members then elaborated upon by drawing from their collective experience of the YMH sector. The resultant themes were then refined through subsequent group discussions to derive a definitive set of guiding principles. These principles were then augmented by a second literature search conducted in July 2015. Fifteen key themes were derived from the initial literature search and YMH CoP discussions. These were refined by the YMH CoP to produce 10 guiding principles for YMH service development. These are discussed through reference to the relevant literature, using the only other article on principles of YMH service delivery as a notable point of comparison. The 10 principles identified may be useful for quality improvement and are likely to have international relevance. We suggest the timely pursuit of an international consensus on guiding principles for service delivery under the auspices of a peak body for YMH. © 2017 John Wiley & Sons Australia, Ltd.

  2. The several faces of the cosmological principle

    Energy Technology Data Exchange (ETDEWEB)

    Beisbart, Claus [TU Dortmund (Germany). Fakultaet 14, Institut fuer Philosophie und Politikwissenschaft

    2010-07-01

    Much work in relativistic cosmology relies upon the cosmological principle. Very roughly, this principle has it that the universe is spatially homogeneous and isotropic. However, if the principle is to do some work, it has to be rendered more precise. The aim of this talk is to show that such a precisification significantly depends on the theoretical framework adopted and on its ontology. Moreover, it is shown that present-day cosmology uses the principle in different versions that do not fit together nicely. Whereas, in theoretical cosmology, the principle is spelt out as a requirement on space-time manifolds, observational cosmology cashes out the principle using the notion of a random process. I point out some philosophical problems that arise in this context. My conclusion is that the cosmological principle is not a very precise hypothesis, but rather a rough idea that has several faces in contemporary cosmology.

  3. Multiple solid-phase microextraction

    NARCIS (Netherlands)

    Koster, EHM; de Jong, GJ

    2000-01-01

    Theoretical aspects of multiple solid-phase microextraction are described and the principle is illustrated with the extraction of lidocaine from aqueous solutions. With multiple extraction under non-equilibrium conditions considerably less time is required in order to obtain an extraction yield that
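
    The idea behind multiple extraction is usually formalized with a geometric-series argument: if successive extraction steps remove amounts a_i = a_1·q^(i-1), the total extractable amount is a_1/(1 - q), so a few quick non-equilibrium extractions suffice to estimate it. The Python sketch below illustrates that generic argument; it is not necessarily the exact treatment of Koster and de Jong, and the example amounts are invented.

    ```python
    import numpy as np

    def total_from_multiple_extractions(amounts):
        """
        Estimate the total extractable analyte from a few successive extractions,
        assuming the amount removed per step decays geometrically: a_i = a_1 * q**(i-1).
        The total then follows from the geometric series: a_1 / (1 - q).
        """
        amounts = np.asarray(amounts, dtype=float)
        # Fit q from the decay of successive amounts (least squares on the log scale)
        q = np.exp(np.polyfit(np.arange(len(amounts)), np.log(amounts), 1)[0])
        return amounts[0] / (1.0 - q)

    # Example: three consecutive SPME extractions of an analyte (arbitrary units)
    print(total_from_multiple_extractions([100.0, 60.0, 36.0]))  # ~250 for q = 0.6
    ```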

  4. On the functional organization and operational principles of the motor cortex

    DEFF Research Database (Denmark)

    Capaday, Charles; Ethier, Christian; Van Vreeswijk, Carl

    2013-01-01

    Recent studies on the functional organization and operational principles of the motor cortex (MCx), taken together, strongly support the notion that the MCx controls the muscle synergies subserving movements in an integrated manner. For example, during pointing the shoulder, elbow and wrist muscles appear to be controlled as a coupled functional system, rather than singly and separately. The recurrent pattern of intrinsic synaptic connections between motor cortical points is likely part of the explanation for this operational principle. So too is the reduplicated, non-contiguous and intermingled … of the movements evoked by activation of each point on its own. This operational principle may simplify the synthesis of motor commands. We will discuss two possible mechanisms that may explain linear summation of outputs. We have observed that the final posture of the arm when pointing to a given spatial location …

  5. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Science.gov (United States)

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulation of total nitrogen (TN) loss intensity of all 38 subbasins, spatial distribution characteristics of nitrogen loss and critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process, respectively. Furthermore, multiple linear correlation analysis was conducted to analyze the contribution of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences of TN loss in the Shanmei Reservoir watershed at different time scales, and the spatial differentiation degree of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, while the effect of precipitation and runoff on nitrogen loss was apparent only in months without fertilization and in several storm flood processes occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  6. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
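
    The abstract gives no implementation details, so the following is only a generic bit-sampling LSH sketch for binary patterns under Hamming distance, showing how hashing narrows the search for patterns similar to a target; the pattern size, number of tables and stand-in data are assumptions, not LSHSIM itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def build_hash_tables(patterns, n_tables=4, bits_per_hash=8):
        """
        Bit-sampling LSH for binary (0/1 categorical) patterns under Hamming distance.
        Each table hashes a pattern by a random subset of its positions, so similar
        patterns tend to collide in at least one table.
        """
        n_dims = patterns.shape[1]
        tables = []
        for _ in range(n_tables):
            idx = rng.choice(n_dims, size=bits_per_hash, replace=False)
            buckets = {}
            for i, p in enumerate(patterns):
                buckets.setdefault(tuple(p[idx]), []).append(i)
            tables.append((idx, buckets))
        return tables

    def query(target, patterns, tables):
        """Return the index of the stored pattern most similar to `target` among LSH candidates."""
        candidates = set()
        for idx, buckets in tables:
            candidates.update(buckets.get(tuple(target[idx]), []))
        if not candidates:
            candidates = range(len(patterns))  # fall back to a full scan
        return min(candidates, key=lambda i: np.count_nonzero(patterns[i] != target))

    patterns = rng.integers(0, 2, size=(5000, 81))   # e.g. flattened 9x9 binary templates
    target = patterns[123].copy()
    target[:3] ^= 1                                   # perturb a few cells
    print(query(target, patterns, build_hash_tables(patterns)))  # most likely 123
    ```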

  7. Critical point predication device

    International Nuclear Information System (INIS)

    Matsumura, Kazuhiko; Kariyama, Koji.

    1996-01-01

    Predicting the critical point with the existing inverse multiplication method has been a complicated operation, and the effective multiplication factor could not be plotted directly, which degraded the accuracy of the prediction. The present invention comprises a detector counting memory section for memorizing the counts sent from a power detector which monitors the reactor power, an inverse multiplication factor calculation section for calculating the inverse multiplication factor based on the initial counts and the current counts of the power detector, and a critical point prediction section for predicting criticality by the inverse multiplication method relative to effective multiplication factors corresponding to the state of the reactor core, determined in advance for each case. In addition, a reactor core characteristic calculation section is added for analyzing the effective multiplication factor depending on the state of the reactor core. Then, if the margin to criticality falls below a predetermined value during the approach to criticality, an alarm is generated and the approach is stopped when the succeeding operation is predicted to produce a reactor period beyond a predetermined value. With these features, the critical point can be predicted easily during the approach to criticality, greatly reducing the operator's burden and improving the handling of the operation. (N.H.)
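
    As commonly described, the inverse multiplication (1/M) method extrapolates 1/M = C0/C against a control parameter (fuel loading, control rod position, etc.) to the point where it reaches zero. The sketch below illustrates that extrapolation with invented count data; it is a generic textbook version, not the algorithm of the patented device.

    ```python
    import numpy as np

    def predict_critical_point(control_values, counts, initial_counts):
        """
        Inverse multiplication (1/M) method: M ~ C / C0, so 1/M = C0 / C falls
        toward zero as the core approaches criticality.  A straight-line fit of
        1/M versus the control parameter is extrapolated to 1/M = 0.
        """
        inv_m = initial_counts / np.asarray(counts, dtype=float)
        slope, intercept = np.polyfit(control_values, inv_m, 1)
        return -intercept / slope  # control value at which the fit crosses 1/M = 0

    # Hypothetical counts from a source-range detector at successive loading steps
    steps = [0, 1, 2, 3, 4]
    counts = [1000, 1250, 1670, 2500, 5000]
    print(predict_critical_point(steps, counts, initial_counts=1000))  # ~5.0
    ```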

  8. The precautionary principle in international environmental law and international jurisprudence

    OpenAIRE

    Tubić, Bojan

    2014-01-01

    This paper analyses the international regulation of the precautionary principle as one of the environmental principles. This principle envisages that, when there are threats of serious and irreparable harm as a consequence of certain economic activities, the lack of scientific evidence and full certainty cannot be used as a reason for postponing efficient measures for preventing environmental harm. From an economic point of view, the application of the precautionary principle is problematic, because it creates...

  9. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework … (2) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional …
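
    For orientation, a minimal random-walk Metropolis sampler is sketched below. It only shows the accept/reject structure that the extended Metropolis algorithm builds on: the prior is left implicitly flat and the proposal is a plain Gaussian step, whereas the point of the paper is to replace the proposal with a prior-preserving perturbation (sequential Gibbs resimulation from a geostatistical algorithm). The toy forward model, noise level and step size are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_likelihood(m, d_obs, forward, sigma=0.1):
        """Gaussian data misfit for a (possibly nonlinear) forward operator."""
        r = d_obs - forward(m)
        return -0.5 * np.sum(r**2) / sigma**2

    def metropolis(d_obs, forward, m0, n_iter=5000, step=0.05):
        """Minimal random-walk Metropolis sampler of the posterior (flat prior)."""
        m, ll = m0.copy(), log_likelihood(m0, d_obs, forward)
        samples = []
        for _ in range(n_iter):
            m_prop = m + step * rng.standard_normal(m.shape)   # proposal
            ll_prop = log_likelihood(m_prop, d_obs, forward)
            if np.log(rng.random()) < ll_prop - ll:            # accept/reject
                m, ll = m_prop, ll_prop
            samples.append(m.copy())
        return np.array(samples)

    # Toy nonlinear "forward model" and synthetic data for a 3-parameter model
    true_m = np.array([0.3, -0.2, 0.1])
    forward = lambda m: np.array([m[0]**2 + m[1], np.sin(m[2]) + m[0], m[1] * m[2]])
    d_obs = forward(true_m) + 0.01 * rng.standard_normal(3)
    post = metropolis(d_obs, forward, m0=np.zeros(3))
    print(post[-1000:].mean(axis=0))   # crude posterior-mean estimate
    ```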

  10. On the correspondence between quantum and classical variational principles

    International Nuclear Information System (INIS)

    Ruiz, D.E.; Dodin, I.Y.

    2015-01-01

    Classical variational principles can be deduced from quantum variational principles via formal reparameterization of the latter. It is shown that such reparameterization is possible without invoking any assumptions other than classicality and without appealing to dynamical equations. As examples, first principle variational formulations of classical point-particle and cold-fluid motion are derived from their quantum counterparts for Schrödinger, Pauli, and Klein–Gordon particles

  11. Effect of multiple circular holes Fraunhofer diffraction for the infrared optical imaging

    Science.gov (United States)

    Lu, Chunlian; Lv, He; Cao, Yang; Cai, Zhisong; Tan, Xiaojun

    2014-11-01

    With the development of infrared optics, infrared optical imaging systems play an increasingly important role in modern optical imaging. Infrared optical imaging is used in industry, agriculture, medicine, the military and transportation. However, for infrared optical imaging systems that are exposed for a long time, contamination will degrade the imaging. When contaminants settle on a lens surface of the optical system, they affect diffraction: the contaminated lens can be treated as the complement of a screen of multiple circular holes undergoing Fraunhofer diffraction, so, according to Babinet's principle, the diffraction of the imaging system can be obtained. Therefore, by studying multiple circular hole Fraunhofer diffraction, conclusions can be drawn about its effect on infrared imaging. This paper mainly studies the effect of multiple circular hole Fraunhofer diffraction on optical imaging. Firstly, we introduce the theory of Fraunhofer diffraction and the Point Spread Function. The Point Spread Function is a basic tool for evaluating the image quality of an optical system, and Fraunhofer diffraction affects the Point Spread Function. Then, the results of multiple circular hole Fraunhofer diffraction are given for different hole sizes and hole spacings. We choose hole sizes from 0.1 mm to 1 mm and hole spacings from 0.3 mm to 0.8 mm. The infrared wavebands for optical imaging are chosen from 1 μm to 5 μm. We use MATLAB to simulate the light intensity distribution of multiple circular hole Fraunhofer diffraction. Finally, three-dimensional diffraction maps of the light intensity are given for comparison.
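
    Since the Fraunhofer (far-field) intensity is proportional to the squared magnitude of the Fourier transform of the aperture, a multi-hole pattern can be simulated with a 2-D FFT. The Python/numpy sketch below is not the authors' MATLAB code; it builds a 3 × 3 array of circular holes with a hole size and spacing taken from the ranges quoted above and computes the normalised diffraction pattern (plotting is omitted).

    ```python
    import numpy as np

    # Fraunhofer far-field intensity is proportional to |FFT(aperture)|^2.
    n, pitch = 1024, 10e-6                       # grid points and sample pitch [m]
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)

    hole_radius = 0.25e-3                        # 0.5 mm diameter hole
    spacing = 0.8e-3                             # 0.8 mm centre-to-centre spacing
    centres = [(i * spacing, j * spacing) for i in (-1, 0, 1) for j in (-1, 0, 1)]

    aperture = np.zeros((n, n))
    for cx, cy in centres:                       # 3 x 3 array of circular holes
        aperture[(X - cx)**2 + (Y - cy)**2 <= hole_radius**2] = 1.0

    far_field = np.fft.fftshift(np.fft.fft2(aperture))
    intensity = np.abs(far_field)**2
    intensity /= intensity.max()                 # normalised diffraction pattern
    print(intensity.shape, intensity.max())      # (1024, 1024) 1.0
    ```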

  12. Acid dew point measurement in flue gases

    Energy Technology Data Exchange (ETDEWEB)

    Struschka, M.; Baumbach, G.

    1986-06-01

    The operation of modern boiler plants requires the continuous measurement of the acid dew point in flue gases. An existing measuring instrument was modified in such a way that it can determine acid dew points reliably, reproducibly and continuously. The authors present the mechanisms of dew point formation, the dew point measuring principle, the modification and the operational results.

  13. Multiplicity in difference geometry

    OpenAIRE

    Tomasic, Ivan

    2011-01-01

    We prove a first principle of preservation of multiplicity in difference geometry, paving the way for the development of a more general intersection theory. In particular, the fibres of a σ-finite morphism between difference curves are all of the same size, when counted with correct multiplicities.

  14. Under digital fluoroscopic guidance multiple-point injection with absolute alcohol and pinyangmycin for the treatment of superficial venous malformations

    International Nuclear Information System (INIS)

    Yang Ming; Xiao Gang; Peng Youlin

    2010-01-01

    Objective: To investigate the therapeutic efficacy of multiple-point injection with absolute alcohol and pinyangmycin under digital fluoroscopic guidance for superficial venous malformations. Methods: Using a disposable venous transfusion needle, the superficial venous malformation was punctured and the contrast medium iohexol was injected to visualize the tumor body, followed by the injection of ethanol and pinyangmycin once the needle was confirmed to be in the correct position. The procedure was successfully performed in 31 patients. The clinical results were observed and analyzed. Results: After one treatment, complete cure was achieved in 21 cases and a marked effect was obtained in 8 cases, giving a total effectiveness of 93.5%. Conclusion: Multiple-point injection with ethanol and pinyangmycin under digital fluoroscopic guidance is an effective and safe technique for the treatment of superficial venous malformations, especially for lesions that are deeply located and ill-defined. (authors)

  15. Structuring Principles for the Designer

    DEFF Research Database (Denmark)

    Miller, Thomas Dedenroth; Pedersen, Per Erik Elgård

    1998-01-01

    This paper suggests a list of structuring principles that support the designer in making alternative concepts for product architectures. Different architectures may support different points of diversification in the product life-cycle. The aim is to balance reuse of resources and reduction...

  16. A Principle of Intentionality.

    Science.gov (United States)

    Turner, Charles K

    2017-01-01

    The mainstream theories and models of the physical sciences, including neuroscience, are all consistent with the principle of causality. Wholly causal explanations make sense of how things go, but are inherently value-neutral, providing no objective basis for true beliefs being better than false beliefs, nor for it being better to intend wisely than foolishly. Dennett (1987) makes a related point in calling the brain a syntactic (procedure-based) engine. He says that you cannot get to a semantic (meaning-based) engine from there. He suggests that folk psychology revolves around an intentional stance that is independent of the causal theories of the brain, and accounts for constructs such as meanings, agency, true belief, and wise desire. Dennett proposes that the intentional stance is so powerful that it can be developed into a valid intentional theory. This article expands Dennett's model into a principle of intentionality that revolves around the construct of objective wisdom. This principle provides a structure that can account for all mental processes, and for the scientific understanding of objective value. It is suggested that science can develop a far more complete worldview with a combination of the principles of causality and intentionality than would be possible with scientific theories that are consistent with the principle of causality alone.

  17. Principle-theoretic approach of kondo and construction-theoretic formalism of gauge theories

    International Nuclear Information System (INIS)

    Jain, L.C.

    1986-01-01

    Einstein classified various theories in physics as principle-theories and constructive-theories. In this lecture Kondo's approach to microscopic and macroscopic phenomena is analysed as a principle-theoretic pursuit followed by construction. The fundamentals of his theory may be recalled as the Tristimulus principle, the Observation principle, Kawaguchi spaces, empirical information, the epistemological point of view, unitarity, intrinsicality, and dimensional analysis subject to logical and geometrical achievement. On the other hand, various physicists have evolved constructive gauge theories through the phenomenological point of view, often a collective one. Their synthetic method involves fibre bundles and connections, path integrals as well as other hypothetical structures. They lead towards clarity, completeness and adaptability.

  18. a Point-Like Picture of the Hydrogen Atom

    Science.gov (United States)

    Faghihi, F.; Jangjoo, A.; Khani, M.

    A point-like picture of the Schrödinger solution for the hydrogen atom is worked out to emphasize that "point-like particles" may be described by a "probability wave function". In each case, the three-dimensional shape of |Ψ_nlm(r_n, cos θ)|² is plotted and the paths of the point-like electron (more precisely, of the reduced mass of the particle pair) are described in each closed shell. Finally, the orbital shapes of the molecules are given according to the present simple model. In our opinion, the "interpretations of the Correspondence Principle", which is a basic principle in all elementary quantum texts, seem to need to be reviewed again!
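
    For readers who want to reproduce such plots, a small Python/scipy sketch of the hydrogen radial probability density r²·R_nl(r)² is given below, using the standard textbook normalisation with generalized Laguerre polynomials. It is independent of the authors' point-like model; the normalisation is checked numerically rather than asserted.

    ```python
    import numpy as np
    from scipy.special import genlaguerre
    from math import factorial

    def radial_wavefunction(n, l, r, a0=1.0):
        """Hydrogen radial wavefunction R_nl(r) in units of the Bohr radius a0."""
        rho = 2.0 * r / (n * a0)
        norm = np.sqrt((2.0 / (n * a0))**3
                       * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
        return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

    r = np.linspace(1e-6, 30.0, 4000)
    for n, l in [(1, 0), (2, 0), (2, 1)]:
        P = radial_wavefunction(n, l, r)**2 * r**2          # radial probability density
        print(n, l, "normalisation check:", round(np.trapz(P, r), 3))  # ~1.0
    ```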

  19. Dew point measurement technique utilizing fiber cut reflection

    Science.gov (United States)

    Kostritskii, S. M.; Dikevich, A. A.; Korkishko, Yu. N.; Fedorov, V. A.

    2009-05-01

    The fiber optical dew point hygrometer based on change of reflection coefficient for fiber cut has been developed and examined. We proposed and verified the model of condensation detector functioning principle. Experimental frost point measurements on air with different frost points have been performed.

  20. Comments on 'On a proposed new test of Heisenberg's principle'

    International Nuclear Information System (INIS)

    Home, D.; Sengupta, S.

    1981-01-01

    A logical fallacy is pointed out in Robinson's analysis (J. Phys. A.; 13:877 (1980)) of a thought experiment purporting to show violation of Heisenberg's uncertainty principle. The real problem concerning the interpretation of Heisenberg's principle is precisely stated. (author)

  1. Comments on field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1987-01-01

    It is pointed Out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...

  2. Surface Tension of Multi-phase Flow with Multiple Junctions Governed by the Variational Principle

    International Nuclear Information System (INIS)

    Matsutani, Shigeki; Nakano, Kota; Shinjo, Katsuhiko

    2011-01-01

    We explore a computational model of an incompressible fluid with a multi-phase field in three-dimensional Euclidean space. By investigating an incompressible fluid with a two-phase field geometrically, we reformulate the expression of the surface tension for the two-phase field found by Lafaurie et al. (J Comput Phys 113:134–147, 1994) as a variational problem related to an infinite dimensional Lie group, the group of volume-preserving diffeomorphisms. The variational principle applied to the action integral with the surface energy reproduces their Euler equation of the two-phase field with the surface tension. Since the surface energy of multiple interfaces, even with singularities, is in general not difficult to evaluate, and the variational formulation works for every action integral, the new formulation enables us to extend their expression to that of a multi-phase (N-phase, N ≥ 2) flow and to obtain a novel Euler equation with the surface tension of the multi-phase field. The obtained Euler equation governs the motion of the multi-phase field with different surface tension coefficients without any difficulty at the singularities of multiple junctions. In other words, we unify the theory of multi-phase fields, which expresses low-dimensional interface geometry, and the theory of incompressible fluid dynamics on the infinite dimensional geometry, as a variational problem. We apply the equation to the contact angle problems at triple junctions. We computed the fluid dynamics for a two-phase field with a wall numerically, and the results show that, for given surface tension coefficients, the contact angles are generated by the surface tension as a result of the balance between the kinetic energy and the surface energy.

  3. First-principles investigation of the energetics of point defects at a grain boundary in tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Jun; Li, Yu-Hao; Niu, Liang-Liang; Qin, Shi-Yao; Zhou, Hong-Bo, E-mail: hbzhou@buaa.edu.cn; Jin, Shuo; Zhang, Ying; Lu, Guang-Hong

    2017-02-15

    Tungsten (W) and W alloys are considered the most promising candidates for plasma facing materials in future fusion reactors. Grain boundaries (GBs) play an important role in the self-healing of irradiation defects in W. Here, we investigate the stability of point defects [vacancies and self-interstitial atoms (SIAs)] in a Σ5(3 1 0) [0 0 1] tilt W GB by calculating the energetics using a first-principles method. It is found that both the vacancy and the SIA energetically prefer to locate at sites neighboring the GB, suggesting that they can easily segregate to the GB region, with segregation energies of 1.53 eV and 7.5 eV, respectively. This can be attributed to the special atomic configuration and large available space of the GB. The effective interaction distance between the GB and the SIA is ∼6.19 Å, which is ∼2 Å larger than that of the vacancy-GB interaction, indicating that the SIA is more likely than the vacancy to reside at the GB. Further, the binding energy of di-vacancies in the W GB is much larger than that in bulk W, suggesting that vacancies energetically prefer to congregate in the GB.

  4. Scalets, wavelets and (complex) turning point quantization

    Science.gov (United States)

    Handy, C. R.; Brooks, H. A.

    2001-05-01

    Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z2x2 + gx4, the quartic potential, V(x) = x4, and the very interesting and significant non-Hermitian potential V(x) = -(ix)3, recently studied by Bender and Boettcher.

  5. Principled Missing Data Treatments.

    Science.gov (United States)

    Lang, Kyle M; Little, Todd D

    2018-04-01

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
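
    A minimal sketch of one of the two principled treatments mentioned, multiple imputation, using scikit-learn's IterativeImputer with posterior sampling and simple pooling of point estimates across imputations (full Rubin's-rules variance pooling is omitted for brevity). The toy data, missingness rate and number of imputations are assumptions, not recommendations from the article.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)

    # Toy data: outcome y depends on x1 and x2; x2 has values missing at random
    n = 300
    x1 = rng.normal(size=n)
    x2 = 0.5 * x1 + rng.normal(scale=0.8, size=n)
    y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.5, size=n)
    x2[rng.random(n) < 0.3] = np.nan
    data = np.column_stack([y, x1, x2])

    # Multiple imputation: m completed datasets drawn with posterior sampling
    m = 20
    estimates = []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=i)
        completed = imputer.fit_transform(data)
        yc = completed[:, 0]
        Xc = np.column_stack([np.ones(n), completed[:, 1:]])
        beta = np.linalg.lstsq(Xc, yc, rcond=None)[0]   # OLS fit on each imputed set
        estimates.append(beta)

    # Pool the point estimates across imputations (mean, per Rubin's rules)
    print("pooled coefficients:", np.array(estimates).mean(axis=0))
    ```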

  6. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principle covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  7. Music-evoked emotions: principles, brain correlates, and implications for therapy.

    Science.gov (United States)

    Koelsch, Stefan

    2015-03-01

    This paper describes principles underlying the evocation of emotion with music: evaluation, resonance, memory, expectancy/tension, imagination, understanding, and social functions. Each of these principles includes several subprinciples, and the framework on music-evoked emotions emerging from these principles and subprinciples is supposed to provide a starting point for a systematic, coherent, and comprehensive theory on music-evoked emotions that considers both reception and production of music, as well as the relevance of emotion-evoking principles for music therapy. © 2015 New York Academy of Sciences.

  8. Reformulation of a stochastic action principle for irregular dynamics

    International Nuclear Information System (INIS)

    Wang, Q.A.; Bangoup, S.; Dzangue, F.; Jeatsa, A.; Tsobnang, F.; Le Mehaute, A.

    2009-01-01

    A stochastic action principle for random dynamics is revisited. Numerical diffusion experiments are carried out to show that the diffusion path probability depends exponentially on the Lagrangian action A = ∫_a^b L dt. This result is then used to derive the Shannon measure for path uncertainty. It is shown that the maximum entropy principle and the least action principle of classical mechanics can be unified into δĀ = 0, where the average is calculated over all possible paths of the stochastic motion between two configuration points a and b. It is argued that this action principle and the maximum entropy principle are a consequence of the mechanical equilibrium condition extended to the case of stochastic dynamics.

  9. Babinet's principle in double-refraction systems

    Science.gov (United States)

    Ropars, Guy; Le Floch, Albert

    2014-06-01

    Babinet's principle applied to systems with double refraction is shown to involve spatial interchanges between the ordinary and extraordinary patterns observed through two complementary screens. As in the case of metamaterials, the extraordinary beam does not follow the Snell-Descartes refraction law, and the superposition principle has to be applied simultaneously at two points. Surprisingly, and contrary to intuition, in the presence of the screen with an opaque region we observe that the emerging extraordinary photon pattern, which has nevertheless undergone a deviation, remains fixed when a natural birefringent crystal is rotated, while the ordinary one rotates with the crystal. The twofold application of Babinet's principle implies intensity and polarization interchanges but also spatial and dynamic interchanges which should occur in birefringent metamaterials.

  10. The principle of proportionality revisited: interpretations and applications.

    Science.gov (United States)

    Hermerén, Göran

    2012-11-01

    The principle of proportionality is used in many different contexts. Some of these uses and contexts are first briefly indicated. This paper focusses on the use of this principle as a moral principle. I argue that under certain conditions the principle of proportionality is helpful as a guide in decision-making. But it needs to be clarified and to be used with some flexibility as a context-dependent principle. Several interpretations of the principle are distinguished, using three conditions as a starting point: importance of objective, relevance of means, and most favourable option. The principle is then tested against an example, which suggests that a fourth condition, focusing on non-excessiveness, needs to be added. I will distinguish between three main interpretations of the principle, some primarily with uses in research ethics, others with uses in other areas of bioethics, for instance in comparisons of therapeutic means and ends. The relations between the principle of proportionality and the precautionary principle are explored in the following section. It is concluded that the principles are different and may even clash. In the next section the principle of proportionality is applied to some medical examples drawn from research ethics and bioethics. In concluding, the status of the principle of proportionality as a moral principle is discussed. What has been achieved so far and what remains to be done is finally summarized.

  11. THE PRINCIPLES OF LAW. PHILOSOPHICAL APPROACH

    Directory of Open Access Journals (Sweden)

    MARIUS ANDREESCU

    2013-05-01

    Full Text Available Any scientific endeavour that has as its objective the understanding of the significance of the "principle of law" needs to have an interdisciplinary character, the basis for the approach being the philosophy of law. In this study we carry out such an analysis with the purpose of underlining the multiple theoretical significances of this concept, but also the relationship between juridical principles and norms, respectively the normative value of the principles of law. Extensive references to the philosophical and juridical doctrine on the matter are thus made. This study is a plea for referring to principles in the work of creating and applying the law. Starting from the difference between "given" and "constructed", we propose a distinction between the "metaphysical principles" outside the law, which by their content have philosophical significance, and the "constructed principles" elaborated inside the law. We emphasize the obligation of the law-maker, but also of the expert, to refer to principles in the work of legislating, interpreting and applying the law. Arguments are brought for updating, within certain limits, the justice-naturalistic concepts in the law.

  12. An Improved Quantum-Behaved Particle Swarm Optimization Method for Economic Dispatch Problems with Multiple Fuel Options and Valve-Points Effects

    Directory of Open Access Journals (Sweden)

    Hong-Yun Zhang

    2012-09-01

    Full Text Available Quantum-behaved particle swarm optimization (QPSO) is an efficient and powerful population-based optimization technique, which is inspired by the conventional particle swarm optimization (PSO) and quantum mechanics theories. In this paper, an improved QPSO named SQPSO is proposed, which combines QPSO with a selective probability operator to solve the economic dispatch (ED) problems with valve-point effects and multiple fuel options. To show the performance of the proposed SQPSO, it is tested on five standard benchmark functions and two ED benchmark problems, including a 40-unit ED problem with valve-point effects and a 10-unit ED problem with multiple fuel options. The results are compared with differential evolution (DE), particle swarm optimization (PSO) and basic QPSO, as well as a number of other methods reported in the literature, in terms of solution quality, convergence speed and robustness. The simulation results confirm that the proposed SQPSO is effective and reliable for both function optimization and ED problems.
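
    A minimal QPSO sketch (without the selective probability operator or the ED cost model) applied to a benchmark sphere function, to show the mean-best/local-attractor update that distinguishes QPSO from conventional PSO. Parameter choices such as the contraction-expansion schedule, swarm size and bounds are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def qpso(objective, dim=10, n_particles=30, n_iter=500, bounds=(-5.0, 5.0)):
        """Minimal quantum-behaved PSO (QPSO) minimiser for a benchmark function."""
        lo, hi = bounds
        X = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
        P = X.copy()                                           # personal bests
        p_fit = np.apply_along_axis(objective, 1, P)
        g = P[p_fit.argmin()].copy()                           # global best
        for t in range(n_iter):
            alpha = 1.0 - 0.5 * t / n_iter                     # contraction-expansion coeff.
            mbest = P.mean(axis=0)                             # mean of personal bests
            phi = rng.random((n_particles, dim))
            u = rng.random((n_particles, dim))
            local = phi * P + (1.0 - phi) * g                  # local attractor per particle
            sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
            X = local + sign * alpha * np.abs(mbest - X) * np.log(1.0 / (u + 1e-16))
            X = np.clip(X, lo, hi)
            fit = np.apply_along_axis(objective, 1, X)
            better = fit < p_fit
            P[better], p_fit[better] = X[better], fit[better]
            g = P[p_fit.argmin()].copy()
        return g, p_fit.min()

    sphere = lambda x: np.sum(x**2)
    best, best_val = qpso(sphere)
    print(best_val)   # should be close to 0
    ```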

  13. Optimizing the diagnostic power with gastric emptying scintigraphy at multiple time points

    Directory of Open Access Journals (Sweden)

    Gajewski Byron J

    2011-05-01

    Full Text Available Abstract Background Gastric Emptying Scintigraphy (GES) at intervals over 4 hours after a standardized radio-labeled meal is commonly regarded as the gold standard for diagnosing gastroparesis. The objectives of this study were: (1) to investigate the best time point and the best combination of multiple time points for diagnosing gastroparesis with repeated GES measures, and (2) to contrast and cross-validate Fisher's Linear Discriminant Analysis (LDA), a rank-based Distribution Free (DF) approach, and the Classification And Regression Tree (CART) model. Methods A total of 320 patients with GES measures at 1, 2, 3, and 4 hours (h) after a standard meal using a standardized method were retrospectively collected. Area under the Receiver Operating Characteristic (ROC) curve and the rate of false classification through jackknife cross-validation were used for model comparison. Results Due to strong correlation and an abnormality in data distribution, no substantial improvement in diagnostic power was found with the best linear combination by the LDA approach, even with data transformation. With the DF method, the linear combination of 4-h and 3-h increased the Area Under the Curve (AUC) and decreased the number of false classifications (0.87; 15.0%) over individual time points (0.83, 0.82; 15.6%, 25.3%, for 4-h and 3-h, respectively) at a higher sensitivity level (sensitivity = 0.9). The CART model using the 4 hourly GES measurements along with the patient's age was the most accurate diagnostic tool (AUC = 0.88, false classification = 13.8%). Patients having a 4-h gastric retention value >10% were 5 times more likely to have gastroparesis (179/207 = 86.5%) than those with ≤10% (18/113 = 15.9%). Conclusions With a mixed group of patients either referred with suspected gastroparesis or investigated for other reasons, the CART model is more robust than the LDA and DF approaches, capable of accommodating covariate effects and can be generalized for cross-institutional applications, but …
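
    The kind of analysis described (a CART classifier on hourly retention values plus age, evaluated by ROC AUC) can be sketched with scikit-learn as below. The data here are synthetic stand-ins, not the study's 320 patients, so the printed AUC values are illustrative only.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)

    # Synthetic stand-in for the study data: gastric retention (%) at 3 h and 4 h, plus age
    n = 320
    gastroparesis = rng.random(n) < 0.4
    retention_4h = np.where(gastroparesis,
                            rng.normal(25, 12, n),      # delayed emptying
                            rng.normal(5, 4, n)).clip(0, 100)
    retention_3h = (retention_4h * rng.uniform(1.2, 2.0, n)).clip(0, 100)
    age = rng.normal(45, 15, n)
    X = np.column_stack([retention_4h, retention_3h, age])

    # CART model with cross-validated probabilities, evaluated by ROC AUC
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    prob = cross_val_predict(tree, X, gastroparesis, cv=5, method="predict_proba")[:, 1]
    print("AUC (CART, synthetic data):", round(roc_auc_score(gastroparesis, prob), 3))

    # Single-attribute comparison in the spirit of the 4-h retention > 10% rule
    print("AUC (4-h retention alone):", round(roc_auc_score(gastroparesis, retention_4h), 3))
    ```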

  14. Fashion, Paper Dolls and Multiplicatives

    Science.gov (United States)

    Ura, Suzana Kaori; Stein-Barana, Alzira C. M.; Munhoz, Deisy P.

    2011-01-01

    The multiplicative principle is the tool allowing the counting of groups that can be described by a sequence of events. An event is a subset of sample space, i.e. a collection of possible outcomes, which may be equal to or smaller than the sample space as a whole. It is important that students understand this basic principle early on and know how…
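
    A tiny Python illustration of the multiplication principle with the paper-doll example: counting outfits both by enumerating the sequence of choices and by simply multiplying the number of options at each step. The wardrobe items are invented for the example.

    ```python
    from itertools import product

    # Multiplication principle: a sequence of independent choices multiplies.
    # Dressing a paper doll with 4 tops, 3 skirts and 2 pairs of shoes:
    tops = ["T1", "T2", "T3", "T4"]
    skirts = ["S1", "S2", "S3"]
    shoes = ["shoes A", "shoes B"]

    outfits = list(product(tops, skirts, shoes))        # enumerate every outfit
    print(len(outfits))                                 # 24
    print(len(tops) * len(skirts) * len(shoes))         # 24, without enumeration
    ```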

  15. Quantum mechanics and the equivalence principle

    International Nuclear Information System (INIS)

    Davies, P C W

    2004-01-01

    A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics

  16. Enhancing the Therapy Experience Using Principles of Video Game Design.

    Science.gov (United States)

    Folkins, John Wm; Brackenbury, Tim; Krause, Miriam; Haviland, Allison

    2016-02-01

    This article considers the potential benefits that applying design principles from contemporary video games may have on enhancing therapy experiences. Six principles of video game design are presented, and their relevance for enriching clinical experiences is discussed. The motivational and learning benefits of each design principle have been discussed in the education literature as having positive impacts on student motivation and learning and are related here to aspects of clinical practice. The essential experience principle suggests connecting all aspects of the experience around a central emotion or cognitive connection. The discovery principle promotes indirect learning in focused environments. The risk-taking principle addresses the uncertainties clients face when attempting newly learned skills in novel situations. The generalization principle encourages multiple opportunities for skill transfer. The reward system principle directly relates to the scaffolding of frequent and varied feedback in treatment. Last, the identity principle can assist clients in using their newly learned communication skills to redefine self-perceptions. These principles highlight areas for research and interventions that may be used to reinforce or advance current practice.

  17. Acid dew point measurements in combustion gases using the dew point measuring system AH 85100

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1984-01-01

    Measuring system for continuous monitoring of the SO₂/SO₃ dew point in the flue gas, characterized by a low failure rate, applicability inside the flue gas duct, maintenance-free continuous operation, and self-cleaning. The measuring principle is the cooling of the sensor element down to the 'onset condensation' message. Sensor surface temperatures are listed and evaluated as flue gas dew point temperatures. The measuring system is described. (DOMA).

  18. Clinical teachers' tacit knowledge of basic pedagogic principles.

    Science.gov (United States)

    McLeod, P J; Meagher, T; Steinert, Y; Schuwirth, L; McLeod, A H

    2004-02-01

    Academic faculty members in medical schools rarely receive formal instruction in basic pedagogic principles; nevertheless many develop into competent teachers. Perhaps they acquire tacit knowledge of these principles with teaching experience. This study was designed to assess clinical teachers' tacit knowledge of basic pedagogic principles and concepts. The authors developed a multiple-choice question (MCQ) exam based on 20 pedagogic principles judged by a panel of education experts to be important for clinical teaching. Three groups of clinician-educators sat the test: (1) clinicians with advanced education training and experience; (2) internal medicine specialists; (3) surgical specialists. All groups of clinician-educators passed the test, indicating that they possess a reasonable tacit knowledge of basic pedagogic principles. Those with advanced education training performed much better than members of the other two groups, while specialists and residents working in teaching hospitals outperformed specialists from non-teaching hospitals. It is possible that converting this tacit knowledge to explicit knowledge may improve individual teaching effectiveness.

  19. Systems near a critical point under multiplicative noise and the concept of effective potential

    Science.gov (United States)

    Shapiro, V. E.

    1993-07-01

    This paper presents a general approach to and elucidates the main features of the effective potential, friction, and diffusion exerted by systems near a critical point due to the nonlinear influence of noise. The model is that of a general many-dimensional system of coupled nonlinear oscillators of finite damping under frequently alternating influences, multiplicative or additive, and arbitrary form of the power spectrum, provided the time scales of the system's drift due to noise are large compared to the scales of unperturbed relaxation behavior. The conventional statistical approach and the widespread deterministic effective potential concept rely on assumptions about a small parameter which are particular cases of those considered here. We show close correspondence between the asymptotic methods of these approaches and base the analysis on this. The results include an analytical treatment of the system's long-time behavior as a function of the noise, covering the whole range of its table- and bell-shaped spectra, from the monochromatic limit to white noise. The trend is considered both in the coordinate-momentum space and in the coordinate space of the system. Particular attention is paid to the stabilization behavior forced by multiplicative noise. Intermittency, in a broad area of the control parameter space, is shown to be an intrinsic feature of these phenomena.

  20. Cosmological implications of Heisenberg's principle

    CERN Document Server

    Gonzalo, Julio A

    2015-01-01

    The aim of this book is to analyze the all-important implications of Heisenberg's Uncertainty Principle for a finite universe with very large mass-energy content such as ours. The earlier and main contributors to the formulation of Quantum Mechanics are briefly reviewed regarding the formulation of Heisenberg's Principle. After discussing “indeterminacy” versus “uncertainty”, the universal constants of physics are reviewed and Planck's units are given. Next, a novel set of units, Heisenberg–Lemaitre units, are defined in terms of the large finite mass of the universe. With the help of Heisenberg's principle, the time evolution of the finite zero-point energy for the universe is investigated quantitatively. Next, taking advantage of the rigorous solutions of Einstein's cosmological equation for a flat, open and mixed universe of finite mass, the most recent and accurate data on the “age” (t₀) and the expansion rate (H₀) of the universe and their implications are reconsidered.

  1. Federal High Performance and Sustainable Buildings: Guiding Principles for the Laboratory Support Building (LSB)

    Energy Technology Data Exchange (ETDEWEB)

    Pope, Jason E.

    2014-09-01

    This report documents the federal Guiding Principles conformance effort for LSB at PNNL. The effort is part of continued progress toward a campus building inventory that is 100% compliant with the Guiding Principles. The report documentation provides a narrative of how the LSB complies with each of the Guiding Principles requirements. These narratives draw from the many sources that are explained in the text and rely on extensive data collection. The descriptions point to each of these sources, providing the reader with specific policies, procedures, and data points.

  2. "Essential Principles of Economics:" A Hypermedia Textbook.

    Science.gov (United States)

    McCain, Roger A.

    2000-01-01

    Discusses an electronic textbook called "Essential Principles of Economics." Explains that economic concepts are found by following links from the table of contents, while each chapter includes both expository information and interactive material including online multiple-choice drill questions. States that the textbook is a "work…

  3. Principle of progressive (gradual use of contractual remedies

    Directory of Open Access Journals (Sweden)

    Bazil OGLINDĂ

    2014-12-01

    Full Text Available In this study, we intend to answer the question of whether, in modern contract law in general, and in Romanian contract law in particular, the creditor may resort almost discretionarily to remedies (contractual sanctions such as termination or rescission) without it being objected that he should have resorted to other, more appropriate remedies. In order to answer this question, we find it extremely useful to define the term contractual remedy and to analyse the correlation of this principle with other principles of modern contract law. Also, last but not least, we intend to define the principle of progressive (gradual) use of contractual remedies and to detail the vocation (legal nature) of this principle in modern contract law, having as a starting point the provisions of the new Romanian Civil Code.

  4. How Many Principles for Public Health Ethics?

    Science.gov (United States)

    Coughlin, Steven S.

    2009-01-01

    General moral (ethical) principles play a prominent role in certain methods of moral reasoning and ethical decision-making in bioethics and public health. Examples include the principles of respect for autonomy, beneficence, nonmaleficence, and justice. Some accounts of ethics in public health have pointed to additional principles related to social and environmental concerns, such as the precautionary principle and principles of solidarity or social cohesion. This article provides an overview of principle-based methods of moral reasoning as they apply to public health ethics including a summary of advantages and disadvantages of methods of moral reasoning that rely upon general principles of moral reasoning. Drawing upon the literature on public health ethics, examples are provided of additional principles, obligations, and rules that may be useful for analyzing complex ethical issues in public health. A framework is outlined that takes into consideration the interplay of ethical principles and rules at individual, community, national, and global levels. Concepts such as the precautionary principle and solidarity are shown to be useful to public health ethics to the extent that they can be shown to provide worthwhile guidance and information above and beyond principles of beneficence, nonmaleficence, and justice, and the clusters of rules and maxims that are linked to these moral principles. Future directions likely to be productive include further work on areas of public health ethics such as public trust, community empowerment, the rights of individuals who are targeted (or not targeted) by public health interventions, individual and community resilience and wellbeing, and further clarification of principles, obligations, and rules in public health disciplines such as environmental science, prevention and control of chronic and infectious diseases, genomics, and global health. PMID:20072707

  5. Fundamental principles of nanostructures and multiple exciton generation effect in quantum dots

    International Nuclear Information System (INIS)

    Turaeva, N.; Oksengendler, B.; Rashidova, S.

    2011-01-01

    In this work the theoretical aspects of the effect of multiple exciton generation (MEG) in QDs have been studied. A statistical theory of multiple exciton generation in quantum dots is presented, based on the Fermi approach to the problem of multiple generation of elementary particles in nucleon-nucleon collisions. Our calculations show that the quantum efficiencies of multiple exciton generation in various quantum dots upon absorption of a single photon are in good agreement with the experimental data. The microscopic mechanism of this effect is based on the theory of electronic 'shaking'. The deviation of the averaged multiplicity of the MEG effect from the Poisson law of fluctuations has been investigated. Besides, the role of interface electronic states of the quantum dot and ligand has been considered by means of quantum mechanics. The quantum dot size has been optimized to obtain the maximum multiplicity of the MEG effect. (authors)

  6. Design and Integration of an All-Magnetic Attitude Control System for FASTSAT-HSV01's Multiple Pointing Objectives

    Science.gov (United States)

    DeKock, Brandon; Sanders, Devon; Vanzwieten, Tannen; Capo-Lugo, Pedro

    2011-01-01

    The FASTSAT-HSV01 spacecraft is a microsatellite with magnetic torque rods as its sole attitude control actuator. FASTSAT's multiple payloads and mission functions require the Attitude Control System (ACS) to maintain Local Vertical Local Horizontal (LVLH)-referenced attitudes without spin-stabilization, while keeping the pointing errors for some attitudes significantly smaller than the previous best demonstrated for this type of control system. The mission requires the ACS to hold multiple stable, unstable, and non-equilibrium attitudes, as well as eject a 3U CubeSat from an onboard P-POD and recover from the ensuing tumble. This paper describes the Attitude Control System, the reasons for design choices, how the ACS integrates with the rest of the spacecraft, and gives recommendations for potential future applications of the work.
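
    FASTSAT's actual control laws are not given in the abstract; as a generic illustration of magnetic-only actuation, the sketch below implements the classic B-dot detumbling law often used to remove tumble rates with torque rods alone, which is relevant to the post-ejection recovery mentioned above. The gain, dipole limit and magnetometer samples are invented.

    ```python
    import numpy as np

    def bdot_dipole(b_body, b_body_prev, dt, k=5e4, m_max=1.0):
        """
        Classic B-dot detumbling law for a magnetic-torquer-only spacecraft:
        command a magnetic dipole opposing the measured rate of change of the
        body-frame magnetic field, m = -k * dB/dt, saturated at the rod capacity.
        The resulting torque is tau = m x B.
        """
        b_dot = (b_body - b_body_prev) / dt          # finite-difference estimate [T/s]
        m = -k * b_dot                               # commanded dipole [A*m^2]
        return np.clip(m, -m_max, m_max)

    # One control step with made-up magnetometer samples (Tesla)
    b_prev = np.array([18e-6, -4e-6, 31e-6])
    b_now = np.array([17e-6, -3e-6, 32e-6])
    m_cmd = bdot_dipole(b_now, b_prev, dt=1.0)
    torque = np.cross(m_cmd, b_now)                  # N*m delivered by the torque rods
    print(m_cmd, torque)
    ```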

  7. Indeterminacy and the principle of need.

    Science.gov (United States)

    Herlitz, Anders

    2017-02-01

    The principle of need-the idea that resources should be allocated according to need-is often invoked in priority setting in the health care sector. In this article, I argue that a reasonable principle of need must be indeterminate, and examine three different ways that this can be dealt with: appendicizing the principle with further principles, imposing determinacy, or empowering decision makers. I argue that need must be conceptualized as a composite property composed of at least two factors: health shortfall and capacity to benefit. When one examines how the different factors relate to each other, one discovers that this is sometimes indeterminate. I illustrate this indeterminacy in this article by applying the small improvement argument. If the relation between the factors are always determinate, the comparative relation changes by a small adjustment. Yet, if two needs are dissimilar but of seemingly equal magnitude, the comparative relation does not change by a small adjustment of one of the factors. I then outline arguments in favor of each of the three strategies for dealing with indeterminacy, but also point out that all strategies have significant shortcomings. More research is needed concerning how to deal with this indeterminacy, and the most promising path seems to be to scrutinize the position of the principle of need among a plurality of relevant principles for priority setting in the health care sector.

  8. Gauge principle for hyper(para) fields

    Energy Technology Data Exchange (ETDEWEB)

    Govorkov, A.B. (Joint Inst. for Nuclear Research, Dubna (USSR))

    1983-04-01

    A special representation for parafields is considered which is based on the use of the Clifford hypernumbers. The principle of gauge invariance under hypercomplex phase transformations of parafields is formulated. A special role of quaternion hyperfields and corresponding Yang-Mills lagrangian with the gauge SO(3)-symmetry is pointed out.

  9. Reflector automatic acquisition and pointing based on auto-collimation theodolite

    Science.gov (United States)

    Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu

    2018-01-01

    An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.

  10. The Andrews’ Principles of Risk, Need, and Responsivity as Applied in Drug Abuse Treatment Programs: Meta-Analysis of Crime and Drug Use Outcomes

    Science.gov (United States)

    Prendergast, Michael L.; Pearson, Frank S.; Podus, Deborah; Hamilton, Zachary K.; Greenwell, Lisa

    2013-01-01

    Objectives The purpose of the present meta-analysis was to answer the question: Can the Andrews principles of risk, needs, and responsivity, originally developed for programs that treat offenders, be extended to programs that treat drug abusers? Methods Drawing from a dataset that included 243 independent comparisons, we conducted random-effects meta-regression and ANOVA-analog meta-analyses to test the Andrews principles by averaging crime and drug use outcomes over a diverse set of programs for drug abuse problems. Results For crime outcomes, in the meta-regressions the point estimates for each of the principles were substantial, consistent with previous studies of the Andrews principles. There was also a substantial point estimate for programs exhibiting a greater number of the principles. However, almost all of the 95% confidence intervals included the zero point. For drug use outcomes, in the meta-regressions the point estimates for each of the principles was approximately zero; however, the point estimate for programs exhibiting a greater number of the principles was somewhat positive. All of the estimates for the drug use principles had confidence intervals that included the zero point. Conclusions This study supports previous findings from primary research studies targeting the Andrews principles that those principles are effective in reducing crime outcomes, here in meta-analytic research focused on drug treatment programs. By contrast, programs that follow the principles appear to have very little effect on drug use outcomes. Primary research studies that experimentally test the Andrews principles in drug treatment programs are recommended. PMID:24058325

  11. Greatest Happiness Principle in a Complex System: Maximisation versus Driving Force

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2012-06-01

    Full Text Available From a philosophical point of view, micro-founded economic theories depart from the principle of the pursuit of the greatest happiness. From a mathematical point of view, micro-founded economic theories depart from the utility maximisation program. Though economists are aware of the serious limitations of the equilibrium analysis, they remain in that framework. We show that the maximisation principle, which implies the equilibrium hypothesis, is responsible for this impasse. We formalise the pursuit of the greatest happiness principle with the help of the driving force postulate: the volumes of activities depend on the expected wealth increase. In that case we can get rid of the equilibrium hypothesis and gain new insights into economic theory, for example, to what extent standard economic results depend on the equilibrium hypothesis.

  12. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question about the location of the points of maximum velocity when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed for the 3-D case. A new proof of the maximum principle for subsonic flow is proposed. This proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.
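
    For orientation, the property at issue in this record can be stated compactly; the following is only a sketch of the classical formulation in generic notation, not the authors' own:

```latex
% Sketch: for a steady irrotational subsonic flow in a bounded domain \Omega,
% the speed |v| attains its maximum on the boundary \partial\Omega.
\[
  \max_{\overline{\Omega}} \, |\mathbf{v}(\mathbf{x})|
    \;=\; \max_{\partial \Omega} \, |\mathbf{v}(\mathbf{x})| ,
  \qquad \text{provided } M(\mathbf{x}) < 1 \text{ throughout } \Omega .
\]
```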

  13. A multiplicity logic unit

    International Nuclear Information System (INIS)

    Bialkowski, J.; Moszynski, M.; Zagorski, A.

    1981-01-01

    The logic diagram, principle of operation and some details of the design of the multiplicity logic unit are presented. This unit was specially designed to fulfil the requirements of a multidetector arrangement for gamma-ray multiplicity measurements. The unit is equipped with 16 inputs controlled by a common coincidence gate. It delivers a linear output pulse with a height proportional to the multiplicity of coincidences, together with logic pulses corresponding to 0, 1, ... up to >= 5-fold coincidences. These last outputs are used to steer the routing unit working with the multichannel analyser. (orig.)

  14. A first-principles approach to finite temperature elastic constants

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K [Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA 16802 (United States)

    2010-06-09

    A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni3Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.

  15. A first-principles approach to finite temperature elastic constants

    International Nuclear Information System (INIS)

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K

    2010-01-01

    A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni3Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.
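
    The central assumption shared by these two records can be written compactly; the notation below is a plausible shorthand for the quasistatic approximation, not the authors' own formulas:

```latex
% Temperature enters the elastic stiffness only through the equilibrium
% volume V(T), obtained from the first-principles (phonon) free energy F.
\[
  c_{ij}(T) \;\approx\; c_{ij}\bigl(V(T),\, T{=}0\,\mathrm{K}\bigr),
  \qquad
  V(T) \;=\; \arg\min_{V} F(V, T).
\]
```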

  16. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    International Nuclear Information System (INIS)

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-01-01

    Facies models try to explain facies architectures, which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions and the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo)statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. A sequential indicator simulation (SIS) and a truncated Gaussian simulation (TGS) represented two-point geostatistical methods, and a single normal equation simulation (SNESIM) was selected as the MPS representative. The dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures present in the area are included. The TI model was founded on data acquired from modern occurrences. These analogues delivered vital information about the possible channel geometries and facies classes that are typically present in such environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow in which seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data; these extracted parts were applied to the simulation workflow as hard data.
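
    As a rough illustration of the pattern-database idea that SNESIM-type MPS algorithms rely on, the minimal sketch below scans a toy training image with a small template and counts facies occurrences per data event. The training image, template and all names are assumptions of this sketch, not the reservoir workflow described in the record:

```python
import numpy as np
from collections import defaultdict

def build_pattern_database(training_image, template_offsets):
    """Scan a 2D categorical training image with a template and count, for
    every observed data event (neighbour pattern), how often each facies
    occurs at the template centre."""
    counts = defaultdict(lambda: defaultdict(int))
    rows, cols = training_image.shape
    for i in range(rows):
        for j in range(cols):
            event = []
            inside = True
            for di, dj in template_offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    event.append(int(training_image[ni, nj]))
                else:
                    inside = False
                    break
            if inside:
                counts[tuple(event)][int(training_image[i, j])] += 1
    return counts

# Toy training image: 0 = background facies, 1 = channel facies.
ti = np.zeros((50, 50), dtype=int)
ti[20:24, :] = 1                               # one horizontal "channel"
template = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-point cross template
db = build_pattern_database(ti, template)
print(len(db), "distinct data events stored")
```

    In a full SNESIM-type workflow these stored conditional proportions would then be sampled along a random simulation path, conditioned to the hard and soft data described in the record.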

  17. Matter tensor from the Hilbert variational principle

    International Nuclear Information System (INIS)

    Pandres, D. Jr.

    1976-01-01

    We consider the Hilbert variational principle which is conventionally used to derive Einstein's equations for the source-free gravitational field. We show that at least one version of the equivalence principle suggests an alternative way of performing the variation, resulting in a different set of Einstein equations with sources automatically present. This illustrates a technique which may be applied to any theory that is derived from a variational principle and that admits a gauge group. The essential point is that, if one first imposes a gauge condition and then performs the variation, one obtains field equations with source terms which do not appear if one first performs the variation and then imposes the gauge condition. A second illustration is provided by the variational principle conventionally used to derive Maxwell's equations for the source-free electromagnetic field. If one first imposes the Lorentz gauge condition and then performs the variation, one obtains Maxwell's equations with sources present

  18. A new principle for an all-digital preamplifier and equalizer

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1987-01-01

    A new principle for an all-digital preamplifier and equalizer, to be used together with a Compact Disc player, is described. The principle makes it possible to obtain an arbitrary gain transfer function together with a linear phase. The gain can be varied 20 dB from point to point, when specified on a logarithmic frequency axis with 30 divisions from 20 Hz to 20 kHz. The deviation in the passband is a maximum of 0.3 dB. Taking advantage of the digital signal from the preamplifier, a high-efficiency power amplifier can be developed. A prototype of the preamplifier built with commercially obtainable ...

  19. A New principle for an all digital preamplifier and equalizer

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1986-01-01

    A new principle for an all digital preamplifier and equalizer, to be used together with a compact disc player, is described. The principle makes it possible to obtain an arbitrary gain transfer function together with a linear phase. The gain can be varied 20 dB from point to point, when specified on a logarithmic frequency axis with 30 divisions from 20 Hz to 20 kHz. The deviation in the passbands is at most 0.2 dB. Taking advantage of the digital signal from the preamplifier, a high-efficiency power amplifier can be developed. A prototype of the preamplifier built with commercially obtainable components has ...
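
    One way to realise the behaviour described in these two records, an arbitrary gain curve specified at 30 logarithmically spaced points with exactly linear phase, is a symmetric FIR filter. The sketch below uses SciPy purely as an illustration; the tap count, sampling rate and example gain curve are assumptions, and this is not the original 1986/87 implementation:

```python
import numpy as np
from scipy.signal import firwin2

fs = 44100.0                                                  # CD sampling rate [Hz]
freqs = np.logspace(np.log10(20.0), np.log10(20000.0), 30)    # 30 log-spaced points
gains_db = np.where(freqs < 200.0, 6.0,                       # example curve: bass boost,
                    np.where(freqs > 5000.0, -4.0, 0.0))      # treble cut
gains = 10.0 ** (gains_db / 20.0)

# firwin2 requires the band edges 0 and fs/2 to be part of the grid.
f = np.concatenate(([0.0], freqs, [fs / 2.0]))
g = np.concatenate(([gains[0]], gains, [gains[-1]]))

# An odd number of symmetric taps yields an exactly linear-phase FIR filter.
taps = firwin2(1023, f, g, fs=fs)
```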

  20. On quasistability radius of a vector trajectorial problem with a principle of optimality generalizing Pareto and lexicographic principles

    Directory of Open Access Journals (Sweden)

    Sergey E. Bukhtoyarov

    2005-05-01

    Full Text Available A multicriterion linear combinatorial problem with a parametric principle of optimality is considered. This principle is defined by a partitioning of the partial criteria into groups, with the Pareto preference relation within each group and the lexicographic preference relation between groups. Quasistability of the problem is investigated. This type of stability is a discrete analog of Hausdorff lower semi-continuity of the multiple-valued mapping that defines the choice function. A formula for the quasistability radius is derived for the case of the metric l∞. Some known results are stated as corollaries. Mathematics Subject Classification 2000: 90C05, 90C10, 90C29, 90C31.

  1. Track formation. Principles and applications

    International Nuclear Information System (INIS)

    Monnin, M.

    1978-01-01

    The principles and technical aspects of track formation in insulating solids are first described. The characteristics of dielectric track detection are discussed from the technical point of view: the nature of the detectors, the chemical treatment, the sensitivity and the environmental conditions of use. The applications are reviewed. The principle of each type of applied research is described and then the applications are listed. When used as a detector, nuclear tracks can provide valuable information in a number of fields: element content determination and mapping, imaging, radiation dosimetry, environmental studies, technological uses and miscellaneous other applications. The track-formation process can also be used for making well-defined holes; this method allows other applications which are also described. Finally, some possible future applications are mentioned. (author)

  2. Screening of point mutations by multiple SSCP analysis in the dystrophin gene

    Energy Technology Data Exchange (ETDEWEB)

    Lasa, A.; Baiget, M.; Gallano, P. [Hospital Sant Pau, Barcelona (Spain)

    1994-09-01

    Duchenne muscular dystrophy (DMD) is a lethal, X-linked neuromuscular disorder. The population frequency of DMD is one in approximately 3500 boys, of which about one third are thought to be new mutants. The DMD gene is the largest known to date, spanning over 2.3 Mb in band Xp21.2; 79 exons are transcribed into a 14 kb mRNA coding for a protein of 427 kDa which has been named dystrophin. It has been shown that about 65% of affected boys have a gene deletion with a wide variation in localization and size. The remaining affected individuals, who have no detectable deletions or duplications, probably carry more subtle mutations that are difficult to detect. These mutations occur in several different exons and seem to be unique to single patients. Their identification represents a formidable goal because of the large size and complexity of the dystrophin gene. SSCP is a very efficient method for the detection of point mutations if the parameters that affect the separation of the strands are optimized for a particular DNA fragment. Multiple SSCP allows the simultaneous study of several exons, and implies the use of different conditions because no single set of conditions will be optimal for all fragments. Seventy-eight DMD patients with no deletion or duplication in the dystrophin gene were selected for the multiple SSCP analysis. Genomic DNA from these patients was amplified using the primers described for the diagnosis procedure (muscle promoter and exons 3, 8, 12, 16, 17, 19, 32, 45, 48 and 51). We have observed different mobility shifts in bands corresponding to exons 8, 12, 43 and 51. In exons 17 and 45, altered electrophoretic patterns were found in different samples, identifying polymorphisms already described.

  3. Comment on "Current fluctuations in non-equilibrium diffusive systems: an additivity principle"

    OpenAIRE

    Sukhorukov, Eugene V.; Jordan, Andrew N.

    2004-01-01

    We point out that the "additivity principle" and "scaling hypothesis" postulated by Bodineau and Derrida in Phys. Rev. Lett. 92, 180601 (2004), follow naturally from the saddle point evaluation of a diffusive field theory.

  4. Blending the most fundamental Remote-Sensing principles (RS ...

    African Journals Online (AJOL)

    Blending the most fundamental Remote-Sensing principles (RS) with the most functional spatial knowledge (GIS) with the objective of the determination of the accident-prone palms and points (case study: Tehran-Hamadan Highway on Saveh Superhighway)

  5. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    Science.gov (United States)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.
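
    The linear part of the inversion described above can be sketched as follows. The dispersion kernel here is a crude stand-in for the analytical Gaussian model used in the paper, and all names and parameter values are illustrative assumptions of this sketch:

```python
import numpy as np

def gaussian_plume(x_rec, y_rec, x_src, y_src, u=2.0, sigma=50.0):
    """Very crude surrogate for a Gaussian dispersion kernel: concentration at a
    receptor per unit release rate (illustration only, not the paper's model)."""
    dx, dy = x_rec - x_src, y_rec - y_src
    if dx <= 0.0:                       # receptor upwind of the source sees nothing
        return 0.0
    s = sigma * np.sqrt(dx / 100.0)     # plume spread grows with downwind distance
    return np.exp(-0.5 * (dy / s) ** 2) / (2.0 * np.pi * u * s ** 2)

def estimate_strengths(receptors, measurements, candidate_sources):
    """For fixed candidate source locations, release rates follow from linear
    least squares on the source-receptor matrix A q = c."""
    A = np.array([[gaussian_plume(xr, yr, xs, ys) for (xs, ys) in candidate_sources]
                  for (xr, yr) in receptors])
    q, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return q

# Tiny synthetic demo: two true sources, six receptors downwind.
true_sources = [(0.0, 0.0), (0.0, 60.0)]
true_q = np.array([3.0, 1.5])
receptors = [(200.0, y) for y in (-40.0, 0.0, 30.0, 60.0, 90.0, 140.0)]
A_true = np.array([[gaussian_plume(xr, yr, xs, ys) for (xs, ys) in true_sources]
                   for (xr, yr) in receptors])
measurements = A_true @ true_q
print(estimate_strengths(receptors, measurements, true_sources))   # ~ [3.0, 1.5]
```

    In the spirit of the paper, the source locations themselves could then be obtained without any initial guess by scanning candidate positions and keeping the combination with the smallest residual.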

  6. Magnetospheric Response Associated With Multiple Atmospheric Reflections of Precipitated Electrons in Aurora.

    Science.gov (United States)

    Khazanov, G. V.; Merkin, V. G.; Zesta, E.; Sibeck, D. G.; Grubbs, G. A., II; Chu, M.; Wiltberger, M. J.

    2017-12-01

    The magnetosphere and ionosphere are strongly coupled by precipitating electrons during storm times. Therefore, first-principles simulations of precipitating electron fluxes are required to understand storm time variations of ionospheric conductances and related electric fields. As has been discussed by Khazanov et al. [2015-2017], the first step in such simulations is the initiation of electron precipitation from the Earth's plasma sheet via wave-particle interaction processes into both magnetically conjugate points, and the second step is to follow the multiple atmospheric reflections of the electron fluxes formed at the boundary between the ionosphere and magnetosphere of the two magnetically conjugate points. To demonstrate this effect on the global magnetospheric response, the Lyon-Fedder-Mobarry global magnetosphere model coupled with the Rice Convection Model of the inner magnetosphere has been used and run for the geomagnetic storm of 17 March 2013.

  7. IOM and DHHS meeting on making clinical practice guidelines appropriate for patients with multiple chronic conditions.

    Science.gov (United States)

    Goodman, Richard A; Boyd, Cynthia; Tinetti, Mary E; Von Kohorn, Isabelle; Parekh, Anand K; McGinnis, J Michael

    2014-01-01

    The increasing prevalence of Americans with multiple (2 or more) chronic conditions raises concerns about the appropriateness and applicability of clinical practice guidelines for patient management. Most guidelines clinicians currently rely on have been designed with a single chronic condition in mind, and many such guidelines are inattentive to issues related to comorbidities. In response to the need for guideline developers to address comorbidities in guidelines, the Department of Health and Human Services convened a meeting in May 2012 in partnership with the Institute of Medicine to identify principles and action options. Eleven principles to improve guidelines' attentiveness to the population with multiple chronic conditions were identified during the meeting. They are grouped into 3 interrelated categories: (1) principles intended to improve the stakeholder technical process for developing guidelines; (2) principles intended to strengthen content of guidelines in terms of multiple chronic conditions; and (3) principles intended to increase focus on patient-centered care. This meeting built upon previously recommended actions by identifying additional principles and options for government, guideline developers, and others to use in strengthening the applicability of clinical practice guidelines to the growing population of people with multiple chronic conditions. The suggested principles are helping professional societies to improve guidelines' attentiveness to persons with multiple chronic conditions.

  8. Dynamic performance of maximum power point tracking circuits using sinusoidal extremum seeking control for photovoltaic generation

    Science.gov (United States)

    Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.

    2011-04-01

    The article studies the dynamic performance of a family of maximum power point tracking circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique which can be considered as a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point at its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight about damping degree and settlement time of maximum-seeking waveforms. This article shows the transient waveforms in three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
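
    A minimal sketch of the sinusoidal extremum seeking loop described above (dither injection, synchronous multiplication, filtering, integration), applied to a toy power-voltage curve; all gains, frequencies and the curve itself are assumptions of this sketch, not the authors' circuit:

```python
import numpy as np

def pv_power(v):
    """Toy concave power-voltage curve with its maximum near v = 30 V
    (stands in for the real photovoltaic characteristic)."""
    return 180.0 - 0.2 * (v - 30.0) ** 2

dt = 1e-3                       # integration step [s]
a = 0.5                         # dither amplitude [V]
w = 2 * np.pi * 50.0            # dither frequency [rad/s]
w_f = 2 * np.pi * 5.0           # washout / low-pass cut-off [rad/s]
k = 40.0                        # integrator gain
v_hat = 20.0                    # initial operating voltage
p_avg = pv_power(v_hat)         # slow estimate of the mean power (washout state)
grad = 0.0                      # low-pass filtered gradient estimate

for n in range(30000):          # 30 s of simulated operation
    t = n * dt
    dither = a * np.sin(w * t)
    p = pv_power(v_hat + dither)             # power at the perturbed operating point
    p_avg += dt * w_f * (p - p_avg)          # remove the DC component of the power
    demod = (p - p_avg) * np.sin(w * t)      # synchronous multiplication
    grad += dt * w_f * (demod - grad)        # low-pass filter -> roughly dP/dv * a/2
    v_hat += dt * k * grad                   # drive the operating point uphill

print(round(v_hat, 1))          # settles close to the 30 V maximum
```

    As the record explains, it is the filtering stage (here the two first-order filters) that sets the damping and settling time of the maximum-seeking transient.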

  9. A connection between the Uncertainty Principles on the real line and on the circle

    OpenAIRE

    Andersen, Nils Byrial

    2013-01-01

    The purpose of this short note is to exhibit a new connection between the Heisenberg Uncertainty Principle on the line and the Breitenberger Uncertainty Principle on the circle, by considering the commutator of the multiplication and difference operators on Bernstein functions.

  10. The holographic principle, the equipartition of energy and Newton’s gravity

    Science.gov (United States)

    Sadiq, M.

    2017-12-01

    Assuming the equipartition of energy to hold on a holographic sphere, Erik Verlinde demonstrated that Newton’s gravity follows as an entropic force. Some comments are in order about Verlinde’s assumptions in his derivation. It is pointed out that the holographic principle allows for freedom up to a free scale factor in the choice of the Planck scale area while leading to classical gravity. The similarity of this free parameter to the Immirzi parameter of loop quantum gravity is discussed. We point out that the equipartition of energy is inbuilt into the holographic principle and, therefore, need not be assumed from the outset.

  11. Principles of a new treatment algorithm in multiple sclerosis

    DEFF Research Database (Denmark)

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being...

  12. THE DEVELOPMENT OF AN INSTRUMENT FOR MEASURING THE UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES.

    Science.gov (United States)

    MCCORMICK, FLOYD G.

    The purpose of the study was to develop an instrument for measuring profit-maximizing principles in farm management, with implications for vocational agriculture. Principles were identified from literature selected by agricultural economists. Forty-five multiple-choice questions were refined on the basis of results of three pretests and…

  13. Exact thermodynamic principles for dynamic order existence and evolution in chaos

    International Nuclear Information System (INIS)

    Mahulikar, Shripad P.; Herwig, Heinz

    2009-01-01

    The negentropy first proposed by Schroedinger is re-examined, and its conceptual and mathematical definitions are introduced. This re-definition of negentropy integrates Schroedinger's intention in introducing it with the subsequent diverse notions in the literature. This negentropy is further corroborated by its ability to state two exact thermodynamic principles: the negentropy principle for dynamic order existence and the principle of maximum negentropy production (PMNEP) for dynamic order evolution. These principles are the counterparts of the existing entropy principle and the law of maximum entropy production, respectively. The PMNEP encompasses the basic concepts in the evolution postulates of Darwin and de Vries. Perspectives on dynamic order evolution in the literature point to the validity of the PMNEP as the law of evolution. These two additional principles now enable a unified explanation of order creation, existence, evolution, and destruction using thermodynamics.

  14. Principles of Fourier analysis

    CERN Document Server

    Howell, Kenneth B

    2001-01-01

    Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas. Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...

  15. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS based on vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
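
    The compression step can be pictured with the minimal sketch below, which replaces the paper's tree-structured VQ with a flat k-means codebook; the pattern size, the synthetic database and the codebook size are assumptions made only for illustration:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Toy "pattern database": 5x5 binary data events flattened to 25-vectors.
# (In MPS these would come from scanning a training image with a template.)
rng = np.random.default_rng(0)
patterns = (rng.random((10000, 25)) < 0.3).astype(float)

# Compression step: keep only k representative patterns (the codebook).
k = 64
codebook, labels = kmeans2(patterns, k, minit='++')

# At simulation time a data event is matched against the small codebook
# instead of the full database, which is where the speed-up comes from.
event = patterns[0]
nearest = int(np.argmin(((codebook - event) ** 2).sum(axis=1)))
print("event assigned to codeword", nearest)
```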

  16. Comparison of four microfinance markets from the point of view of the effectuation theory, complemented by proposed musketeer principle illustrating forces within village banks

    Directory of Open Access Journals (Sweden)

    Hes Tomáš

    2017-03-01

    Full Text Available Microfinance services are essential tools for the formalization of the shadow economy, leveraging immature entrepreneurship with external capital. Given the importance of the shadow economy for the social balance of developing countries, an answer to the question of how microfinance entities come into existence is rather essential. While the decision-taking processes leading to entrepreneurship were explained by the effectuation theory developed in the 1990s, these explanations were concerned neither with the logic of creation of microenterprises in developing countries nor with microfinance village banks. While the abovementioned theories explain the emergence of companies in developed markets, the importance of a focus on emerging markets, given the large share of human society represented by microfinance clientele, is obvious. The study extends the effectuation theory by adding the musketeer principle to the five effectuation principles proposed by Sarasvathy. Furthermore, the hitherto unconsidered relationship between social capital and effectuation-related concepts is another proposal of the paper, which describes the nature of microfinance clientele from the point of view of effectuation theory and social capital, drawing a comparison of microfinance markets in four countries: Turkey, Sierra Leone, Indonesia and Afghanistan.

  17. ‘Abstract Endangerment’, Two Harm Principles, and Two Routes to Criminalisation

    Directory of Open Access Journals (Sweden)

    R.A. Duff

    2015-12-01

    Full Text Available We need to distinguish, as theorists too often fail to distinguish, two distinct harm principles. One, the Harmful Conduct Principle, concerns the criminalisation of conduct that is itself harmful or dangerous: that principle cannot explain how we can have good reason to create offences of so-called ‘abstract endangerment’, of which many road traffic offences are good examples. We can explain such offences as those by appeal to a different harm principle, the Harm Prevention Principle. That principle, however, is a principle not of criminalisation, but of regulation: it gives us reason to regulate conduct if doing so will efficiently prevent harm, without imposing undue burdens on those whose conduct is regulated. We then have reason to criminalise violations of such regulations, not because such violations are always harmful, but if and because they are wrongful. This distinction, between two kinds of principle and two possible routes towards criminalisation, can be drawn whatever goals or values we posit as our starting points.

  18. Basic principles of fracture treatment in children.

    Science.gov (United States)

    Ömeroğlu, Hakan

    2018-04-01

    This review aims to summarize the basic treatment principles of fractures according to their types and general management principles of special conditions including physeal fractures, multiple fractures, open fractures, and pathologic fractures in children. Definition of the fracture is needed for better understanding the injury mechanism, planning a proper treatment strategy, and estimating the prognosis. As the healing process is less complicated, remodeling capacity is higher and non-union is rare, the fractures in children are commonly treated by non-surgical methods. Surgical treatment is preferred in children with multiple injuries, in open fractures, in some pathologic fractures, in fractures with coexisting vascular injuries, in fractures which have a history of failed initial conservative treatment and in fractures in which the conservative treatment has no/little value such as femur neck fractures, some physeal fractures, displaced extension and flexion type humerus supracondylar fractures, displaced humerus lateral condyle fractures, femur, tibia and forearm shaft fractures in older children and adolescents and unstable pelvis and acetabulum fractures. Most of the fractures in children can successfully be treated by non-surgical methods.

  19. THE LEVEL OF PROCESS MANAGEMENT PRINCIPLES APPLICATION IN SMEs IN THE SELECTED REGION OF THE CZECH REPUBLIC

    Directory of Open Access Journals (Sweden)

    Ladislav Rolínek

    2014-10-01

    Full Text Available This paper presents a methodology for calculating an indicator of the implementation of process management in SMEs (MPP) and analyses the use of process management principles by enterprise size. The data come from a 2011 questionnaire survey of 187 small and medium-sized enterprises operating in the South Bohemian Region of the Czech Republic. The level of process management implementation in enterprises can be determined by evaluating the application of its principles (Truneček, 2003; Rolínek et al., 2012). The designed composite indicator MPP reflects the degree of implementation of the principles of process management. MPP is the sum of the points assigned to the individual principles of process management, with a maximum score of 21. Enterprises rated 16-21 points are considered process managed, 6-15 points partially process managed, and enterprises gaining fewer than 6 points are process unmanaged. Based on this indicator, process management principles are applied most in medium-sized enterprises and least in micro-enterprises, which implies that the use of process management principles increases with the number of employees. Results were verified by a Chi-square goodness-of-fit test and a correlation coefficient.
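
    The scoring rule reported in the abstract translates directly into a small classification helper; the sketch below is only an illustration, and the function name and error handling are its own assumptions:

```python
def mpp_category(score):
    """Classify an enterprise by its MPP composite score (0-21 points)."""
    if not 0 <= score <= 21:
        raise ValueError("MPP score must lie between 0 and 21")
    if score >= 16:
        return "process managed"
    if score >= 6:
        return "partially process managed"
    return "process unmanaged"

print(mpp_category(18))   # 'process managed'
```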

  20. PRINCIPLE OF THE ELECTRONIC EDUCATIONAL ENVIRONMENT SECURITY IN THE PROFESSIONAL TRAINING OF UNIVERSITY STUDENTS

    Directory of Open Access Journals (Sweden)

    Valery G. Tylets

    2017-12-01

    Full Text Available The article considers the problem of professional training of students in an e-learning environment in accordance with the principle of security. The authors propose an essay technology with multiple difficulty levels. The description of each level of the technology demonstrates its conformity to the provisions of the principle of security. Performance was measured mainly by expert assessment and subjective scaling. The analysis of the results of the approbation of the multiple-difficulty-level essay technology in the experimental sample showed an increase in objective and subjective indicators. Positive methodological and personal effects of the introduction of the technology into the process of university education were identified, corresponding to the provisions of the principle of security. Methodical recommendations for the application of the technology were formulated.

  1. Structural phases arising from reconstructive and isostructural transitions in high-melting-point oxides under hydrostatic pressure: A first-principles study

    Science.gov (United States)

    Tian, Hao; Kuang, Xiao-Yu; Mao, Ai-Jie; Yang, Yurong; Xu, Changsong; Sayedaghaee, S. Omid; Bellaiche, L.

    2018-01-01

    High-melting-point oxides of chemical formula A B O3 with A =Ca , Sr, Ba and B =Zr , Hf are investigated as a function of hydrostatic pressure up to 200 GPa by combining first-principles calculations with a particle swarm optimization method. Ca- and Sr-based systems: (1) first undergo a reconstructive phase transition from a perovskite state to a novel structure that belongs to the post-post-perovskite family and (2) then experience an isostructural transition to a second, also new post-post-perovskite state at higher pressures, via the sudden formation of a specific out-of-plane B -O bond. In contrast, the studied Ba compounds evolve from a perovskite phase to a third novel post-post-perovskite structure via another reconstructive phase transition. The original characteristics of these three different post-post-perovskite states are emphasized. Unusual electronic properties, including significant piezochromic effects and an insulator-metal transition, are also reported and explained.

  2. Collaboration between a human group and artificial intelligence can improve prediction of multiple sclerosis course: a proof-of-principle study.

    Science.gov (United States)

    Tacchella, Andrea; Romano, Silvia; Ferraldeschi, Michela; Salvetti, Marco; Zaccaria, Andrea; Crisanti, Andrea; Grassi, Francesca

    2017-01-01

    Background: Multiple sclerosis has an extremely variable natural course. In most patients, disease starts with a relapsing-remitting (RR) phase, which proceeds to a secondary progressive (SP) form. The duration of the RR phase is hard to predict, and to date predictions on the rate of disease progression remain suboptimal. This limits the opportunity to tailor therapy on an individual patient's prognosis, in spite of the choice of several therapeutic options. Approaches to improve clinical decisions, such as collective intelligence of human groups and machine learning algorithms, are widely investigated. Methods: Medical students and a machine learning algorithm predicted the course of disease on the basis of randomly chosen clinical records of patients attending the Multiple Sclerosis service of Sant'Andrea hospital in Rome. Results: A significant improvement of predictive ability was obtained when predictions were combined with a weight that depends on the consistency of human (or algorithm) forecasts on a given clinical record. Conclusions: In this work we present proof-of-principle that human-machine hybrid predictions yield better prognoses than machine learning algorithms or groups of humans alone. To strengthen this preliminary result, we propose a crowdsourcing initiative to collect prognoses by physicians on an expanded set of patients.

  3. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  4. Use of multiple water surface flow constructed wetlands for non-point source water pollution control.

    Science.gov (United States)

    Li, Dan; Zheng, Binghui; Liu, Yan; Chu, Zhaosheng; He, Yan; Huang, Minsheng

    2018-05-02

    Multiple free water surface flow constructed wetlands (multi-FWS CWs) are a variety of conventional water treatment plants for the interception of pollutants. This review encapsulated the characteristics and applications in the field of ecological non-point source water pollution control technology. The roles of in-series design and operation parameters (hydraulic residence time, hydraulic load rate, water depth and aspect ratio, composition of influent, and plant species) for performance intensification were also analyzed, which were crucial to achieve sustainable and effective contaminants removal, especially the retention of nutrient. The mechanism study of design and operation parameters for the removal of nitrogen and phosphorus was also highlighted. Conducive perspectives for further research on optimizing its design/operation parameters and advanced technologies of ecological restoration were illustrated to possibly interpret the functions of multi-FWS CWs.

  5. Modelling a real-world buried valley system with vertical non-stationarity using multiple-point statistics

    DEFF Research Database (Denmark)

    He, Xiulan; Sonnenborg, Torben; Jørgensen, Flemming

    2017-01-01

    Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach that has the flexibility to model non-stationary systems directly was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, for simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information of the formation boundaries, the MPS partition method is able to simulate a non-stationary system ...

  6. [The ethics of principles and ethics of responsibility].

    Science.gov (United States)

    Cembrani, Fabio

    2016-01-01

    In his brief comment, the author asks whether ethics still has a practical sense in the health-care relationship. The essay points out the difference between the ethics of principles and the ethics of responsibility, supports the latter and tries to highlight its constitutive dimensions.

  7. Teaching Statistical Principles with a Roulette Simulation

    Directory of Open Access Journals (Sweden)

    Graham D Barr

    2013-03-01

    Full Text Available This paper uses the game of roulette in a simulation setting to teach students in an introductory Stats course some basic issues in theoretical and empirical probability. Using an Excel spreadsheet with embedded VBA (Visual Basic for Applications), one can simulate the empirical return and empirical standard deviation for a range of bets in Roulette over some predetermined number of plays. In particular, the paper illustrates the difference between different playing strategies by contrasting a low payout bet (say a bet on “red”) and a high payout bet (say a bet on a particular number) by considering the expected return and volatility associated with the bets. The paper includes an Excel VBA based simulation of the Roulette wheel where students can make bets and monitor the return on the bets for one play or multiple plays. In addition it includes a simulation of the casino house advantage for repeated multiple plays; that is, it allows students to see how casinos may derive a near certain return equal to the house advantage by entertaining large numbers of bets which will systematically drive the volatility of the house advantage down to zero. This simulation has been shown to be especially effective at the University of Cape Town for teaching first year Statistics students the subtler points of probability, as well as encouraging discussions around the risk-return trade-off facing gamblers. The program has also been shown to be useful for teaching students the principles of theoretical and empirical probabilities as well as an understanding of volatility.
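
    The same experiment is easy to reproduce outside Excel. The sketch below is a Python re-implementation of the idea (not the authors' VBA workbook), assuming the standard single-zero wheel probabilities:

```python
import numpy as np

def simulate_bet(win_prob, payout, plays, trials=20000, seed=1):
    """Empirical mean and standard deviation of the average per-play return
    for a unit-stake bet repeated `plays` times."""
    rng = np.random.default_rng(seed)
    wins = rng.random((trials, plays)) < win_prob
    returns = np.where(wins, payout, -1.0)        # win: +payout, lose: -1 stake
    per_trial = returns.mean(axis=1)
    return per_trial.mean(), per_trial.std()

# European wheel: 37 pockets. Both bets share the same expected return of
# -1/37 (about -2.7%), but with very different volatility.
print(simulate_bet(18 / 37, 1.0, plays=100))      # even-money bet on red
print(simulate_bet(1 / 37, 35.0, plays=100))      # straight-up bet on one number
```

    The contrast in the two printed standard deviations, against an identical expected return, is exactly the risk-return trade-off the classroom exercise is built around.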

  8. Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis

    Science.gov (United States)

    Logan, Jessica

    2017-01-01

    The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…

  9. The principles of electronic and electromechanic power conversion a systems approach

    CERN Document Server

    Ferreira, Braham

    2013-01-01

    Teaching the principles of power electronics and electromechanical power conversion through a unique top down systems approach, The Principles of Electromechanical Power Conversion takes the role and system context of power conversion functions as the starting point. Following this approach, the text defines the building blocks of the system and describes the theory of how they exchange power with each other. The authors introduce a modern, simple approach to machines, which makes the principles of field oriented control and space vector theory approachable to undergraduate students as well as

  10. An Asynchronous IEEE Floating-Point Arithmetic Unit

    Directory of Open Access Journals (Sweden)

    Joel R. Noche

    2007-12-01

    Full Text Available An asynchronous floating-point arithmetic unit is designed and tested at the transistor level using Cadence software. It uses CMOS (complementary metal oxide semiconductor) and DCVS (differential cascode voltage switch) logic in a 0.35 µm process using a 3.3 V supply voltage, with dual-rail data and single-rail control signals using four-phase handshaking. Using 17,085 transistors, the unit handles single-precision (32-bit) addition/subtraction, multiplication, division, and remainder using the IEEE 754-1985 Standard for Binary Floating-Point Arithmetic, with rounding and other operations to be handled by separate hardware or software. Division and remainder are done using a restoring subtractive algorithm; multiplication uses an additive algorithm. Exceptions are noted by flags (and not trap handlers) and the output is in single-precision. Previous work on asynchronous floating-point arithmetic units has mostly focused on single operations such as division. This is the first work to the authors' knowledge that can perform floating-point addition, multiplication, division, and remainder using a common datapath.
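
    For orientation, the IEEE 754-1985 single-precision layout that the unit operates on can be inspected in a few lines; this illustrates only the data format, not the asynchronous hardware itself:

```python
import struct

def fields_of_float32(x):
    """Split an IEEE 754 single-precision value into its sign, biased
    exponent and fraction fields."""
    bits, = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # 8-bit biased exponent (bias 127)
    fraction = bits & 0x7FFFFF          # 23-bit fraction (implicit leading 1)
    return sign, exponent, fraction

print(fields_of_float32(-6.25))         # (1, 129, 4718592)
```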

  11. Automatic dew-point temperature sensor.

    Science.gov (United States)

    Graichen, H; Rascati, R; Gonzalez, R R

    1982-06-01

    A device is described for measuring dew-point temperature and water vapor pressure in small confined areas. The method is based on the deposition of water on a cooled surface when at dew-point temperature. A small Peltier module lowers the temperature of two electrically conductive plates. At dew point the insulating gap separating the plates becomes conductive as water vapor condenses. Sensors based on this principle can be made small and rugged and can be used for measuring directly the local water vapor pressure. They may be installed within a conventional ventilated sweat capsule used for measuring water vapor loss from the skin surface. A novel application is the measurement of the water vapor pressure gradients across layers of clothing worn by an exercising subject.

  12. Cleaning Massive Sonar Point Clouds

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Larsen, Kasper Green; Mølhave, Thomas

    2010-01-01

    We consider the problem of automatically cleaning massive sonar data point clouds, that is, the problem of automatically removing noisy points that for example appear as a result of scans of (shoals of) fish, multiple reflections, scanner self-reflections, refraction in gas bubbles, and so on. We...
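
    A generic flavour of such cleaning (not the authors' algorithm) is statistical outlier removal based on neighbour distances; the sketch below, with invented seabed data, only illustrates the kind of operation involved:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_sigma=3.0):
    """Drop points whose mean distance to their k nearest neighbours is
    unusually large (a generic cleaning heuristic, not the paper's method)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)       # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + n_sigma * mean_d.std()
    return points[keep]

# Toy seabed with a few spurious "fish" echoes floating above it.
rng = np.random.default_rng(0)
seabed = np.column_stack([rng.uniform(0, 100, 5000),
                          rng.uniform(0, 100, 5000),
                          rng.normal(-40, 0.3, 5000)])
noise = np.column_stack([rng.uniform(0, 100, 20),
                         rng.uniform(0, 100, 20),
                         rng.uniform(-30, -10, 20)])
cloud = np.vstack([seabed, noise])
print(cloud.shape[0], "->", remove_outliers(cloud).shape[0], "points kept")
```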

  13. Connecting Corporate Human Rights Responsibilities and State Obligations under the UN Guiding Principles

    DEFF Research Database (Denmark)

    Buhmann, Karin

    2017-01-01

    Taking its point of departure in the UN Guiding Principles on Business and Human Rights (UNGP), this chapter discusses the complementarity between Pillar One on the State Duty to Protect and Pillar Two on the Corporate Responsibility to Respect Human Rights. It does this through HRDD and communicat ...

  14. Applying legal principles to stimulate open standards: the role of forums and consortia

    NARCIS (Netherlands)

    Hoenkamp, R.A.; Folmer, E.J.A.; Huitema, G.B

    2012-01-01

    In this paper it is argued that openness in standards raises their quality level. This study is done not only from a technical business administration point of view but also from a legal perspective. It is shown that applying legal principles, in particular the principles of Good Governance, can

  15. Testing the principle of equivalence by solar neutrinos

    International Nuclear Information System (INIS)

    Minakata, Hisakazu; Washington Univ., Seattle, WA; Nunokawa, Hiroshi; Washington Univ., Seattle, WA

    1994-04-01

    We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs a quantum mechanical phenomenon of neutrino oscillation to probe into the non-universality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the sun in the presence of flavor mixing. We point out that solar neutrino observation by the next generation water Cherenkov detectors can improve the existing bound on violation of the equivalence principle by 3-4 orders of magnitude if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem

  16. Testing the principle of equivalence by solar neutrinos

    International Nuclear Information System (INIS)

    Minakata, H.; Nunokawa, H.

    1995-01-01

    We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe into the nonuniversality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the Sun in the presence of the flavor mixing. We point out that solar neutrino observation by the next generation water Cherenkov detectors can place stringent bounds on the violation of the equivalence principle to 1 part in 10^15-10^16 if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem

  17. Meta-Analyses of Seven of NIDA’s Principles of Drug Addiction Treatment

    Science.gov (United States)

    Pearson, Frank S.; Prendergast, Michael L.; Podus, Deborah; Vazan, Peter; Greenwell, Lisa; Hamilton, Zachary

    2011-01-01

    Seven of the 13 Principles of Drug Addiction Treatment disseminated by the National Institute on Drug Abuse (NIDA) were meta-analyzed as part of the Evidence-based Principles of Treatment (EPT) project. By averaging outcomes over the diverse programs included in EPT, we found that five of the NIDA principles examined are supported: matching treatment to the client’s needs; attending to the multiple needs of clients; behavioral counseling interventions; treatment plan reassessment; and counseling to reduce risk of HIV. Two of the NIDA principles are not supported: remaining in treatment for an adequate period of time and frequency of testing for drug use. These weak effects could be the result of the principles being stated too generally to apply to the diverse interventions and programs that exist or of unmeasured moderator variables being confounded with the moderators that measured the principles. Meta-analysis should be a standard tool for developing principles of effective treatment for substance use disorders. PMID:22119178

  18. Portable biosensors and point-of-care systems

    CERN Document Server

    Kintzios, Spyridon E

    2017-01-01

    This book describes the principles, design and applications of a new generation of analytical and diagnostic biomedical devices, characterized by their very small size, ease of use, multi-analytical capabilities and speed to provide handheld and mobile point-of-care (POC) diagnostics.

  19. The Principle of Polyrepresentation: Document Representations Created by Different Agents

    Directory of Open Access Journals (Sweden)

    Dora Rubinić

    2015-03-01

    Full Text Available Abstract Purpose: The paper gives a review of literature on the principle of polyrepresentation formulated by Ingwersen in the nineties of the 20th century. The principle of polyrepresentation points out the necessity of the existence of different cognitive and functional representations of the same document, created by different agents, in order to answer different representations of users' needs. The main goal of the paper is to give an overview of the principle of polyrepresentation as well as a translation of terms into Croatian, which provides an opportunity for further development of the terminology of related areas, e.g. information retrieval, subject indexing etc. Methodology/approach: The method used in this paper is the analysis of selected research papers on the development of the principle of polyrepresentation. The literature was selected due to its importance and approach to the topic and was limited to papers mostly dealing with subject access to documents. Research limitation: The review was limited to just one aspect of the model – the representations of documents. The second part of the model – the cognitive sphere and its application in IR systems – was excluded from this paper. Originality/practical implications: The paper implies the importance of the principle of polyrepresentation in the context of current trends in subject indexing in online systems. Although there are a number of articles referring to the principle, as well as some empirical research using elements of the principle, its subject access aspect is often not included in them. This paper emphasizes the importance of the principle primarily in the context of current trends in subject indexing used in online systems (e.g. the use of subject headings or access points in online catalogues, social tagging, the different agents involved in subject indexing online, etc.). It also recommends translations of selected terms into Croatian and invites researchers to discuss

  20. Temporal Evolution of Design Principles in Engineering Systems: Analogies with Human Evolution

    DEFF Research Database (Denmark)

    Deb, Kalyanmoy; Bandaru, Sunith; Tutum, Cem Celal

    2012-01-01

    Optimization of an engineering system or component makes a series of changes in the initial random solution(s) iteratively to form the final optimal shape. When multiple conflicting objectives are considered, recent studies on innovization revealed the fact that the set of Pareto-optimal solutions portray certain common design principles. In this paper, we consider a 14-variable bi-objective design optimization of a MEMS device and identify a number of such common design principles through a recently proposed automated innovization procedure. Although these design principles are found to exist ... constructed later during optimization. Interestingly, there exists a simile between the evolution of design principles and that of human evolution. Such information about the hierarchy of key design principles should enable designers to have a deeper understanding of their problems.

  1. The Alara principle in backfitting Borssele

    International Nuclear Information System (INIS)

    Leurs, C.J.

    1998-01-01

    An extensive backfitting program, the Modifications Project, was carried out at the Borssele Nuclear Power Station. It involved sixteen modifications to technical systems. The scope of activities, and the dose rates encountered in places where work was to be performed, made it obvious from the outset that a high collective dose had to be anticipated. As a consequence, radiation protection within the project was organized in such a way that applicable radiation protection principles were applied in all phases of the project. From the point of view of radiation protection, the Modifications Project had to be subdivided into three phases, i.e., a conceptual design phase in which mainly the justification principle was applied; the engineering phase in which the Alara principle was employed; the execution phase in which management of the (internal) dose limits had to be observed in addition to the Alara principle. Throughout all project phases, radiation protection considerations and results were documented in so-called Alara reports and radiation protection checklists. As a result of the strictest possible observance of radiation protection principles in all phases of the project, a collective dose of 2505 mSv was achieved, which stands for a reduction by a factor of 4 compared to the very first estimate. In view of the scope and complex nature of the activities involved, and the radiation levels in the Borssele Nuclear Power Station, this is an excellent result. (orig.) [de

  2. Investigating lithological and geophysical relationships with applications to geological uncertainty analysis using Multiple-Point Statistical methods

    DEFF Research Database (Denmark)

    Barfod, Adrian

    The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im... ...is to improve analysis and research of the resistivity-lithology relationship and ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990s, has resulted in the collection of large amounts of borehole and geophysical data. The data have been compiled ... in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity...

  3. Wave-particle duality and Bohr's complementarity principle in quantum mechanics

    International Nuclear Information System (INIS)

    Sen, D.; Basu, A.N.; Sengupta, S.

    1995-01-01

    Interest in Bohr's complementarity principle has recently been revived, particularly because of several thought experiments and some actually performed experiments to test the validity of the mutual exclusiveness of wave and particle properties. A critical review of the situation is undertaken and it is pointed out that the problem with mutual exclusiveness arises because of some vagueness in the conventional formulation. An attempt is made to remove this vagueness by connecting the origin of mutual exclusiveness to some principles of quantum mechanics. Accordingly, it becomes obvious that contradicting the complementarity principle without contradicting quantum mechanics would be impossible. Some of the recent experiments are critically analysed. (author). 31 refs., 3 ills

  4. Multiple-output all-optical header processing technique based on two-pulse correlation principle

    NARCIS (Netherlands)

    Calabretta, N.; Liu, Y.; Waardt, de H.; Hill, M.T.; Khoe, G.D.; Dorren, H.J.S.

    2001-01-01

    A serial all-optical header processing technique based on a two-pulse correlation principle in a semiconductor laser amplifier in a loop mirror (SLALOM) configuration that can have a large number of output ports is presented. The operation is demonstrated experimentally at a 10Gbit/s Manchester

  5. A new adaptive light beam focusing principle for scanning light stimulation systems.

    Science.gov (United States)

    Bitzer, L A; Meseth, M; Benson, N; Schmechel, R

    2013-02-01

    In this article a novel principle to achieve optimal focusing conditions, i.e., the smallest possible beam diameter, for scanning light stimulation systems is presented. It is based on the following methodology: First, a reference point on a camera sensor is introduced where optimal focusing conditions are adjusted, and the distance between the light focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted using optimal focusing conditions at each measurement point on the sample surface, determined from the height difference between the camera-sensor reference and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the light source used, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. In the following, the principle is implemented using an optical beam induced current system, but basically it can be applied to any other scanning light stimulation system. Measurements demonstrating its operation are shown, using a polycrystalline silicon solar cell.
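
    The height-compensation step described above reduces to simple arithmetic: once the optic-to-surface distance giving the smallest beam diameter is known, the focus stage is offset at every scan position by the difference between the locally measured surface distance and that reference distance. The following minimal Python sketch is only an added illustration of this bookkeeping; the variable names, units and sign convention are assumptions, not taken from the article.

        def focus_offsets(d_reference, topography):
            """Per-point focus corrections for a scanning light stimulation system.

            d_reference : optic-to-surface distance (e.g. in mm) at which the smallest
                          beam diameter was found at the camera-sensor reference point.
            topography  : dict mapping (x, y) scan positions to the distance measured
                          there by the laser displacement sensor.
            Returns the axial shift to apply to the focusing optic at each position."""
            return {pos: measured - d_reference for pos, measured in topography.items()}

        # Toy example: the surface at (1, 0) is 0.12 mm closer to the optic, so the
        # optic is shifted by -0.12 mm to keep the beam waist on the surface.
        topo = {(0, 0): 35.00, (1, 0): 34.88, (2, 0): 35.05}
        print(focus_offsets(35.00, topo))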

  6. The boiling point of stratospheric aerosols.

    Science.gov (United States)

    Rosen, J. M.

    1971-01-01

    A photoelectric particle counter was used for the measurement of aerosol boiling points. The operational principle involves raising the temperature of the aerosol by vigorously heating a portion of the intake tube. At or above the boiling point, the particles disintegrate rather quickly, and a noticeable effect on the size distribution and concentration is observed. Stratospheric aerosols appear to have the same volatility as a solution of 75% sulfuric acid. Chemical analysis of the aerosols indicates that there are other substances present, but that the sulfate radical is apparently the major constituent.

  7. Early warning of climate tipping points

    Science.gov (United States)

    Lenton, Timothy M.

    2011-07-01

    A climate 'tipping point' occurs when a small change in forcing triggers a strongly nonlinear response in the internal dynamics of part of the climate system, qualitatively changing its future state. Human-induced climate change could push several large-scale 'tipping elements' past a tipping point. Candidates include irreversible melt of the Greenland ice sheet, dieback of the Amazon rainforest and shift of the West African monsoon. Recent assessments give an increased probability of future tipping events, and the corresponding impacts are estimated to be large, making them significant risks. Recent work shows that early warning of an approaching climate tipping point is possible in principle, and could have considerable value in reducing the risk that they pose.

  8. The precautionary principle in international environmental law and international jurisprudence

    Directory of Open Access Journals (Sweden)

    Tubić Bojan

    2014-01-01

    Full Text Available This paper analyses the international regulation of the precautionary principle as one of the principles of environmental law. This principle envisages that when there are threats of serious and irreparable harm as a consequence of certain economic activities, the lack of scientific evidence and full certainty cannot be used as a reason for postponing efficient measures to prevent environmental harm. From an economic point of view, the application of the precautionary principle is problematic, because it creates greater responsibility for those who create possible risks than in the previous period. The precautionary principle can be found in numerous international treaties in this field, which regulate it in a very similar manner. There is no consensus in the doctrine on whether this principle has reached the level of international customary law, because it has been interpreted differently and has not been accepted by a large number of countries in their national legislation. It represents a developing concept which consists of changing positions on the adequate roles of science, economy, politics and law in the field of environmental protection. This principle has been discussed in several cases before the International Court of Justice and the International Tribunal for the Law of the Sea.

  9. Anthropic principle in biology and radiation biology

    International Nuclear Information System (INIS)

    Akif'ev, A. P.; Degtyarev, S.V.

    1999-01-01

    It is suggested that the anthropic principle of the Universe, according to which the physical constants of the fundamental particles of matter and the laws of their interaction are such that the appearance of man and mind becomes possible and necessary, be supplemented with some biological constants added to the set of fundamental constants. With the repair of DNA as an example, it is shown how a cell controls some parameters of the Watson-Crick double helix. It is pointed out that the concept of the anthropic principle of the Universe in its full form, including biological constants, is a key to the development of a unified theory of the evolution of the Universe within the limits of scientific creationism [ru

  10. Strong quantum violation of the gravitational weak equivalence principle by a non-Gaussian wave packet

    International Nuclear Information System (INIS)

    Chowdhury, P; Majumdar, A S; Sinha, S; Home, D; Mousavi, S V; Mozaffari, M R

    2012-01-01

    The weak equivalence principle of gravity is examined at the quantum level in two ways. First, the position detection probabilities of particles described by a non-Gaussian wave packet projected upwards against gravity around the classical turning point and also around the point of initial projection are calculated. These probabilities exhibit mass dependence at both these points, thereby reflecting the quantum violation of the weak equivalence principle. Second, the mean arrival time of freely falling particles is calculated using the quantum probability current, which also turns out to be mass dependent. Such a mass dependence is shown to be enhanced by increasing the non-Gaussianity parameter of the wave packet, thus signifying a stronger violation of the weak equivalence principle through a greater departure from Gaussianity of the initial wave packet. The mass dependence of both the position detection probabilities and the mean arrival time vanishes in the limit of large mass. Thus, compatibility between the weak equivalence principle and quantum mechanics is recovered in the macroscopic limit of the latter. A selection of Bohm trajectories is exhibited to illustrate these features in the free fall case. (paper)
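
    For reference, the mean arrival time used in analyses of this kind is commonly defined through the quantum probability current J(X, t) evaluated at the detector position X; the expression below is the standard textbook form, and the paper's exact conventions may differ:

        \bar{\tau}(X) = \frac{\int_0^{\infty} t\, J(X,t)\, dt}{\int_0^{\infty} J(X,t)\, dt},
        \qquad
        J(x,t) = \frac{\hbar}{m}\,\mathrm{Im}\!\left[\psi^{*}(x,t)\,\partial_x \psi(x,t)\right].

    Since the \hbar/m prefactor cancels in the ratio, the mass dependence discussed in the abstract enters through the evolution of \psi itself (wave-packet spreading scales with \hbar/m) and disappears in the large-mass limit.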

  11. Adaptive aspirations and performance heterogeneity : Attention allocation among multiple reference points

    NARCIS (Netherlands)

    Blettner, D.P.; He, Z.; Hu, S.; Bettis, R.

    Organizations learn and adapt their aspiration levels based on reference points (prior aspiration, prior performance, and prior performance of reference groups). The relative attention that organizations allocate to these reference points impacts organizational search and strategic decisions.

  12. Physical Consequences of Mathematical Principles

    Directory of Open Access Journals (Sweden)

    Comay E.

    2009-10-01

    Full Text Available Physical consequences are derived from the following mathematical structures: the variational principle, Wigner’s classifications of the irreducible representations of the Poincaré group and the duality invariance of the homogeneous Maxwell equations. The analysis is carried out within the validity domain of special relativity. Hierarchical relations between physical theories are used. Some new results are pointed out together with their comparison with experimental data. It is also predicted that a genuine Higgs particle will not be detected.

  13. The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP

    CERN Document Server

    Gigi, Dominique; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; da Silva Gomes, Diego; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gladki, Maciej; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre; Janulis, Mindaugas; Lettrich, Michael; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius K; Morovic, Srecko; O'Dell, Vivian; Orn, Samuel Johan; Orsini, Luciano; Papakrivopoulos, Ioannis; Paus, Christoph; Petrova, Petia; Petrucci, Andrea; Pieri, Marco; Rabady, Dinyar; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Vazquez Velez, Cristina; Vougioukas, Michail; Zejdl, Petr

    2017-01-01

    In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4x10Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from operation.

  14. Demonstration of Human-Autonomy Teaming Principles

    Science.gov (United States)

    Shively, Robert Jay

    2016-01-01

    Known problems with automation include lack of mode awareness, automation brittleness, and risk of miscalibrated trust. Human-Autonomy Teaming (HAT) is essential for improving these problems. We have identified some critical components of HAT and ran a part-task study to introduce these components to a ground station that supports flight following of multiple aircraft. Our goal was to demonstrate, evaluate, and refine HAT principles. This presentation provides a brief summary of the study and initial findings.

  15. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses gamma-ray spectrum interpretation technology in nuclear logging. The principles of familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements on unsaturated ledges (where radioactive elements are distributed unevenly) are presented. On the basis of the quantitative interpretation technology for intensity gamma-logging using the deconvolution method, a new quantitative interpretation method for separating radioactive elements is presented for interpreting gamma spectrum logging. This is a point-by-point spectrum striping deconvolution technology which can give the logging data a quantitative interpretation. (authors)
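
    At its core, spectrum striping in gamma-ray spectrometry unmixes window count rates into elemental contributions through a calibration (sensitivity) matrix. The numpy sketch below illustrates only that generic unmixing step with invented calibration coefficients; it is not the point-by-point deconvolution algorithm of the paper, which additionally deconvolves the detector response along the borehole axis.

        import numpy as np

        # Assumed calibration matrix: count rate registered in the K, U and Th energy
        # windows per unit concentration of K, U and Th (hypothetical values).
        A = np.array([[12.0, 3.5, 2.1],   # K window
                      [ 0.0, 9.0, 3.0],   # U window
                      [ 0.0, 0.4, 7.5]])  # Th window

        # Background-corrected window count rates measured at one logging point.
        counts = np.array([85.0, 41.0, 30.5])

        # "Striping" amounts to solving A @ c = counts for the concentrations c.
        concentrations = np.linalg.solve(A, counts)
        print(dict(zip(["K", "U", "Th"], concentrations.round(3))))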

  16. The scope of the LeChatelier Principle

    Science.gov (United States)

    George M., Lady; Quirk, James P.

    2007-07-01

    LeChatelier [Comptes Rendus 99 (1884) 786; Ann. Mines 13 (2) (1888) 157] showed that a physical system's “adjustment” to a disturbance to its equilibrium tended to be smaller as constraints were added to the adjustment process. Samuelson [Foundations of Economic Analysis, Harvard University Press, Cambridge, 1947] applied this result to economics in the context of the comparative statics of the actions of individual agents characterized as the solutions to optimization problems; and later (1960), extended the application of the Principle to a stable, multi-market equilibrium and the case of all commodities gross substitutes [e.g., L. Metzler, Stability of multiple markets: the hicks conditions. Econometrica 13 (1945) 277-292]. Refinements and alternative routes of derivation have appeared in the literature since then, e.g., Silberberg [The LeChatelier Principle as a corollary to a generalized envelope theorem, J. Econ. Theory 3 (1971) 146-155; A revision of comparative statics methodology in economics, or, how to do comparative statics on the back of an envelope, J. Econ. Theory 7 (1974) 159-172], Milgrom and Roberts [The LeChatelier Principle, Am. Econ. Rev. 86 (1996) 173-179], W. Suen, E. Silberberg, P. Tseng [The LeChatelier Principle: the long and the short of it, Econ. Theory 16 (2000) 471-476], and Chavas [A global analysis of constrained behavior: the LeChatelier Principle ‘in the large’, South. Econ. J. 72 (3) (2006) 627-644]. In this paper, we expand the scope of the Principle in various ways keyed to Samuelson's proposed means of testing comparative statics results (optimization, stability, and qualitative analysis). In the optimization framework, we show that the converse LeChatelier Principle also can be found in constrained optimization problems and for not initially “conjugate” sensitivities. We then show how the Principle and its converse can be found through the qualitative analysis of any linear system. In these terms, the Principle and

  17. A fixed-point farrago

    CERN Document Server

    Shapiro, Joel H

    2016-01-01

    This text provides an introduction to some of the best-known fixed-point theorems, with an emphasis on their interactions with topics in analysis. The level of exposition increases gradually throughout the book, building from a basic requirement of undergraduate proficiency to graduate-level sophistication. Appendices provide an introduction to (or refresher on) some of the prerequisite material and exercises are integrated into the text, contributing to the volume’s ability to be used as a self-contained text. Readers will find the presentation especially useful for independent study or as a supplement to a graduate course in fixed-point theory. The material is split into four parts: the first introduces the Banach Contraction-Mapping Principle and the Brouwer Fixed-Point Theorem, along with a selection of interesting applications; the second focuses on Brouwer’s theorem and its application to John Nash’s work; the third applies Brouwer’s theorem to spaces of infinite dimension; and the fourth rests ...
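
    The Banach Contraction-Mapping Principle that opens the book also underlies a very practical numerical recipe: iterating a contraction from any starting point converges to its unique fixed point. The short Python sketch below is an added illustration, not material from the book, and applies the idea to T(x) = cos(x), which is a contraction on [-1, 1].

        import math

        def fixed_point(T, x0, tol=1e-12, max_iter=1000):
            """Iterate x_{n+1} = T(x_n) until successive iterates differ by less than tol."""
            x = x0
            for _ in range(max_iter):
                x_next = T(x)
                if abs(x_next - x) < tol:
                    return x_next
                x = x_next
            raise RuntimeError("iteration did not converge")

        # |d/dx cos(x)| = |sin(x)| <= sin(1) < 1 on [-1, 1], so the contraction-mapping
        # principle guarantees a unique solution of cos(x) = x, reached by iteration.
        print(fixed_point(math.cos, x0=0.0))  # approximately 0.7390851332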

  18. A generalization of Fermat's principle for classical and quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Elsayed, Tarek A., E-mail: T.Elsayed@thphys.uni-heidelberg.de

    2014-09-12

    Highlights: • Introduces a generalized Fermat principle for many-dimensional dynamical systems. • Deals with the time taken by the system between given initial and final states. • Proposes that if the speed of the system point is constant, the time is an extremum. • Justified for the phase space of harmonic oscillators and the projective Hilbert space. • A counterexample for the motion of a charge in a magnetic field is discussed. - Abstract: The analogy between dynamics and optics had a great influence on the development of the foundations of classical and quantum mechanics. We take this analogy one step further and investigate the validity of Fermat's principle in many-dimensional spaces describing dynamical systems (i.e., the quantum Hilbert space and the classical phase and configuration space). We propose that if the notion of a metric distance is well defined in that space and the velocity of the representative point of the system is an invariant of motion, then a generalized version of Fermat's principle will hold. We substantiate this conjecture for time-independent quantum systems and for a classical system consisting of coupled harmonic oscillators. An exception to this principle is the configuration space of a charged particle in a constant magnetic field; in this case the principle is valid in a frame rotating by half the Larmor frequency, not the stationary lab frame.

  19. A generalization of Fermat's principle for classical and quantum systems

    International Nuclear Information System (INIS)

    Elsayed, Tarek A.

    2014-01-01

    Highlights: • Introduces a generalized Fermat principle for many-dimensional dynamical systems. • Deals with the time taken by the system between given initial and final states. • Proposes that if the speed of the system point is constant, the time is an extremum. • Justified for the phase space of harmonic oscillators and the projective Hilbert space. • A counterexample for the motion of a charge in a magnetic field is discussed. - Abstract: The analogy between dynamics and optics had a great influence on the development of the foundations of classical and quantum mechanics. We take this analogy one step further and investigate the validity of Fermat's principle in many-dimensional spaces describing dynamical systems (i.e., the quantum Hilbert space and the classical phase and configuration space). We propose that if the notion of a metric distance is well defined in that space and the velocity of the representative point of the system is an invariant of motion, then a generalized version of Fermat's principle will hold. We substantiate this conjecture for time-independent quantum systems and for a classical system consisting of coupled harmonic oscillators. An exception to this principle is the configuration space of a charged particle in a constant magnetic field; in this case the principle is valid in a frame rotating by half the Larmor frequency, not the stationary lab frame

  20. Using Physics Principles in the Teaching of Chemistry.

    Science.gov (United States)

    Gulden, Warren

    1996-01-01

    Presents three examples that show how students can use traditional physics principles or laws for the purpose of understanding chemistry better. Examples include Coulomb's Law and melting points, the Faraday Constant, and the Rydberg Constant. Presents a list of some other traditional topics in a chemistry course that could be enhanced by the…

  1. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Directory of Open Access Journals (Sweden)

    Yin Yanshu

    2017-12-01

    Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  2. Rotating detectors and Mach's principle

    International Nuclear Information System (INIS)

    Paola, R.D.M. de; Svaiter, N.F.

    2000-08-01

    In this work we consider a quantum version of Newton's bucket experiment in a flat spacetime: we take an Unruh-DeWitt detector in interaction with a real massless scalar field. We calculate the detector's excitation rate when it is uniformly rotating around some fixed point and the field is prepared in the Minkowski vacuum, and also when the detector is inertial and the field is in the Trocheries-Takeno vacuum state. These results are compared and the relations with Mach's principle are discussed. (author)

  3. Microsurgical principles and postoperative adhesions: lessons from the past.

    Science.gov (United States)

    Gomel, Victor; Koninckx, Philippe R

    2016-10-01

    "Microsurgery" is a set of principles developed to improve fertility surgery outcomes. These principles were developed progressively based on common sense and available evidence, under control of clinical feedback obtained with the use of second-look laparoscopy. Fertility outcome was the end point; significant improvement in fertility rates validated the concept clinically. Postoperative adhesion formation being a major cause of failure in fertility surgery, the concept of microsurgery predominantly addresses prevention of postoperative adhesions. In this concept, magnification with a microscope or laparoscope plays a minor role as technical facilitator. Not surprisingly, the principles to prevent adhesion formation are strikingly similar to our actual understanding: gentle tissue handling, avoiding desiccation, irrigation at room temperature, shielding abdominal contents from ambient air, meticulous hemostasis and lavage, avoiding foreign body contamination and infection, administration of dexamethasone postoperatively, and even the concept of keeping denuded areas separated by temporary adnexal or ovarian suspension. The actual concepts of peritoneal conditioning during surgery and use of dexamethasone and a barrier at the end of surgery thus confirm without exception the tenets of microsurgery. Although recent research helped to clarify the pathophysiology of adhesion formation, refined its prevention and the relative importance of each factor, the clinical end point of improvement of fertility rates remains demonstrated for only the microsurgical tenets as a whole. In conclusion, the principles of microsurgery remain fully valid as the cornerstones of reproductive microsurgery, whether performed by means of open access or laparoscopy. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  4. Hospital employs TQM principles to rework its evaluation system.

    Science.gov (United States)

    Burda, D

    1992-02-24

    One Kansas hospital has taken the traditional employee evaluation process--with all its performance criteria, point systems and rankings--and turned it on its head. The new system employs total quality management principles and promotes personal development, education and teamwork. And everyone gets the same raise.

  5. A projective constrained variational principle for a classical particle with spin

    International Nuclear Information System (INIS)

    Amorim, R.

    1983-01-01

    A geometric approach to variational principles with constraints is applied to obtain the equations of motion of a classical charged point particle with magnetic moment interacting with an external electromagnetic field. (Author) [pt

  6. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  7. Design principles and operating principles: the yin and yang of optimal functioning.

    Science.gov (United States)

    Voit, Eberhard O

    2003-03-01

    Metabolic engineering has as a goal the improvement of yield of desired products from microorganisms and cell lines. This goal has traditionally been approached with experimental biotechnological methods, but it is becoming increasingly popular to precede the experimental phase by a mathematical modeling step that allows objective pre-screening of possible improvement strategies. The models are either linear and represent the stoichiometry and flux distribution in pathways or they are non-linear and account for the full kinetic behavior of the pathway, which is often significantly affected by regulatory signals. Linear flux analysis is simpler and requires less input information than a full kinetic analysis, and the question arises whether the consideration of non-linearities is really necessary for devising optimal strategies for yield improvements. The article analyzes this question with a generic, representative pathway. It shows that flux split ratios, which are the key criterion for linear flux analysis, are essentially sufficient for unregulated, but not for regulated branch points. The interrelationships between regulatory design on one hand and optimal patterns of operation on the other suggest the investigation of operating principles that complement design principles, like a user's manual complements the hardwiring of electronic equipment.

  8. Food ionization: principles, nutritional aspects and detection

    International Nuclear Information System (INIS)

    Raffi, J.

    1992-01-01

    This document reviews the possible applications of ionizing radiation in the food industry, pointing out the principles of the treatment and its consequences for the nutritional value of the product. The last part gives the present status of research on the identification of irradiated foodstuffs and of the concerted action sponsored by the Community Bureau of Reference from the Commission of the European Communities

  9. Discrimination Training Reduces High Rate Social Approach Behaviors in Angelman Syndrome: Proof of Principle

    Science.gov (United States)

    Heald, M.; Allen, D.; Villa, D.; Oliver, C.

    2013-01-01

    This proof of principle study was designed to evaluate whether excessively high rates of social approach behaviors in children with Angelman syndrome (AS) can be modified using a multiple schedule design. Four children with AS were exposed to a multiple schedule arrangement, in which social reinforcement and extinction, cued using a novel…

  10. CRISTAL V2 Package: Principles and validation domain

    International Nuclear Information System (INIS)

    Gomit, Jean-Michel; Cochet, Bertrand; Leclaire, Nicolas; Carmouze, Coralie; Damian, Frederic; Entringer, Arnaud; Gagnier, Emmanuel

    2017-04-01

    The purpose of this document is to provide a comprehensive and global view of the CRISTAL V2 package. In particular, it sets out the principles of the computational approaches available to the user through four calculation 'routes': the 'multigroup Monte Carlo' route, the 'multigroup deterministic' route, the 'point-wise Monte Carlo' route, and the 'criticality standard calculation' route. (authors)

  11. Common pitfalls in statistical analysis: The perils of multiple testing

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
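
    As a concrete illustration of the problem described in this record: with 20 independent tests at alpha = 0.05, the chance of at least one false positive under the null is 1 - 0.95^20, roughly 64%. The Python sketch below is a toy example added here, not code from the article, and implements two common remedies, the Bonferroni correction and the Benjamini-Hochberg procedure.

        import numpy as np

        def bonferroni(pvals, alpha=0.05):
            """Reject H0 where p < alpha / m (controls the family-wise error rate)."""
            p = np.asarray(pvals)
            return p < alpha / p.size

        def benjamini_hochberg(pvals, alpha=0.05):
            """Reject H0 up to the largest k with p_(k) <= (k/m)*alpha (controls the FDR)."""
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            passed = p[order] <= alpha * np.arange(1, m + 1) / m
            reject = np.zeros(m, dtype=bool)
            if passed.any():
                k = np.max(np.nonzero(passed)[0])   # largest rank satisfying the criterion
                reject[order[:k + 1]] = True
            return reject

        p_values = np.array([0.001, 0.008, 0.020, 0.041, 0.30, 0.62])
        print(bonferroni(p_values))          # only the smallest p-values survive
        print(benjamini_hochberg(p_values))  # less conservative than Bonferroni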

  12. Small Area Indices of Multiple Deprivation in South Africa

    Science.gov (United States)

    Noble, Michael; Barnes, Helen; Wright, Gemma; Roberts, Benjamin

    2010-01-01

    This paper presents the Provincial Indices of Multiple Deprivation that were constructed by the authors at ward level using 2001 Census data for each of South Africa's nine provinces. The principles adopted in conceptualising the indices are described and multiple deprivation is defined as a weighted combination of discrete dimensions of…

  13. What are the core ideas behind the Precautionary Principle?

    Science.gov (United States)

    Persson, Erik

    2016-07-01

    The Precautionary Principle is both celebrated and criticized. It has become an important principle for decision making, but it is also subject to criticism. One problem that is often pointed out with the principle is that it is not clear what it actually says and how to use it. I have taken on this problem by performing an analysis of some of the most influential formulations of the principle in an attempt to identify the core ideas behind it, with the purpose of producing a formulation of the principle that is clear and practically applicable. It was found that what is called the Precautionary Principle is not a principle that tells us what to do to achieve extra precaution or how to handle situations when extra precaution is called for. Instead, it was found to be a list of circumstances that each justify extra precaution. An analysis of some of the most common and influential formulations of the Precautionary Principle identified four such circumstances: (1) When we deal with important values that tend to be systematically downplayed by traditional decision methods - such as human health and the environment. (2) When we suspect that the decision might lead to irreversible and severe consequences and the values at stake are also irreplaceable. (3) When timing is at least as important as being right. (4) When it is more important to avoid false negatives than false positives. This interpretation of the Precautionary Principle does not say anything about what kind of actions to take when extra precaution is called for, but it does provide a clear and practically useful list of circumstances that call for extra precaution and that is not subject to the most common objections to the Precautionary Principle. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Huygens-Feynman-Fresnel principle as the basis of applied optics.

    Science.gov (United States)

    Gitin, Andrey V

    2013-11-01

    The main relationships of wave optics are derived from a combination of the Huygens-Fresnel principle and the Feynman integral over all paths. The stationary-phase approximation of the wave relations gives the correspondent relations from the point of view of geometrical optics.

  15. LAW PRINCIPLES AND SOCIAL PHILOSOPHY. DOCTRINE OF THE SOCIAL CONTRACT

    Directory of Open Access Journals (Sweden)

    Marius ANDREESCU

    2017-07-01

    Full Text Available The understanding of the significance of the “principle of law” needs to have an interdisciplinary character, with the philosophy of law as the basis for the approach. In this study we carry out such an analysis with the purpose of underlining the multiple theoretical meanings of this concept, as well as the relationship between juridical principles and norms and the normative value of the principles of law. Extensive references are made to the philosophical and juridical doctrine on the matter. This study is a plea for referring to principles in the work of creating and applying the law. Starting from the difference between the “given” and the “constructed”, we propose a distinction between “metaphysical principles” outside the law, which by their content have philosophical significance, and “constructed principles” elaborated inside the law. We emphasize the obligation of the law maker, but also of the expert, to refer to principles in the work of legislation, interpretation and application of the law. Arguments are brought for updating, within certain limits, the naturalistic concepts of justice in the law.

  16. General principles of radiation protection in hospital media

    International Nuclear Information System (INIS)

    Chanteur, J.

    1993-01-01

    The principles of radiation protection set out by the ICRP in terms of justification, optimization and limitation are applicable in the hospital environment. The medical act has to be justified and, in France, it is not possible to use ionizing radiation without a prescription from a doctor. The rapid development of technology means that non-radiological techniques are now often employed instead of radiological ones, more for reasons of efficiency than of radiation protection. The second principle, optimization, means giving medical care with the minimum of ionizing radiation for the patients as well as for the operators. For the principle of limitation, which applies only to operators, the new ICRP recommendations are available, but it would be reasonable to leave most of the decision to the occupational physician, who decides whether somebody is fit to work in an exposed position. The last points concern the quality of equipment, the safety of installations and the organization of work, which are governed by laws and regulations. 3 tabs

  17. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.
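
    The projection step described above (flattening the labelled, intensity-attributed cloud onto the ground plane) can be sketched as a simple 2-D gridding operation. The numpy snippet below is a hedged illustration of that single step with invented cell size and label codes; the Marked Point Process optimization with the Multiple Birth and Death algorithm is not reproduced here.

        import numpy as np

        def project_to_ground(points, labels, intensity, cell=0.5, vehicle_label=1):
            """Rasterize labelled 3-D points (N x 3) onto a ground-plane grid, keeping
            the maximum intensity of vehicle-class points in each cell. The resulting
            map is the kind of input on which 2-D rectangles could then be fitted."""
            xy = points[:, :2]
            idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)  # cell indices
            grid = np.zeros(tuple(idx.max(axis=0) + 1))
            mask = labels == vehicle_label                             # vehicle points only
            for (i, j), value in zip(idx[mask], intensity[mask]):
                grid[i, j] = max(grid[i, j], value)
            return grid

        # Toy cloud: four points, three labelled as vehicle (label 1).
        pts = np.array([[0.1, 0.2, 0.0], [0.3, 0.4, 1.4], [2.0, 2.1, 1.5], [2.2, 2.0, 1.6]])
        print(project_to_ground(pts, np.array([0, 1, 1, 1]), np.array([5.0, 40.0, 55.0, 60.0])))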

  18. First-principles study on oxidation effects in uranium oxides and high-pressure high-temperature behavior of point defects in uranium dioxide

    Science.gov (United States)

    Geng, Hua Y.; Song, Hong X.; Jin, K.; Xiang, S. K.; Wu, Q.

    2011-11-01

    Formation Gibbs free energies of point defects and oxygen clusters in uranium dioxide at high-pressure high-temperature conditions are calculated from first principles, using the LSDA+U approach for the electronic structure and the Debye model for the lattice vibrations. The phonon contribution to Frenkel pairs is found to be notable, whereas it is negligible for the Schottky defect. Hydrostatic compression changes the formation energies drastically, making defect concentrations depend more sensitively on pressure. Calculations show that, if no oxygen clusters are considered, uranium vacancy becomes predominant in overstoichiometric UO2 with the aid of the contribution from lattice vibrations, while compression favors oxygen defects and suppresses uranium vacancy greatly. At ambient pressure, however, the experimental observation of predominant oxygen defects in this regime can be reproduced only in the form of cuboctahedral clusters, underlining the importance of defect clustering in UO2+x. Making use of the point defect model, an equation of state for nonstoichiometric oxides is established, which is then applied to describe the shock Hugoniot of UO2+x. Furthermore, the oxidization and compression behavior of uranium monoxide, triuranium octoxide, uranium trioxide, and a series of defective UO2 at 0 K are investigated. The evolution of mechanical properties and electronic structures with an increase of the oxidation degree is analyzed, revealing the transition of the ground state of uranium oxides from metallic to Mott insulator and then to charge-transfer insulator due to the interplay of strongly correlated effects of 5f orbitals and the shift of electrons from uranium to oxygen atoms.
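
    The central quantity in such calculations is the defect formation Gibbs free energy. In the usual supercell formalism it takes the generic form below, written from standard point-defect thermodynamics; the paper's specific reference states, charge-state terms and Debye vibrational contributions are not reproduced here:

        \Delta G_f(P,T) = G_{\mathrm{defect}}(P,T) - G_{\mathrm{perfect}}(P,T) - \sum_i n_i\,\mu_i(P,T),

    where n_i is the number of atoms of species i added (n_i > 0) or removed (n_i < 0) to create the defect and \mu_i is the corresponding chemical potential. The equilibrium defect concentration then scales as \exp(-\Delta G_f / k_B T), which is why the pressure and temperature dependence of \Delta G_f controls which defects dominate.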

  19. Applying bioethical principles to human biomonitoring

    Directory of Open Access Journals (Sweden)

    Harrison Myron

    2008-01-01

    Full Text Available Abstract Bioethical principles are widely used as a normative framework in areas of human research and medical care. In recent years there has been increasing formalization of their use in public health decisions. The "traditional bioethical principles" are applied in this discussion to the important issue of human biomonitoring for environmental exposures. They are: (1) Autonomy – also known as the "respect for humans" principle: people understand their own best interests; (2) Beneficence – "do good" for people; (3) Nonmaleficence – "do no harm"; (4) Justice – fair distribution of benefits and costs (including risks to health) across stakeholders. Some of the points made are: (1) There is not a single generic bioethical analysis applicable to the use of human biomonitoring data; each specific use requires a separate deliberation; (2) Using unidentified, population-based biomonitoring information for risk assessment or population surveillance raises fewer bioethical concerns than personally identified biomonitoring information such as that employed in health screening; (3) Companies should proactively apply normative bioethical principles when considering the disposition of products and by-products in the environment and humans; (4) There is a need for more engagement by scholars on the bioethical issues raised by the use of biomarkers of exposure; (5) Though our scientific knowledge of biology will continue to increase, there will always be a role for methods or frameworks to resolve substantive disagreements in the meaning of these data that are matters of belief rather than knowledge.

  20. Applying bioethical principles to human biomonitoring

    Directory of Open Access Journals (Sweden)

    Harrison Myron

    2008-06-01

    Full Text Available Abstract Bioethical principles are widely used as a normative framework in areas of human research and medical care. In recent years there has been increasing formalization of their use in public health decisions. The "traditional bioethical principles" are applied in this discussion to the important issue of human biomonitoring for environmental exposures. They are: (1) Autonomy – also known as the "respect for humans" principle: people understand their own best interests; (2) Beneficence – "do good" for people; (3) Nonmaleficence – "do no harm"; (4) Justice – fair distribution of benefits and costs (including risks to health) across stakeholders. Some of the points made are: (1) There is not a single generic bioethical analysis applicable to the use of human biomonitoring data; each specific use requires a separate deliberation; (2) Using unidentified, population-based biomonitoring information for risk assessment or population surveillance raises fewer bioethical concerns than personally identified biomonitoring information such as that employed in health screening; (3) Companies should proactively apply normative bioethical principles when considering the disposition of products and by-products in the environment and humans; (4) There is a need for more engagement by scholars on the bioethical issues raised by the use of biomarkers of exposure; (5) Though our scientific knowledge of biology will continue to increase, there will always be a role for methods or frameworks to resolve substantive disagreements in the meaning of these data that are matters of belief rather than knowledge.

  1. Stability by fixed point theory for functional differential equations

    CERN Document Server

    Burton, T A

    2006-01-01

    This book is the first general introduction to stability of ordinary and functional differential equations by means of fixed point techniques. It contains an extensive collection of new and classical examples worked in detail and presented in an elementary manner. Most of this text relies on three principles: a complete metric space, the contraction mapping principle, and an elementary variation of parameters formula. The material is highly accessible to upper-level undergraduate students in the mathematical sciences, as well as working biologists, chemists, economists, engineers, mathematicia

  2. Automatic continuous dew point measurement in combustion gases

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1986-08-01

    Low exhaust temperatures serve to minimize energy consumption in combustion systems. This requires accurate, continuous measurement of exhaust condensation. An automatic dew point meter for continuous operation is described. The principle of measurement, the design of the measuring system, and practical aspects of operation are discussed.

  3. A generalization of Fermat's principle for classical and quantum systems

    Science.gov (United States)

    Elsayed, Tarek A.

    2014-09-01

    The analogy between dynamics and optics had a great influence on the development of the foundations of classical and quantum mechanics. We take this analogy one step further and investigate the validity of Fermat's principle in many-dimensional spaces describing dynamical systems (i.e., the quantum Hilbert space and the classical phase and configuration space). We propose that if the notion of a metric distance is well defined in that space and the velocity of the representative point of the system is an invariant of motion, then a generalized version of Fermat's principle will hold. We substantiate this conjecture for time-independent quantum systems and for a classical system consisting of coupled harmonic oscillators. An exception to this principle is the configuration space of a charged particle in a constant magnetic field; in this case the principle is valid in a frame rotating by half the Larmor frequency, not the stationary lab frame.
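
    The heart of the conjecture can be written in one line: if the speed v = ds/dt of the representative point is a constant of motion, the travel time between two fixed states is proportional to the metric path length, so extremizing one extremizes the other (a paraphrase of the abstract, not a formula quoted from the paper):

        T = \int_{A}^{B} dt = \int_{A}^{B} \frac{ds}{v} = \frac{1}{v} \int_{A}^{B} ds
        \quad\Longrightarrow\quad
        \delta T = 0 \;\Longleftrightarrow\; \delta\!\int_{A}^{B} ds = 0 .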

  4. A reach of the principle of entry and the principle of reliability in the real estate cadastre in our court practice

    Directory of Open Access Journals (Sweden)

    Cvetić Radenka M.

    2015-01-01

    Full Text Available Through a review of the principle of entry and the principle of reliability in the Real Estate Cadastre and their reach in our court practice, this article indicates the indispensability of compliance with these principles for the sake of legal certainty. The formidable and complex role of the court when applying the law in order to rightfully resolve an individual case is underlined. Having regard to the accountability of the courts for the efficacy of the legal system, and without any intention to disavow the court practice, some deficiencies are pointed out with the aim of helping. The abstract manner of legal norms necessarily requires a creative role of the courts in cases which cannot be easily qualified. For that reason certain deviations ought to be made, followed by reasoning which unambiguously leads to the conclusion that only the specific decision which the court rendered is possible and just.

  5. Some Key Principles for Developing Grammar Skills

    Institute of Scientific and Technical Information of China (English)

    张威

    2008-01-01

    Grammar is sometimes defined as "the way words are put together to make correct sentences" (Ur, 2004, p.75). The aim of teaching grammar is to raise the correctness of language use and to help students transfer isolated language points into applied language use. In this essay, the author introduces two kinds of common methods used in English grammar classes, and presents some key principles of grammar teaching.

  6. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  7. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
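
    A rough way to reproduce the kind of question asked in the two records above - how many antennas are needed to meet a given outage constraint - is a Monte Carlo estimate of the outage probability of an open-loop MIMO link over i.i.d. Rayleigh fading. The Python sketch below is a simplified illustration added here (no HARQ, no spatial correlation); the rate target, SNR and antenna counts are arbitrary assumptions, not values from the paper.

        import numpy as np

        def outage_probability(n_tx, n_rx, snr_db=10.0, rate=4.0, trials=20000, seed=0):
            """P(capacity < rate) for an i.i.d. Rayleigh MIMO channel with equal power
            allocation per transmit antenna: C = log2 det(I + (SNR/n_tx) * H H^H)."""
            rng = np.random.default_rng(seed)
            snr = 10.0 ** (snr_db / 10.0)
            outages = 0
            for _ in range(trials):
                H = (rng.standard_normal((n_rx, n_tx)) +
                     1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
                G = np.eye(n_rx) + (snr / n_tx) * (H @ H.conj().T)
                outages += np.log2(np.linalg.det(G).real) < rate
            return outages / trials

        # Increase the (symmetric) antenna count until an arbitrary 1% outage target is met.
        for n in range(1, 6):
            print(n, outage_probability(n_tx=n, n_rx=n))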

  8. On Existence of Solutions to the Caputo Type Fractional Order Three-Point Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    B.M.B. Krushna

    2016-10-01

    Full Text Available In this paper, we establish the existence of solutions to the fractional order three-point boundary value problems by utilizing Banach contraction principle and Schaefer's fixed point theorem.

  9. Multiple active myofascial trigger points and pressure pain sensitivity maps in the temporalis muscle are related in women with chronic tension type headache.

    Science.gov (United States)

    Fernández-de-las-Peñas, César; Caminero, Ana B; Madeleine, Pascal; Guillem-Mesado, Amparo; Ge, Hong-You; Arendt-Nielsen, Lars; Pareja, Juan A

    2009-01-01

    To describe the common locations of active trigger points (TrPs) in the temporalis muscle and their referred pain patterns in chronic tension type headache (CTTH), and to determine if pressure sensitivity maps of this muscle can be used to describe the spatial distribution of active TrPs. Forty women with CTTH were included. An electronic pressure algometer was used to assess pressure pain thresholds (PPT) from 9 points over each temporalis muscle: 3 points in the anterior, medial and posterior parts, respectively. Both muscles were examined for the presence of active TrPs over each of the 9 points. The referred pain pattern of each active TrP was assessed. Two-way analysis of variance detected significant differences in mean PPT levels between the measurement points (F=30.3; P<0.001), but not between sides (F=2.1; P=0.2). PPT scores decreased from the posterior to the anterior column (P<0.001). No differences were found in the number of active TrPs (F=0.3; P=0.9) between the dominant side and the nondominant side. Significant differences were found in the distribution of the active TrPs (chi2=12.2; P<0.001): active TrPs were mostly found in the anterior column and in the middle of the muscle belly. The analysis of variance did not detect significant differences in the referred pain pattern between active TrPs (F=1.1, P=0.4). The topographical pressure pain sensitivity maps showed the distinct distribution of the TrPs indicated by locations with low PPTs. Multiple active TrPs in the temporalis muscle were found, particularly in the anterior column and in the middle of the muscle belly. A bilateral posterior-to-anterior decrease of PPTs in the temporalis muscle in women with CTTH was found. The locations of active TrPs in the temporalis muscle corresponded well to the muscle areas with lower PPT, supporting the relationship between multiple active muscle TrPs and topographical pressure sensitivity maps in the temporalis muscle in women with CTTH.

  10. Three-dimensional numerical study of heat transfer characteristics of plain plate fin-and-tube heat exchangers from view point of field synergy principle

    International Nuclear Information System (INIS)

    He, Y.L.; Tao, W.Q.; Song, F.Q.; Zhang, W.

    2005-01-01

    In this paper, 3-D numerical simulations were performed for the laminar heat transfer and fluid flow characteristics of a plate fin-and-tube heat exchanger. The effects of five factors were examined: Re number, fin pitch, tube row number, spanwise and longitudinal tube pitch. The Reynolds number based on the tube diameter varied from 288 to 5000, the non-dimensional fin pitch based on the tube diameter varied from 0.04 to 0.5, the tube row number from 1 to 4, the spanwise tube pitch S1/d from 1.2 to 3, and the longitudinal tube pitch S2/d from 1.0 to 2.4. The numerical results were analyzed from the viewpoint of the field synergy principle, which says that the reduction of the intersection angle between velocity and fluid temperature gradient is the basic mechanism to enhance convective heat transfer. It is found that the effects of the five parameters on the heat transfer performance of the finned tube banks can be well described by the field synergy principle, i.e., the enhancement or deterioration of the convective heat transfer across the finned tube banks is inherently related to the variation of the intersection angle between the velocity and the fluid temperature gradient. It is also recommended that to further enhance the convective heat transfer, enhancement techniques, such as slotting the fin, should be adopted mainly in the rear part of the fin where the synergy between local velocity and temperature gradient becomes worse
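
    The "intersection angle" invoked by the field synergy principle is the local angle between the velocity vector and the fluid temperature gradient; integrating the convective term over the domain shows why reducing this angle enhances heat transfer (standard statement of the principle, not an equation copied from the paper):

        \theta = \arccos\!\left(\frac{\mathbf{U}\cdot\nabla T}{|\mathbf{U}|\,|\nabla T|}\right),
        \qquad
        \int_{\Omega} \rho c_p\,\mathbf{U}\cdot\nabla T\, dV
        = \int_{\Omega} \rho c_p\,|\mathbf{U}|\,|\nabla T|\cos\theta\, dV,

    so that, for given magnitudes of velocity and temperature gradient, the integrated convective heat transfer grows as the local angles \theta decrease.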

  11. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    Science.gov (United States)

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.

  12. Variational energy principle for compressible, baroclinic flow. 2: Free-energy form of Hamilton's principle

    Science.gov (United States)

    Schmid, L. A.

    1977-01-01

    The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance, namely they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum, and can serve as the basis for a direct trial and error solution. The second order integral constraint states that the unavailable energy must be maximum at equilibrium, i.e. the fluctuations must be so correlated as to produce a second order decrease in the total unavailable energy.

  13. Principles of protein targeting to the nucleolus.

    Science.gov (United States)

    Martin, Robert M; Ter-Avetisyan, Gohar; Herce, Henry D; Ludwig, Anne K; Lättig-Tünnemann, Gisela; Cardoso, M Cristina

    2015-01-01

    The nucleolus is the hallmark of nuclear compartmentalization and has been shown to exert multiple roles in cellular metabolism besides its main function as the place of rRNA synthesis and assembly of ribosomes. Nucleolar proteins dynamically localize and accumulate in this nuclear compartment relative to the surrounding nucleoplasm. In this study, we have assessed the molecular requirements that are necessary and sufficient for the localization and accumulation of peptides and proteins inside the nucleoli of living cells. The data showed that positively charged peptide entities composed of arginines alone and with an isoelectric point at and above 12.6 are necessary and sufficient for mediating significant nucleolar accumulation. A threshold of 6 arginines is necessary for peptides to accumulate in nucleoli, but already 4 arginines are sufficient when fused within 15 amino acid residues of a nuclear localization signal of a protein. Using a pH sensitive dye, we found that the nucleolar compartment is particularly acidic when compared to the surrounding nucleoplasm and, hence, provides the ideal electrochemical environment to bind poly-arginine containing proteins. In fact, we found that oligo-arginine peptides and GFP fusions bind RNA in vitro. Consistent with RNA being the main binding partner for arginines in the nucleolus, we found that the same principles apply to cells from insects to man, indicating that this mechanism is highly conserved throughout evolution.

  14. Linking data repositories - an illustration of agile data curation principles through robust documentation and multiple application programming interfaces

    Science.gov (United States)

    Benedict, K. K.; Servilla, M. S.; Vanderbilt, K.; Wheeler, J.

    2015-12-01

    The growing volume, variety and velocity of production of Earth science data magnifies the impact of inefficiencies in data acquisition, processing, analysis, and sharing workflows, potentially to the point of impairing the ability of researchers to accomplish their desired scientific objectives. The adaptation of agile software development principles (http://agilemanifesto.org/principles.html) to data curation processes has significant potential to lower barriers to effective scientific data discovery and reuse - barriers that otherwise may force the development of new data to replace existing but unusable data, or require substantial effort to make data usable in new research contexts. This paper outlines a data curation process that was developed at the University of New Mexico that provides a cross-walk of data and associated documentation between the data archive developed by the Long Term Ecological Research (LTER) Network Office (PASTA - http://lno.lternet.edu/content/network-information-system) and UNM's institutional repository (LoboVault - http://repository.unm.edu). The developed automated workflow enables the replication of versioned data objects and their associated standards-based metadata between the LTER system and LoboVault - providing long-term preservation for those data/metadata packages within LoboVault while maintaining the value-added services that the PASTA platform provides. The relative ease with which this workflow was developed is a product of the capabilities independently developed on both platforms - including the simplicity of providing a well-documented application programming interface (API) for each platform enabling scripted interaction and the use of well-established documentation standards (EML in the case of PASTA, Dublin Core in the case of LoboVault) by both systems. These system characteristics, when combined with an iterative process of interaction between the Data Curation Librarian (on the LoboVault side of the process
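    The record describes a scripted cross-walk between two repository APIs. The Python sketch below shows only the general shape of such a workflow: the endpoint URLs, field names, and authentication scheme are placeholders invented for illustration and do not reflect the real PASTA or LoboVault (DSpace) APIs.

```python
import requests

# Placeholder endpoints: purely illustrative, not the real PASTA or LoboVault APIs.
SOURCE_PACKAGE_URL = "https://example.org/source-repo/package/{pid}"
TARGET_DEPOSIT_URL = "https://example.org/target-repo/deposit"


def replicate_package(pid, api_token):
    """Fetch a versioned data package plus its metadata from one repository
    and deposit both in another, mirroring the cross-walk workflow described
    above; all specifics here are hypothetical."""
    meta = requests.get(SOURCE_PACKAGE_URL.format(pid=pid) + "/metadata").json()
    data = requests.get(SOURCE_PACKAGE_URL.format(pid=pid) + "/data").content
    crosswalked = {  # e.g. an EML-to-Dublin-Core field mapping would go here
        "title": meta.get("title"),
        "creator": meta.get("creator"),
        "identifier": pid,
    }
    resp = requests.post(
        TARGET_DEPOSIT_URL,
        headers={"Authorization": f"Bearer {api_token}"},
        data=crosswalked,
        files={"package": (f"{pid}.zip", data)},
    )
    resp.raise_for_status()
    return resp.json()
```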

  15. Electrical and electronic principles and technology

    CERN Document Server

    John Bird

    2013-01-01

    This much-loved textbook introduces electrical and electronic principles and technology to students who are new to the subject. Real-world situations and engineering examples put the theory into context. The inclusion of worked problems with solutions really helps to aid your understanding, and further problems then allow you to test and confirm you have mastered each subject. In total the book contains 410 worked problems, 540 further problems, 340 multiple-choice questions, 455 short-answer questions, and 7 revision tests with answers online. This is an ideal text for vocational courses enabling a s

  16. Multiple mononeuropathy

    Science.gov (United States)

    ... with multiple mononeuropathy are prone to new nerve injuries at pressure points such as the knees and elbows. They should avoid putting pressure on these areas, for example, by not leaning on the elbows, crossing the knees, ...

  17. Association of a novel point mutation in MSH2 gene with familial multiple primary cancers

    Directory of Open Access Journals (Sweden)

    Hai Hu

    2017-10-01

    Full Text Available Abstract Background Multiple primary cancers (MPC have been identified as two or more cancers without any subordinate relationship that occur either simultaneously or metachronously in the same or different organs of an individual. Lynch syndrome is an autosomal dominant genetic disorder that increases the risk of many types of cancers. Lynch syndrome patients who suffer more than two cancers can also be considered as MPC; patients of this kind provide unique resources to learn how genetic mutation causes MPC in different tissues. Methods We performed a whole genome sequencing on blood cells and two tumor samples of a Lynch syndrome patient who was diagnosed with five primary cancers. The mutational landscape of the tumors, including somatic point mutations and copy number alternations, was characterized. We also compared Lynch syndrome with sporadic cancers and proposed a model to illustrate the mutational process by which Lynch syndrome progresses to MPC. Results We revealed a novel pathologic mutation on the MSH2 gene (G504 splicing that associates with Lynch syndrome. Systematical comparison of the mutation landscape revealed that multiple cancers in the proband were evolutionarily independent. Integrative analysis showed that truncating mutations of DNA mismatch repair (MMR genes were significantly enriched in the patient. A mutation progress model that included germline mutations of MMR genes, double hits of MMR system, mutations in tissue-specific driver genes, and rapid accumulation of additional passenger mutations was proposed to illustrate how MPC occurs in Lynch syndrome patients. Conclusion Our findings demonstrate that both germline and somatic alterations are driving forces of carcinogenesis, which may resolve the carcinogenic theory of Lynch syndrome.

  18. Motor synergies and the equilibrium-point hypothesis.

    Science.gov (United States)

    Latash, Mark L

    2010-07-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multijoint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed.

  19. Dry storage of spent nuclear fuel: present principles

    International Nuclear Information System (INIS)

    Vapirev, E.; Christoskov, I.; Boyadjiev, Z.

    1998-01-01

    The basic principles for the dry storage of spent nuclear fuel are presented according to the authors' understanding. They are: 1) storage in air at a low temperature (below 200 °C) or in an inert atmosphere (nitrogen, helium) at a temperature up to 300-400 °C; 2) passive cooling by air; 3) multiple barriers to the propagation of fission products and transuranics: fuel pellet, fuel pin cladding, a containment or a canister, a single or a double cover of the container; 4) control of the condition of the atmosphere within the double cover - pressure monitoring, helium concentration monitoring (if the atmosphere in the container is of helium or contains traces of helium). Based on publications, observations and discussions during recent years, several further principles are proposed for discussion: 4) stored fuel must be regarded as defective; 5) active control of the integrity of the protective barriers or of the composition of the storage atmosphere - the principle of the 'control barrier' or the 'control atmosphere'; 6) introduction of a procedure of 'check-up of the condition of SNF' by visual control or by sampling of the storage atmosphere for those technologies which do not provide for monitoring the integrity of barriers or of the storage atmosphere. Principle 4 is being gradually accepted in modern technologies. Principle 5 is observed in the dual-purpose containers and in some of the MVDS technologies. A common feature of the technologies of horizontal and vertical canister storage in concrete modules is the absence of control of the integrity of barriers or of the composition of the atmosphere. To these technologies, if they are not revised, principle 6 applies

  20. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    Science.gov (United States)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
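    The first training-image selection method described above relies on distance-based clustering of overhead snapshots. The following Python sketch shows one generic way to do this (not the authors' implementation): snapshots are flattened, clustered hierarchically on their pairwise pixel distances, and one medoid per cluster is kept as a candidate training image. The Euclidean pixel distance is an illustrative stand-in for whatever image distance is preferred.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster


def select_training_images(snapshots, n_clusters=5):
    """Pick representative snapshots by distance-based clustering.
    `snapshots` is an array of shape (n_images, ny, nx)."""
    flat = snapshots.reshape(len(snapshots), -1).astype(float)
    d = pdist(flat)                                   # condensed distance matrix
    labels = fcluster(linkage(d, method="average"), n_clusters, criterion="maxclust")
    dm = squareform(d)
    reps = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # medoid: the member with the smallest total distance to its cluster
        reps.append(idx[np.argmin(dm[np.ix_(idx, idx)].sum(axis=1))])
    return sorted(reps)
```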

  1. Zero-point energy and the Eoetvoes experiment

    International Nuclear Information System (INIS)

    Ross, D.K.

    1999-01-01

    The paper shows that the modification of the electromagnetic zero-point energy inside a solid aluminum ball is large enough to be detected by a feasible Eoetvoes-type experiment improved by only a factor of 100 over earlier experiments. Because of the uncertainties surrounding the relationship of the zero-point energy to the cosmological constant and to renormalization effects in general relativity, such an experiment might give a non-null result. This would be a test of the weak equivalence principle and of general relativity itself in regard to a very special, purely quantum-mechanical form of energy

  2. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  3. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change-points in the tail of multiple time series data. In addition, we adapt existing tail change-point detection methods to our specific problem and conduct a thorough comparison of different methods in terms of performance on the estimation of change-points and computational time. We also examine three locations on the U.S. northeast coast and demonstrate that the methods are useful for identifying changes in seasonally extreme warm temperatures.
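    As a simple illustration of the kind of problem addressed above, and not the method developed in the paper, the sketch below scans candidate split points in a series of positive values and returns the split with the largest difference in Hill tail-index estimates between the two segments. All function names and tuning choices are illustrative.

```python
import numpy as np


def hill_estimator(x, k):
    """Hill estimator of the tail index from the k largest observations.
    Assumes positive data (e.g., exceedances or block maxima)."""
    xs = np.sort(x)[-k:]
    return np.mean(np.log(xs) - np.log(xs[0]))


def tail_change_point(series, k_frac=0.1, min_seg=50):
    """Return the split point with the largest gap in estimated tail index
    between the two resulting segments (single-change-point toy version)."""
    series = np.asarray(series, dtype=float)
    best_t, best_gap = None, -np.inf
    for t in range(min_seg, len(series) - min_seg):
        left, right = series[:t], series[t:]
        kl = max(10, int(k_frac * len(left)))
        kr = max(10, int(k_frac * len(right)))
        gap = abs(hill_estimator(left, kl) - hill_estimator(right, kr))
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t, best_gap
```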

  4. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    Science.gov (United States)

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
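    The MFBSG idea above turns timing features of each heartbeat into bits. The sketch below is a much-simplified, hypothetical Python illustration: it uses scipy peak detection to find only R peaks (the paper detects P, Q, R, S and T with discrete wavelet transforms) and keeps a few least-significant bits of each inter-peak interval. The function names and the 4-bit choice are illustrative, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import find_peaks


def intervals_to_bits(intervals_ms, bits_per_interval=4):
    """Quantize each interval in milliseconds and keep its least-significant
    bits, which carry most of the beat-to-beat variability (illustrative)."""
    out = []
    for iv in intervals_ms:
        q = int(round(iv)) & ((1 << bits_per_interval) - 1)
        out.extend(int(b) for b in format(q, f"0{bits_per_interval}b"))
    return out


def generate_binary_sequence(ecg, fs, n_bits=128):
    """Toy interval-based bit extraction from a single-lead ECG trace sampled at fs Hz."""
    # Detect R peaks as dominant local maxima (a crude stand-in for wavelet-based
    # fiducial point detection).
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    rr_ms = np.diff(r_peaks) / fs * 1000.0
    bits = intervals_to_bits(rr_ms)
    return bits[:n_bits]
```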

  5. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring.

    Science.gov (United States)

    van Gastel, Mark; Stuijk, Sander; de Haan, Gerard

    2016-12-07

    Finger-oximeters are ubiquitously used for patient monitoring in hospitals worldwide. Recently, remote measurement of arterial blood oxygenation (SpO2) with a camera has been demonstrated. Both contact and remote measurements, however, require the subject to remain static for accurate SpO2 values. This is due to the use of the common ratio-of-ratios measurement principle that measures the relative pulsatility at different wavelengths. Since the amplitudes are small, they are easily corrupted by motion-induced variations. We introduce a new principle that allows accurate remote measurements even during significant subject motion. We demonstrate the main advantage of the principle, i.e. that the optimal signature remains the same even when the SNR of the PPG signal drops significantly due to motion or limited measurement area. The evaluation uses recordings with breath-holding events, which induce hypoxemia in healthy moving subjects. The events lead to clinically relevant SpO2 levels in the range 80-100%. The new principle is shown to greatly outperform current remote ratio-of-ratios based methods. The mean-absolute SpO2 error (MAE) is about 2 percentage points during head movements, where the benchmark method shows a MAE of 24 percentage points. Consequently, we claim ours to be the first method to reliably measure SpO2 remotely during significant subject motion.
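    For contrast with the new principle, the conventional ratio-of-ratios computation that the paper benchmarks against can be sketched as follows. The linear calibration constants and the crude AC/DC estimates are placeholders for illustration; real devices use device-specific calibration curves.

```python
import numpy as np


def ratio_of_ratios_spo2(ppg_red, ppg_ir, a=110.0, b=25.0):
    """Classic ratio-of-ratios SpO2 estimate (the benchmark approach).
    a and b are placeholder calibration constants, not real device values."""
    def ac_dc(x):
        x = np.asarray(x, dtype=float)
        return (x.max() - x.min()), x.mean()   # crude AC and DC estimates
    ac_r, dc_r = ac_dc(ppg_red)
    ac_i, dc_i = ac_dc(ppg_ir)
    R = (ac_r / dc_r) / (ac_i / dc_i)          # ratio of relative pulsatilities
    return a - b * R                           # linear empirical calibration
```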

  6. Variational principle in quantum mechanics

    International Nuclear Information System (INIS)

    Popiez, L.

    1986-01-01

    The variational principle in the standard path-integral formulation of quantum mechanics (as proposed by Dirac and Feynman) appears only in the context of the classical limit ħ → 0 and manifests itself through the method of abstract stationary phase. Symbolically it means that a probability amplitude averaged over trajectories denotes a classical evolution operator for points in a configuration space. There exists, however, a formulation of quantum dynamics in which the variational principle is one of the basic postulates. It is explained that the translation between stochastic and quantum mechanics in this case can be understood as in Nelson's stochastic mechanics

  7. Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery.

    Science.gov (United States)

    Park, Jae Hyun

    2015-12-01

    Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are, herein, explained with a focus on the correlations between these 5 areas and 5 points. A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery.

  8. Double meanings will not save the principle of double effect.

    Science.gov (United States)

    Douglas, Charles D; Kerridge, Ian H; Ankeny, Rachel A

    2014-06-01

    In an article somewhat ironically entitled "Disambiguating Clinical Intentions," Lynn Jansen promotes an idea that should be bewildering to anyone familiar with the literature on the intention/foresight distinction. According to Jansen, "intention" has two commonsense meanings, one of which is equivalent to "foresight." Consequently, questions about intention are "infected" with ambiguity-people cannot tell what they mean and do not know how to answer them. This hypothesis is unsupported by evidence, but Jansen states it as if it were accepted fact. In this reply, we make explicit the multiple misrepresentations she has employed to make her hypothesis seem plausible. We also point out the ways in which it defies common sense. In particular, Jansen applies her thesis only to recent empirical research on the intentions of doctors, totally ignoring the widespread confusion that her assertion would imply in everyday life, in law, and indeed in religious and philosophical writings concerning the intention/foresight distinction and the Principle of Double Effect. © The Author 2014. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Making Marketing Principles Tangible: Online Auctions as Living Case Studies

    Science.gov (United States)

    Wood, Charles M.; Suter, Tracy A.

    2004-01-01

    This article presents an effective course supplement for Principles of Marketing classes. An experiential project involving online auctions is offered to instructors seeking to create a more participatory student environment and an interactive teaching style. A number of learning points are illustrated that allow instructors to use an auction…

  10. The twin paradox and the principle of relativity

    International Nuclear Information System (INIS)

    Grøn, Øyvind

    2013-01-01

    The twin paradox is intimately related to the principle of relativity. Two twins A and B meet, travel away from each other and meet again. From the point of view of A, B is the traveller. Thus, A predicts B to be younger than A herself, and vice versa. Both cannot be correct. The special relativistic solution is to say that if one of the twins, say A, was inertial during the separation, she will be the older one. Since the principle of relativity is not valid for accelerated motion according to the special theory of relativity B cannot consider herself as at rest permanently because she must accelerate in order to return to her sister. A general relativistic solution is to say that due to the principle of equivalence B can consider herself as at rest, but she must invoke the gravitational change of time in order to predict correctly the age of A during their separation. However one may argue that the fact that B is younger than A shows that B was accelerated, not A, and hence the principle of relativity is not valid for accelerated motion in the general theory of relativity either. I here argue that perfect inertial dragging may save the principle of relativity, and that this requires a new model of the Minkowski spacetime where the cosmic mass is represented by a massive shell with radius equal to its own Schwarzschild radius. (paper)

  11. The point of 6 sigma

    International Nuclear Information System (INIS)

    An, Yeong Jin

    2000-07-01

    This book describes the main points of 6 sigma. Its topics are: what 6 sigma is, the sigma concept, Motorola's 3.4 ppm, centering error, the purpose of 6 sigma, the 6 sigma principle, the eight steps of the innovation strategy, the 6 sigma innovation strategy of the easy system step, measurement standards for 6 sigma outcomes, the main roles in 6 sigma, acknowledgment and reward, 6 sigma characteristics, 6 sigma effects, 6 sigma applications, and problems which arise when 6 sigma is introduced.

  12. Mind map our way into effective student questioning: A principle-based scenario

    NARCIS (Netherlands)

    Stokhof, Harry; De Vries, Bregje; Bastiaens, Theo; Martens, Rob

    2017-01-01

    Student questioning is an important self-regulative strategy and has multiple benefits for teaching and learning science. Teachers, however, need support to align student questioning to curricular goals. This study tests a prototype of a principle-based scenario that supports teachers in guiding

  13. Eta Carinae: Viewed from Multiple Vantage Points

    Science.gov (United States)

    Gull, Theodore

    2007-01-01

    The central source of Eta Carinae and its ejecta is a massive binary system buried within a massive interacting wind structure which envelops the two stars. However, the hot, less massive companion blows a small cavity in the very massive primary wind and ionizes a portion of the massive wind just beyond the wind-wind boundary. We gain insight into this complex structure by examining the spatially resolved Space Telescope Imaging Spectrograph (STIS) spectra of the central source (0.1") together with the wind structure, which extends out to nearly an arcsecond (2300 AU), the wind-blown boundaries, and the ejecta of the Little Homunculus. Moreover, the spatially resolved Very Large Telescope/UltraViolet Echelle Spectrograph (VLT/UVES) stellar spectrum (one arcsecond) and spatially sampled spectra across the foreground lobe of the Homunculus provide vantage points from different angles relative to the line of sight. Examples of wind line profiles of Fe II, and of the highly excited [Fe III], [Ne III], [Ar III] and [S III], plus other lines will be presented.

  14. First principles calculations of interstitial and lamellar rhenium nitrides

    International Nuclear Information System (INIS)

    Soto, G.; Tiznado, H.; Reyes, A.; Cruz, W. de la

    2012-01-01

    Highlights: ► The possible structures of rhenium nitride as a function of composition are analyzed. ► The alloying energy is favorable for rhenium nitride in lamellar arrangements. ► The structures produced by magnetron sputtering are metastable variations. ► The structures produced at high pressure and high temperature are stable configurations. ► The lamellar structures are a new category of interstitial dissolutions. - Abstract: We report a systematic first-principles study of two classes of variable-composition rhenium nitride: (i) interstitial rhenium nitride as a solid solution and (ii) rhenium nitride in lamellar structures. The compounds in class (i) are cubic and hexagonal close-packed rhenium phases, with nitrogen in the octahedral and tetrahedral interstices of the metal, and they are formed without changes to the structure, except for slight distortions of the unit cells. In the compounds of class (ii), by contrast, the nitrogen inclusion provokes stacking faults in the parent metal structure. These faults create trigonal-prismatic sites where the nitrogen residence is energetically favored. This second class of compounds produces lamellar structures, where the nitrogen lamellas are inserted among multiple rhenium layers. The Re3N and Re2N phases produced recently by high-temperature and high-pressure synthesis belong to this class. The ratio of the nitrogen layers to the rhenium layers is given by the composition. While the first-principles calculations point to higher stability for the lamellar structures as opposed to the interstitial phases, the experimental evidence presented here demonstrates that the interstitial classes can be synthesized by plasma methods. We conclude that rhenium nitrides exhibit polymorphism and that the two-dimensional lamellar structures may represent an emerging class of materials within binary nitride chemistry.

  15. Capital taxation : principles , properties and optimal taxation issues

    OpenAIRE

    Antonin, Céline; Touze, Vincent

    2017-01-01

    This article addresses the issue of capital taxation relying on three levels of analysis. The first level deals with the multiple ways to tax capital (income or value, proportional or progressive taxation, and the temporality of the taxation) and presents some of France's particular features within a heterogeneous European context. The second area of investigation focuses on the main dynamic properties generated by capital taxation: the principle of equivalence with a tax on consu...

  16. A Survivable Wavelength Division Multiplexing Passive Optical Network with Both Point-to-Point Service and Broadcast Service Delivery

    Science.gov (United States)

    Ma, Xuejiao; Gan, Chaoqin; Deng, Shiqi; Huang, Yan

    2011-11-01

    A survivable wavelength division multiplexing passive optical network enabling both point-to-point service and broadcast service is presented and demonstrated. This architecture provides an automatic traffic recovery against feeder and distribution fiber link failure, respectively. In addition, it also simplifies the protection design for multiple services transmission in wavelength division multiplexing passive optical networks.

  17. Secure Multiparty Quantum Computation for Summation and Multiplication.

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-21

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.

  18. Basic principles of medical aid in cases of radiation accidents

    International Nuclear Information System (INIS)

    Andreev, E.; Mikhajlov, M.A.; Bliznakov, V.

    1979-01-01

    A model scheme of the organization of medical aid in emergency cases of irradiation is presented. The tasks of the medical service are pointed out in connection with evacuation stages, the scope of medical aid depending on the nature of the radiation damage, first aid, and some general principles of radiation sickness treatment. (author)

  19. Basic and energy physics: the multiple faces of energy; Physique fondamentale et energetique: les multiples visages de l'energie

    Energy Technology Data Exchange (ETDEWEB)

    Balian, R. [Academie des Sciences, 75 - Paris (France)

    2001-07-01

    After an historical presentation of the elaboration of the energy concept, this document recalls, first, the basic physical principles linked with this concept: first and second principle of thermodynamics, dynamics of irreversible processes, hierarchy of elementary interactions. Then, their consequences on energy problems are examined by comparing the different common types of energy from different points of view: concentration, degradation, transport, storage, reserves and harmful effects. These comparisons rely on the characteristic values of the data involved. (J.S.)

  20. DIAGNOSTIC FEATURES RESEARCH OF AC ELECTRIC POINT MOTORS

    Directory of Open Access Journals (Sweden)

    S. YU. Buryak

    2014-05-01

    Full Text Available Purpose. Considerable responsibility for operational safety rests on the signalling, telephone and telegraph department of the railway. One of the most vulnerable elements, both of the automation systems and of the railway as a whole, is the track switch. The aim of this investigation is to develop a system for monitoring and diagnosing track switches that fully meets the requirements of modern high-speed and heavy-train operation and performs diagnostics, data collection and data systematization in an automated way. Methodology. To achieve these objectives, the structure and operating principle of the switch electric drive and the triggering sequence of its main units were studied. The operating characteristics and settings, operating conditions, causes of failures, and requirements for electric drive technology and its servicing were considered and analyzed. Basic principles were determined for analyzing how the waveform of the current flowing in the working circuit of an AC electric point motor changes. A technical implementation of the system for monitoring and diagnosing the state of AC electric point motors was carried out. Findings. Signals taken from serviceable and defective electric turnouts were investigated. Originality. A strong interconnection was identified between the technical condition of the track switch and the shape of the curve describing the current in the circuit of the AC electric point motor during operation, based on research into the processes that influence it during operation. Practical value. The principles of the technical approach for the transition from scheduled preventive maintenance to condition-based maintenance are shown, allowing a more objective assessment and thus a more rapid response to failures that emerge or develop gradually, to damage, and to any other shortcomings in the operation of AC track switch drives.

  1. Consensus principles for wound care research obtained using a Delphi process.

    Science.gov (United States)

    Serena, Thomas; Bates-Jensen, Barbara; Carter, Marissa J; Cordrey, Renee; Driver, Vickie; Fife, Caroline E; Haser, Paul B; Krasner, Diane; Nusgart, Marcia; Smith, Adrianne P S; Snyder, Robert J

    2012-01-01

    Too many wound care research studies are poorly designed, badly executed, and missing crucial data. The objective of this study is to create a series of principles for all stakeholders involved in clinical or comparative effectiveness research in wound healing. The Delphi approach was used to reach consensus, using a web-based survey for survey participants and face-to-face conferences for expert panel members. Expert panel (11 members) and 115 wound care researchers (respondents) drawn from 15 different organizations. Principles were rated for validity using 5-point Likert scales and comments. A 66% response rate was achieved in the first Delphi round from the 173 invited survey participants. The response rate for the second Delphi round was 46%. The most common wound care researcher profile was age 46-55 years, a wound care clinic setting, and >10 years' wound care research and clinical experience. Of the initial 17 principles created by the panel, only four principles were not endorsed in Delphi round 1 with another four not requiring revision. Of the 14 principles assessed by respondents in the second Delphi round, only one principle was not endorsed and it was revised; four other principles also needed revision based on the use of specific words or contextual use. Of the 19 final principles, three included detailed numbered lists. With the wide variation in design, conduct, and reporting of wound care research studies, it is hoped that these principles will improve the standard and practice of care in this field. © 2012 by the Wound Healing Society.

  2. Online evaluation of point-of-interest recommendation systems

    NARCIS (Netherlands)

    Dean-Hall, A.; Clarke, C.L.A.; Kamps, J.; Kiseleva, J.

    2015-01-01

    In this work we describe a system to evaluate multiple point-of-interest recommendation systems. In this system each recommendation service will be exposed online and crowd-sourced assessors will interact with merged results from multiple services, which are responding to suggestion requests live,

  3. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  4. Generalized correlation of latent heats of vaporization of coal liquid model compounds between their freezing points and critical points

    Energy Technology Data Exchange (ETDEWEB)

    Sivaraman, A.; Kobuyashi, R.; Mayee, J.W.

    1984-02-01

    Based on Pitzer's three-parameter corresponding states principle, the authors have developed a correlation of the latent heat of vaporization of aromatic coal liquid model compounds for a temperature range from the freezing point to the critical point. An expansion of the form L = L0 + ωL1 is used for the dimensionless latent heat of vaporization. This model utilizes a nonanalytic functional form based on results derived from renormalization group theory of fluids in the vicinity of the critical point. A simple expression for the latent heat of vaporization, L = D1·ε^0.3333 + D2·ε^0.8333 + D4·ε^1.2083 + E1·ε + E2·ε^2 + E3·ε^3, is cast in a corresponding-states-principle correlation for coal liquid compounds. Benzene, the basic constituent of the functional groups of the multi-ring coal liquid compounds, is used as the reference compound in the present correlation. The model works very well at both low and high reduced temperatures approaching the critical point (0.02 < ε < 0.69, where ε = (Tc - T)/Tc). About 16 compounds, including single-, two-, and three-ring compounds, have been tested, and the percent root-mean-square deviations between reported latent heats of vaporization and those estimated through the model are 0.42 to 5.27%. Tables of the coefficients of L0 and L1 are presented. The contributing terms of the latent heat of vaporization function are also presented in a table for small increments of ε.
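    The quoted correlation can be evaluated directly once the tabulated coefficients are available. The Python sketch below mirrors the functional form given in the abstract; the coefficient values are not reproduced here and must be supplied from the paper's tables, and the argument structure is an assumption for illustration.

```python
def latent_heat_dimensionless(eps, D, E):
    """Evaluate L(eps) = D1*eps**0.3333 + D2*eps**0.8333 + D4*eps**1.2083
                         + E1*eps + E2*eps**2 + E3*eps**3
    with eps = (Tc - T)/Tc.  D = (D1, D2, D4) and E = (E1, E2, E3) are the
    tabulated coefficients, to be supplied by the user."""
    D1, D2, D4 = D
    E1, E2, E3 = E
    return (D1 * eps**0.3333 + D2 * eps**0.8333 + D4 * eps**1.2083
            + E1 * eps + E2 * eps**2 + E3 * eps**3)


def latent_heat_corresponding_states(eps, L0_coeffs, L1_coeffs, omega):
    """Pitzer-style combination L = L0 + omega*L1, each term given by the
    same functional form with its own coefficient set (D, E)."""
    return (latent_heat_dimensionless(eps, *L0_coeffs)
            + omega * latent_heat_dimensionless(eps, *L1_coeffs))
```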

  5. The Principles of Organization of Internal Control of Companies

    Directory of Open Access Journals (Sweden)

    Panteleiev

    2017-02-01

    Full Text Available The question is how to give a convincing assessment of the rules of internal control and to present them in the form of principles of the organization of internal control. Since no final decision in the form of an exhaustive and universal list of internal control principles exists, scientific research into these principles continues. It is necessary to examine the logic of the main provisions on the organization of internal control, give due consideration to the requirements of international control practice, and propose principles of internal control. For this purpose, a critical analysis was conducted of the main provisions on internal control contained in publications and in regulations on internal control proceedings. A synthesis of articles by authors of the leading scientific and practical journal of Ukraine, "Accounting and Auditing", for 1994-2015 showed that, despite the journal's coverage of current provisions, principles, requirements, rules and other modern techniques of internal control, a comprehensive and well-grounded approach to the organization and classification of internal control is absent. This leads to a continued search for the key components of the organization of internal control. The basic concepts used in publications on the organization of internal control were drawn upon in forming a set of control principles. According to OECD requirements, the field of internal control includes risk management, cost control, control of change, complete information to guarantee the effectiveness of internal control systems, support for decision making, and so on. The COSO model contains an exhaustive list of five components. The results of the questionnaire of trainees provided compelling arguments for classifying and establishing relevant principles of organization and internal control; others were rejected. A list is proposed of the basic principles of the organization of internal control, which consists of 25 elements that are shaped

  6. Ergodic theory and dynamical systems from a physical point of view

    International Nuclear Information System (INIS)

    Sabbagan, M.; Nasertayoob, P.

    2008-01-01

    Ergodic theory and a large part of dynamical systems are in essence mathematical modelling that belongs to statistical physics. This paper is an attempt to present some of the results and principles of ergodic theory and dynamical systems from certain viewpoints of physics, such as thermodynamics and classical mechanics. The significance of the variational principle in statistical physics, the relation between the classical approach and the statistical approach, and a comparison of reversibility from the statistical point of view are discussed. (author)

  7. Forces, surface finish and friction characteristics in surface engineered single- and multiple-point cutting edges

    International Nuclear Information System (INIS)

    Sarwar, M.; Gillibrand, D.; Bradbury, S.R.

    1991-01-01

    Advanced surface engineering technologies (physical and chemical vapour deposition) have been successfully applied to high speed steel and carbide cutting tools, and the potential benefits in terms of both performance and longer tool life, are now well established. Although major achievements have been reported by many manufacturers and users, there are a number of applications where surface engineering has been unsuccessful. Considerable attention has been given to the film characteristics and the variables associated with its properties; however, very little attention has been directed towards the benefits to the tool user. In order to apply surface engineering technology effectively to cutting tools, the coater needs to have accurate information relating to cutting conditions, i.e. cutting forces, stress and temperature etc. The present paper describes results obtained with single- and multiple-point cutting tools with examples of failures, which should help the surface coater to appreciate the significance of the cutting conditions, and in particular the magnitude of the forces and stresses present during cutting processes. These results will assist the development of a systems approach to cutting tool technology and surface engineering with a view to developing an improved product. (orig.)

  8. Expanding Uncertainty Principle to Certainty-Uncertainty Principles with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2015-03-01

    Full Text Available The most famous contribution of Heisenberg is the uncertainty principle. But the original uncertainty principle is improper. Considering all the possible situations (including the case that people can create laws) and applying Neutrosophy and the Quad-stage Method, this paper presents "certainty-uncertainty principles" in a general form and a variable dimension fractal form. According to the classification of Neutrosophy, "certainty-uncertainty principles" can be divided into three principles under different conditions: the "certainty principle", namely that a particle's position and momentum can be known simultaneously; the "uncertainty principle", namely that a particle's position and momentum cannot be known simultaneously; and the neutral (fuzzy) "indeterminacy principle", namely that whether or not a particle's position and momentum can be known simultaneously is undetermined. The special cases of "certainty-uncertainty principles" include the original uncertainty principle and the Ozawa inequality. In addition, in accordance with the original uncertainty principle, discussing a high-speed particle's speed and track with Newtonian mechanics is unreasonable; but according to "certainty-uncertainty principles", Newtonian mechanics can be used to discuss the problem of the gravitational deflection of a photon orbit around the Sun (it gives the same deflection angle as general relativity). Finally, because in physics principles, laws and the like that disregard the principle (law) of conservation of energy may be invalid, "certainty-uncertainty principles" should be restricted (or constrained) by the principle (law) of conservation of energy, so that they satisfy the principle (law) of conservation of energy.

  9. Criteria and principles for environmental assessment of disposal of radioactive wastes

    International Nuclear Information System (INIS)

    Hill, M.D.

    1989-01-01

    This paper describes the criteria which are used in judging whether methods for the disposal of radioactive wastes are acceptable, from a radiological protection point of view, and the principles used in assessing the radiological impact of waste disposal methods. Gaseous, liquid and solid wastes are considered, and the discussion is relevant to wastes arising from the nuclear power industry, and from medical practices, general industry and research. Throughout the paper, emphasis is given to the general criteria and principles recommended by international organizations rather than to the detailed legislative and regulatory requirements in particular countries

  10. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  11. PRINCIPLES AND PROCEDURES ON FISCAL

    Directory of Open Access Journals (Sweden)

    Morar Ioan Dan

    2011-07-01

    Full Text Available Fiscal science appears in most analytical situations, and its principles are reiterated by specialists in the field in various specialized works. The two components of taxation - the tax system, relating to theory, and the practical procedures relating to tax - are marked by frequent references to and invocations of the principles underlying taxation. This paper attempts to revisit the general vision of fiscal equity as a principle often invoked and used to justify tax policies, and yet so often violated by tax laws. It also seeks to emphasize the importance of devising procedures that ensure fiscally equitable treatment of taxpayers. The specific approach of this paper is based on the notion that tax equity rests on equality before the tax, and that social policies of the executive pursued through it would be more effective than using other tax instruments. If the scientific approach justifies unequal treatment under tax law by appeal to the various social problems of taxpayers, then it deviates from the issue of tax fairness and instead explains the need to promote social policies that are usually more attractive to taxpayers. Modern tax techniques are believed to be promoted especially in order to ensure an increasing level of efficiency at the expense of the taxpayers' right to equality before the tax law. On the other hand, tax inequities generate reactions from multiple recipients from the first budget plan onwards, but the outcomes of unfair measures cannot be quantified and the timeline of the reaction is usually not known. Statistics nevertheless show fluctuations in budgetary revenues, and the literature often contains reviews and analyses pointing to a connection between changes in government policies, budget execution and outcomes. The effects of tax inequity on tax procedures and budgetary revenues are difficult to quantify and are among the subjects of this work. Providing tax equity without combining it with the principles of discrimination and neutrality

  12. Multiple Discriminations – between a Contravention Per Se and an Aggravating Circumstances

    Directory of Open Access Journals (Sweden)

    Cristian Jura

    2011-05-01

    Full Text Available There are some references to multiple discrimination, such as Recital 14 of the Racial Equality Directive, 2000/43/EC: „In implementing the principle of equal treatment irrespective of racial or ethnic origin, the Community should, in accordance with Article 3(2) of the EC Treaty, aim to eliminate inequalities, and to promote equality between men and women, especially since women are often the victims of multiple discrimination". Even in this case there is no legal definition of multiple discrimination. In 2008, so after 8 years, the Explanatory Memorandum of the Proposal for a Council Directive on implementing the principle of equal treatment between persons irrespective of religion or belief, disability, age or sexual orientation contains a reference to multiple discrimination, in the sense that "Attention was also drawn to the need to tackle multiple discrimination, for example by defining it as discrimination and by providing effective remedies. These issues go beyond the scope of this Directive but nothing prevents Member States taking action in these areas."

  13. The application of the ALARA principle: a first assessment

    International Nuclear Information System (INIS)

    Lochard, J.; Webb, G.A.M.

    1984-01-01

    What can we learn from practical ALARA studies? From the methodological point of view, the various works developed in recent years rely upon the two basic ICRP assumptions, i.e. the linearity of the dose-risk relationship and the optimal resource allocation principle. Furthermore, most authors now seem to assume the multidimensionality of radiological protection problems. For decision-aiding techniques, this leads to envisaging an enlargement of the cost-benefit method. As for the quantification process of the ALARA principle, assessment techniques for both protection costs and the levels of exposure of the various groups at risk are well established. Some differences can however be pointed out between authors concerning the monetary valuation of the detriment cost, as well as the way to cope with time, probabilistic events and ''other factors'' not directly linked to protection costs and radiological detriment. Further consideration must be given to the exact role ALARA studies have to play in the decision-making process concerning the design and operation of installations. Particular effort has to be put into the development of adequate databases and also of decision-aiding techniques able to take into account the multidimensionality of the factors involved in practical protection [fr

  14. The Relative Performance of Female and Male Students in Accounting Principles Classes.

    Science.gov (United States)

    Bouillon, Marvin L.; Doran, B. Michael

    1992-01-01

    The performance of female and male students in Accounting Principles (AP) I and II was compared by using multiple regression techniques to assess the incremental explanatory effects of gender. Males significantly outperformed females in AP I, contradicting earlier studies. Similar gender of instructor and student was insignificant. (JOW)

  15. A Life Below the Threshold?: Examining Conflict Between Ethical Principles and Parental Values in Neonatal Treatment Decision Making.

    Science.gov (United States)

    Cunningham, Thomas V

    2016-01-01

    Three common ethical principles for establishing the limits of parental authority in pediatric treatment decision-making are the harm principle, the principle of best interest, and the threshold view. This paper considers how these principles apply to a case of a premature neonate with multiple significant co-morbidities whose mother wanted all possible treatments, and whose health care providers wondered whether it would be ethically permissible to allow him to die comfortably despite her wishes. Whether and how these principles help in understanding what was morally right for the child is questioned. The paper concludes that the principles were of some value in understanding the moral geography of the case; however, this case reveals that common bioethical principles for medical decision-making are problematically value-laden because they are inconsistent with the widespread moral value of medical vitalism.

  16. Relativistic dynamics of point magnetic moment

    Science.gov (United States)

    Rafelski, Johann; Formanek, Martin; Steinmetz, Andrew

    2018-01-01

    The covariant motion of a classical point particle with magnetic moment in the presence of (external) electromagnetic fields is revisited. We are interested in understanding extensions to the Lorentz force involving point particle magnetic moment (Stern-Gerlach force) and how the spin precession dynamics is modified for consistency. We introduce spin as a classical particle property inherent to Poincaré symmetry of space-time. We propose a covariant formulation of the magnetic force based on a `magnetic' 4-potential and show how the point particle magnetic moment relates to the Amperian (current loop) and Gilbertian (magnetic monopole) descriptions. We show that covariant spin precession lacks a unique form and discuss the connection to g-2 anomaly. We consider the variational action principle and find that a consistent extension of the Lorentz force to include magnetic spin force is not straightforward. We look at non-covariant particle dynamics, and present a short introduction to the dynamics of (neutral) particles hit by a laser pulse of arbitrary shape.

  17. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
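    The NEB construction described above, a chain of images adjusted by transverse "true" forces and longitudinal spring forces with fixed endpoints, can be sketched generically as follows. This is not the authors' implementation and does not use the IRI ionosphere model: the medium enters only through a user-supplied gradient of a scalar field standing in for the refractive index, and the transverse force is a simplified stand-in for the exact variation of the optical-path functional.

```python
import numpy as np


def neb_ray(grad_n, start, end, n_images=20, k_spring=1.0, step=0.05, n_iter=2000):
    """Relax a chain of points between fixed endpoints toward a stationary
    path of an optical-path-like functional (Fermat's principle), NEB-style.
    grad_n(p) returns the gradient of the medium field at point p; start and
    end are coordinate arrays (e.g., 2-D).  Illustrative sketch only."""
    path = np.linspace(start, end, n_images)          # straight initial guess
    for _ in range(n_iter):
        new_path = path.copy()
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau = tau / np.linalg.norm(tau)           # local tangent
            f_true = -grad_n(path[i])                 # steer toward lower index
            f_perp = f_true - np.dot(f_true, tau) * tau
            # spring force keeps the images evenly distributed along the chain
            f_spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                                   - np.linalg.norm(path[i] - path[i - 1])) * tau
            new_path[i] = path[i] + step * (f_perp + f_spring)
        path = new_path
    return path
```

    Because only the perpendicular component of the medium force and the parallel component of the spring force act on each image, the converged chain approximates a stationary ray while the endpoints remain fixed, which is the property exploited in the point-to-point formulation above.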

  18. Principles of a new treatment algorithm in multiple sclerosis

    DEFF Research Database (Denmark)

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being...... considered for approval. With the arrival of these oral agents, a key question is where they may fit into the existing MS treatment algorithm. This article aims to help answer this question by analyzing the trial data for the new oral therapies, as well as for existing MS treatments, by applying practical...... clinical experience, and through consideration of our increased understanding of how to define treatment success in MS. This article also provides a speculative look at what the treatment algorithm may look like in 5 years, with the availability of new data, greater experience and, potentially, other novel...

  19. Dimensional cosmological principles

    International Nuclear Information System (INIS)

    Chi, L.K.

    1985-01-01

    The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle

  20. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    A musical analysis represents a particular way of understanding certain aspects of the structure of a piece of music. The quality of an analysis can be evaluated to some extent by the degree to which knowledge of it improves performance on tasks such as mistake spotting, memorising a piece...... as the minimum description length principle and relates closely to certain ideas in the theory of Kolmogorov complexity. Inspired by this general principle, the hypothesis explored in this paper is that the best ways of understanding (or explanations for) a piece of music are those that are represented...... by the shortest possible descriptions of the piece. With this in mind, two compression algorithms are presented, COSIATEC and SIATECCompress. Each of these algorithms takes as input an in extenso description of a piece of music as a set of points in pitch-time space representing notes. Each algorithm...

  1. Unbounded critical points for a class of lower semicontinuous functionals

    OpenAIRE

    Pellacci, Benedetta; Squassina, Marco

    2003-01-01

    In this paper we prove existence and multiplicity results of unbounded critical points for a general class of weakly lower semicontinuous functionals. We will apply a suitable nonsmooth critical point theory.

  2. Some principles of Islamic ethics as found in Harrisian philosophy.

    Science.gov (United States)

    Aksoy, Sahin

    2010-04-01

    John Harris is one of the prominent philosophers and bioethicists of our time. He has published tens of books and hundreds of papers throughout his professional life. This paper aims to take a 'deep-look' at Harris' works to argue that it is possible to find some principles of Islamic ethics in Harrisian philosophy, namely in his major works, as well as in his personal life. This may be surprising, or thought of as a 'big' and 'groundless' claim, since John Harris has nothing to do with any religion in his intellectual works. The major features of Harrisian philosophy could be defined as consequentialism or utilitarianism with liberal overtones. Despite some significant and fundamental differences in the application of principles (ie, abortion, euthanasia), the similarities between the major principles in Harrisian philosophy and Islamic ethics are greater at some points than the similarities between Islamic ethics and some other religious ethics (ie, Christian, Judaism). In this study I compare Harrisian teachings with major Islamic principles on 'Responsibility', 'Side-effects and Double-effects', 'Equality', 'Vicious choice, guilt and innocence', 'Organ transplantation and property rights' and 'Advance directives'.

  3. Electron interaction and spin effects in quantum wires, quantum dots and quantum point contacts: a first-principles mean-field approach

    International Nuclear Information System (INIS)

    Zozoulenko, I V; Ihnatsenka, S

    2008-01-01

    We have developed a mean-field first-principles approach for studying electronic and transport properties of low dimensional lateral structures in the integer quantum Hall regime. The electron interactions and spin effects are included within the spin density functional theory in the local density approximation where the conductance, the density, the effective potentials and the band structure are calculated on the basis of the Green's function technique. In this paper we present a systematic review of the major results obtained on the energetics, spin polarization, effective g factor, magnetosubband and edge state structure of split-gate and cleaved-edge overgrown quantum wires as well as on the conductance of quantum point contacts (QPCs) and open quantum dots. In particular, we discuss how the spin-resolved subband structure, the current densities, the confining potentials, as well as the spin polarization of the electron and current densities in quantum wires and antidots evolve when an applied magnetic field varies. We also discuss the role of the electron interaction and spin effects in the conductance of open systems focusing our attention on the 0.7 conductance anomaly in the QPCs. Special emphasis is given to the effect of the electron interaction on the conductance oscillations and their statistics in open quantum dots as well as to interpretation of the related experiments on the ultralow temperature saturation of the coherence time in open dots

  4. The principle of optimisation: reasons for success and legal criticism

    International Nuclear Information System (INIS)

    Fernandez Regalado, Luis

    2008-01-01

    The International Commission on Radiological Protection (ICRP) has adopted new recommendations in 2007. In broad outlines they fundamentally continue the recommendations already approved in 1990 and later on. The principle of optimisation of protection, together with the principles of justification and dose limits, remains playing a key role of the ICRP recommendations, and it has so been for the last few years. This principle, somehow reinforced in the 2007 ICRP recommendations, has been incorporated into norms and legislation which have peacefully been in force in many countries all over the world. There are three main reasons to explain the success in the application of the principle of optimisation in radiological protection: First, the subjectivity of the sentence that embraces the principle of optimisation, 'As low as reasonably achievable' (ALARA), that allows different valid interpretations under different circumstances. Second, the pragmatism and adaptability of ALARA to all exposure situations. And third, the scientific humbleness which is behind the principle of optimisation, which makes a clear contrast with the old fashioned scientific positivism that enshrined scientist opinions. Nevertheless, from a legal point of view, there is some criticism cast over the principle of optimisation in radiological protection, where it has been transformed in compulsory norm. This criticism is based on two arguments: The lack of democratic participation in the process of elaboration of the norm, and the legal uncertainty associated to its application. Both arguments are somehow known by the ICRP which, on the one hand, has broadened the participation of experts, associations and the professional radiological protection community, increasing the transparency on how decisions on recommendations have been taken, and on the other hand, the ICRP has warned about the need for authorities to specify general criteria to develop the principle of optimisation in national

  5. First-principles study of complex material systems

    Science.gov (United States)

    He, Lixin

    This thesis covers several topics concerning the study of complex materials systems by first-principles methods. It contains four chapters. A brief, introductory motivation of this work will be given in Chapter 1. In Chapter 2, I will give a short overview of the first-principles methods, including density-functional theory (DFT), planewave pseudopotential methods, and the Berry-phase theory of polarization in crystalline insulators. I then discuss in detail the locality and exponential decay properties of Wannier functions and of related quantities such as the density matrix, and their application in linear-scaling algorithms. In Chapter 3, I investigate the interaction of oxygen vacancies and 180° domain walls in tetragonal PbTiO3 using first-principles methods. Our calculations indicate that the oxygen vacancies have a lower formation energy in the domain wall than in the bulk, thereby confirming the tendency of these defects to migrate to, and pin, the domain walls. The pinning energies are reported for each of the three possible orientations of the original Ti-O-Ti bonds, and attempts to model the results with simple continuum models are discussed. CaCu3Ti4O12 (CCTO) has attracted a lot of attention recently because it was found to have an enormous dielectric response over a very wide temperature range. In Chapter 4, I study the electronic and lattice structure, and the lattice dynamical properties, of this system. Our first-principles calculations together with experimental results point towards an extrinsic mechanism as the origin of the unusual dielectric response.

  6. Magnetic resonance imaging principles: The bare necessities

    International Nuclear Information System (INIS)

    Brant-Zawadzki, M.

    1987-01-01

    The intent of this chapter is to provide the practicing radiologist with a conceptual basis for the underlying principles of magnetic resonance imaging (MRI). The goal of what follows is to provide the reader with a conceptual framework in a relatively facile manner. By design, some repetition and certain oversimplifications will be used, and the experts might argue that accuracy is stretched by this approach. It should be pointed out, however, that even the experts still debate some of the most fundamental aspects of MRI, and that the rapid evolution of knowledge regarding MRI principles, instrumentation, and imaging techniques makes even their most ambitious treatment obsolete by the time it is printed. Nevertheless, the rapid dissemination of this technology in the clinical arena necessitates that most radiologists need at least a conceptual model of the fundamentals with which to approach MRI, and need eventually to enrich their understanding through hands-on experience and further study. Providing a working conceptual model is the limited goal of this chapter

  7. "Drone Killings in Principle and in Practice"

    DEFF Research Database (Denmark)

    Dige, Morten

    2017-01-01

    to argue that what we see in the real-world cases of drone killings is not merely an accidental or contingent use of drone technology. The real-life use reflects to a large extent features that are inherent in the dominant drone systems that have been developed to date. What is being imagined "in principle......" is thus to a large extent drone killings in dreamland. I use an historic example as a point of reference and departure: the debate over the lawfulness of nuclear weapons....

  8. [Research on fast classification based on LIBS technology and principle component analyses].

    Science.gov (United States)

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined in the present article to study aluminum alloy classification. Classification experiments were performed on thirteen standard aluminum alloy samples belonging to 4 different types, and the results suggest that the LIBS-PCA method can be used for fast classification of aluminum alloys. PCA was used to analyze the spectral data from the LIBS experiments: the three principal components contributing the most variance were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. The spectral sample points were found to cluster clearly according to the type of aluminum alloy they belong to, which confirmed the choice of the three principal components and a preliminary zoning of the alloy types. To verify its accuracy, the same experiments were carried out on 20 further aluminum alloy samples. The spectral sample points all fell within the zone corresponding to their alloy type, confirming the correctness of the zoning established from the standard samples; on this basis, aluminum alloys of unknown type can be identified. Overall, the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy exceeded 97.14%, and the different types were classified effectively. Compared with commonly used chemical methods, laser-induced breakdown spectroscopy allows fast, in situ detection with little sample preparation; combining LIBS and PCA in areas such as quality testing and on-line industrial control can therefore save considerable time and cost and greatly improve the efficiency of detection.
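
    A minimal sketch of the LIBS-PCA workflow described above, using scikit-learn; the random array standing in for the spectra and the 13 x 5 sample layout are placeholders, not the authors' data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical LIBS data: each row is one spectrum, columns are intensities at
# the recorded emission wavelengths.
rng = np.random.default_rng(0)
spectra = rng.random((65, 2048))          # e.g. 13 alloy standards x 5 repeat shots
alloy_type = np.repeat(np.arange(13), 5)  # which standard each spectrum came from

scores = PCA(n_components=3).fit_transform(spectra)  # first three principal components
print(scores.shape)   # (65, 3)
# Plotting scores[:, 0], scores[:, 1], scores[:, 2] coloured by alloy_type gives the
# kind of 3-D score clustering ("type zoning") described in the abstract.
```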

  9. Radium issues at Hunters Point Annex

    International Nuclear Information System (INIS)

    Dean, S.M.

    1994-01-01

    Radium was a common source of illumination used in numerous instruments and gauges for military equipment prior to 1970. As a result of its many military applications, radium 226 is now a principal radionuclide of concern at military base closure sites throughout the United States. This is an overview of the site characterization strategy employed and a potential site remediation technology being considered at a radium-contaminated landfill at Hunters Point Annex, a former U.S. Navy shipyard in San Francisco, California

  10. First principles design of a core bioenergetic transmembrane electron-transfer protein

    Energy Technology Data Exchange (ETDEWEB)

    Goparaju, Geetha; Fry, Bryan A.; Chobot, Sarah E.; Wiedman, Gregory; Moser, Christopher C.; Leslie Dutton, P.; Discher, Bohdana M.

    2016-05-01

    Here we describe the design, Escherichia coli expression and characterization of a simplified, adaptable and functionally transparent single chain 4-α-helix transmembrane protein frame that binds multiple heme and light activatable porphyrins. Such man-made cofactor-binding oxidoreductases, designed from first principles with minimal reference to natural protein sequences, are known as maquettes. This design is an adaptable frame aiming to uncover core engineering principles governing bioenergetic transmembrane electron-transfer function and recapitulate protein archetypes proposed to represent the origins of photosynthesis. This article is part of a Special Issue entitled Biodesign for Bioenergetics — the design and engineering of electronic transfer cofactors, proteins and protein networks, edited by Ronald L. Koder and J.L. Ross Anderson.

  11. State transfer in highly connected networks and a quantum Babinet principle

    Science.gov (United States)

    Tsomokos, D. I.; Plenio, M. B.; de Vega, I.; Huelga, S. F.

    2008-12-01

    The transfer of a quantum state between distant nodes in two-dimensional networks is considered. The fidelity of state transfer is calculated as a function of the number of interactions in networks that are described by regular graphs. It is shown that perfect state transfer is achieved in a network of size N, whose structure is that of an (N/2)-cross polytope graph, if N is a multiple of 4. The result is reminiscent of the Babinet principle of classical optics. A quantum Babinet principle is derived, which allows for the identification of complementary graphs leading to the same fidelity of state transfer, in analogy with complementary screens providing identical diffraction patterns.
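
    The transfer fidelity referred to here can be computed numerically for any coupling graph from the continuous-time quantum walk propagator; the sketch below is a generic illustration (shown on the trivial two-site path graph, which has perfect state transfer at t = π/2), not the cross-polytope construction analysed in the paper.

```python
import numpy as np
from scipy.linalg import expm

def transfer_fidelity(adjacency, source, target, t):
    # Continuous-time quantum walk with the adjacency matrix as Hamiltonian:
    # fidelity = |<target| exp(-i A t) |source>|^2.
    U = expm(-1j * adjacency * t)
    return abs(U[target, source]) ** 2

A = np.array([[0.0, 1.0], [1.0, 0.0]])        # two coupled sites (path graph P2)
print(transfer_fidelity(A, 0, 1, np.pi / 2))  # ~1.0: perfect state transfer
```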

  12. The Principle of Least Interest: Inequality in Emotional Involvement in Romantic Relationships

    Science.gov (United States)

    Sprecher, Susan; Schmeeckle, Maria; Felmlee, Diane

    2006-01-01

    Data from a longitudinal sample of dating couples (some of whom married) were analyzed to test the aspect of Waller's (1938) principle of least interest that states that unequal emotional involvement between romantic partners has implications for relationship quality and stability. Both members of the couples were asked multiple times over several…

  13. A recovery principle provides insight into auxin pattern control in the Arabidopsis root

    Science.gov (United States)

    Moore, Simon; Liu, Junli; Zhang, Xiaoxian; Lindsey, Keith

    2017-01-01

    Regulated auxin patterning provides a key mechanism for controlling root growth and development. We have developed a data-driven mechanistic model using realistic root geometry and formulated a principle to theoretically investigate quantitative auxin pattern recovery following auxin transport perturbation. This principle reveals that auxin patterning is potentially controlled by multiple combinations of interlinked levels and localisation of influx and efflux carriers. We demonstrate that (1) when efflux carriers maintain polarity but change levels, maintaining the same auxin pattern requires non-uniform and polar distribution of influx carriers; (2) the emergence of the same auxin pattern, from different levels of influx carriers with the same nonpolar localisation, requires simultaneous modulation of efflux carrier level and polarity; and (3) multiple patterns of influx and efflux carriers for maintaining an auxin pattern do not have spatially proportional correlation. This reveals that auxin pattern formation requires coordination between influx and efflux carriers. We further show that the model makes various predictions that can be experimentally validated. PMID:28220889

  14. A New Principle in Physics: the Principle 'Finiteness', and Some Consequences

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2010-01-01

    In this paper I propose a new principle in physics: the principle of 'finiteness'. It stems from the definition of physics as a science that deals (among other things) with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, the principle of finiteness postulates that the mathematical formulation of 'legitimate' laws of physics should prevent exactly zero or infinite solutions. Some consequences of the principle of finiteness are discussed, in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The consequences are derived independently of any other theory or principle in physics. I propose 'finiteness' as a postulate (like the constancy of the speed of light in vacuum, 'c'), as opposed to a notion whose validity has to be corroborated by, or derived theoretically or experimentally from other facts, theories, or principles.

  15. Multiples waveform inversion

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI are, respectively, faster and more accurate than those of FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. The possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.

  16. On the Controllability Principles and the Construction of Uncontrollability

    DEFF Research Database (Denmark)

    Friis, Ivar; Gevoll, Linn; Hansen, Allan

    A key challenge when it comes to coping with the controllability principle is to understand why and how uncontrollability occurs. In this paper we draw on the Actor-Network Theory to provide more insight into why and how uncontrollability emerges in organizations. By means of an in-depth case study...... of a wide range of heterogeneous actors that connect in multiple ways and that in combination form a complex network of relations which affect delivery time performance. Our study adds to research on the controllability principle. Whereas prior research has characterized uncontrollability as determined...... uncontrollability more specifically is about. Thus, our study adds to as well as elaborates on prior research. We illustrate how uncontrollability arises at the intersection of several heterogeneous elements such as the product information systems, quality management, incentive systems, outsourcing and customers. We...

  17. Risk Management for Point-of-Care Testing

    OpenAIRE

    James, H. Nichols

    2014-01-01

    Point-of-care testing (POCT) is growing in popularity, and with this growth comes an increased chance of errors. Risk management is a way to reduce errors. Originally developed for the manufacturing industry, risk management principles have application for improving the quality of test results in the clinical laboratory. The Clinical and Laboratory Standards Institute (CLSI), EP23-A Laboratory Quality Control based on Risk Management guideline, introduces risk management to the clinical labor...

  18. Acceleration of cardiovascular MRI using parallel imaging: basic principles, practical considerations, clinical applications and future directions

    International Nuclear Information System (INIS)

    Niendorf, T.; Sodickson, D.

    2006-01-01

    Cardiovascular Magnetic Resonance (CVMR) imaging has proven to be of clinical value for non-invasive diagnostic imaging of cardiovascular diseases. CVMR requires rapid imaging; however, the speed of conventional MRI is fundamentally limited due to its sequential approach to image acquisition, in which data points are collected one after the other in the presence of sequentially-applied magnetic field gradients. Parallel imaging techniques use arrays of radiofrequency detector coils to acquire multiple data points simultaneously, and thereby to increase imaging speed and efficiency beyond the limits of purely gradient-based approaches. The resulting improvements in imaging speed can be used in various ways, including shortening long examinations, improving spatial resolution and anatomic coverage, improving temporal resolution, enhancing image quality, overcoming physiological constraints, detecting and correcting for physiologic motion, and streamlining work flow. Examples of these strategies will be provided in this review, after some of the fundamentals of parallel imaging methods now in use for cardiovascular MRI are outlined. The emphasis will rest upon basic principles and clinical state-of-the-art cardiovascular MRI applications. In addition, practical aspects such as signal-to-noise ratio considerations, tailored parallel imaging protocols and potential artifacts will be discussed, and current trends and future directions will be explored. (orig.)

  19. Multiple x-ray diffraction simulation and applications

    International Nuclear Information System (INIS)

    Costa, C.A.B.S. da.

    1989-09-01

    A computer program (MULTX) was implemented to simulate X-ray multiple diffraction diagrams in Renninger geometries. The program uses the theory of X-ray multiple diffraction for imperfect crystals. The iterative calculation of the intensities is based on the general term of a Taylor series, and the primary beam power is expanded as a function of the beam penetration depth x below the crystal surface. This development makes it possible to take into account the simultaneous interaction of the beams involved in the multiple diffraction phenomenon. The simulated diagrams are calculated point-to-point, and the tests for Si and GaAs showed good reproduction of the experimental diagrams for different primary reflections. (L.C.J.A.)

  20. Point defects and atomic transport in crystals

    International Nuclear Information System (INIS)

    Lidiard, A.B.

    1981-02-01

    There are two principal aspects to the theory of atomic transport in crystals as caused by the action of point defects, namely (1) the calculation of relevant properties of the point defects (energies and other thermodynamic characteristics of the different possible defects, activation energies and other mobility parameters) and (2) the statistical mechanics of assemblies of defects, both equilibrium and non-equilibrium assemblies. In the five lectures given here both these aspects are touched on. The first two lectures are concerned with the calculation of relevant point defect properties, particularly in ionic crystals. The first lecture is more general, the second is concerned particularly with some recent calculations of the free volumes of formation of defects in various ionic solids; these solve a rather long-standing problem in this area. The remaining three lectures are concerned with the kinetic theory of defects mainly in relaxation, drift and diffusion situations

  1. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  2. Quantum Action Principle with Generalized Uncertainty Principle

    OpenAIRE

    Gu, Jie

    2013-01-01

    One of the common features in all promising candidates of quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this modification, and was applied to the calculation of the kernel of a free particle, partly recovering the result previously studied using path integral.

  3. THE PRINCIPLE OF PROTECTION AND PRESERVATION OF MARINE ENVIRONMENT, AS THE BASIS FOR ENSURING THE PROTECTION OF THE OCEANS FROM POLLUTION

    OpenAIRE

    Ksenia Borisovna Valiullina*, Damir Hamitovich Valeev

    2017-01-01

    International maritime law is the set of principles and norms governing relations between States on the use of the waters and resources of the World Ocean. Generally recognized principles of the international law of the sea form the basis of international relations in this field. Agreements concluded between States are evaluated, first of all, from the point of view of their conformity with these main international legal principles. The principles not only define and specify basic rights

  4. The Search for Underlying Principles of Health Impact Assessment: Progress and Prospects; Comment on “Investigating Underlying Principles to Guide Health Impact Assessment”

    Directory of Open Access Journals (Sweden)

    Mirko S. Winkler

    2014-07-01

    Health Impact Assessment (HIA) is a relatively young field of endeavour, and hence, future progress will depend on the planning, implementation and rigorous evaluation of additional HIAs of projects, programmes and policies the world over. In the June 2014 issue of the International Journal of Health Policy and Management, Fakhri and colleagues investigated underlying principles of HIA through a comprehensive review of the literature and expert consultation. With an emphasis on the Islamic Republic of Iran, the authors identified multiple issues that are relevant for guiding HIA practice. At the same time, the study unravelled current shortcomings in the understanding and definition of HIA principles and best practice at national, regional, and global levels. In this commentary we scrutinise the research presented, highlight strengths and limitations, and discuss the findings in the context of other recent attempts to guide HIA.

  5. Rotating detectors and Mach's principle

    Energy Technology Data Exchange (ETDEWEB)

    Paola, R.D.M. de; Svaiter, N.F

    2000-08-01

    In this work we consider a quantum version of Newton's bucket experiment in a flat spacetime: we take an Unruh-DeWitt detector in interaction with a real massless scalar field. We calculate the detector's excitation rate when it is uniformly rotating around some fixed point and the field is prepared in the Minkowski vacuum and also when the detector is inertial and the field is in the Trocheries-Takeno vacuum state. These results are compared and the relations with Mach's principle are discussed. (author)

  6. Principles of developing the bench mark net for radioecological monitoring in the territory of Byelorussia

    International Nuclear Information System (INIS)

    Doroshkevich, M.N.

    1991-01-01

    The proposed principles for developing the bench mark net for radioecological monitoring include a regular pattern of reference point placement, a hierarchy of the bench mark net, accounting for the land-tenure structure and for the landscape and geochemical conditions of radionuclide migration, as well as a passport system and informational adequacy. An expression has been obtained for determining the locations of the sampling points

  7. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  8. When should we recommend use of dual time-point and delayed time-point imaging techniques in FDG PET?

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Gang [Philadelphia VA Medical Center, Department of Radiology, Philadelphia, PA (United States); Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Torigian, Drew A.; Alavi, Abass [Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Zhuang, Hongming [Children' s Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States)

    2013-05-15

    FDG PET and PET/CT are now widely used in oncological imaging for tumor characterization, staging, restaging, and response evaluation. However, numerous benign etiologies may cause increased FDG uptake indistinguishable from that of malignancy. Multiple studies have shown that dual time-point imaging (DTPI) of FDG PET may be helpful in differentiating malignancy from benign processes. However, exceptions exist, and some studies have demonstrated significant overlap of FDG uptake patterns between benign and malignant lesions on delayed time-point images. In this review, we summarize our experience and opinions on the value of DTPI and delayed time-point imaging in oncology, with a review of the relevant literature. We believe that the major value of DTPI and delayed time-point imaging is the increased sensitivity due to continued clearance of background activity and continued FDG accumulation in malignant lesions, if the same diagnostic criteria (as in the initial standard single time-point imaging) are used. The specificity of DTPI and delayed time-point imaging depends on multiple factors, including the prevalence of malignancies, the patient population, and the cut-off values (either SUV or retention index) used to define a malignancy. Thus, DTPI and delayed time-point imaging would be more useful if performed for evaluation of lesions in regions with significant background activity clearance over time (such as the liver, the spleen, the mediastinum), and if used in the evaluation of the extent of tumor involvement rather than in the characterization of the nature of any specific lesion. Acute infectious and non-infectious inflammatory lesions remain as the major culprit for diminished diagnostic performance of these approaches (especially in tuberculosis-endemic regions). Tumor heterogeneity may also contribute to inconsistent performance of DTPI. The authors believe that selective use of DTPI and delayed time-point imaging will improve diagnostic accuracy and

  9. Electrical properties of improper ferroelectrics from first principles

    Science.gov (United States)

    Stengel, Massimiliano; Fennie, Craig J.; Ghosez, Philippe

    2012-09-01

    We study the interplay of structural and polar distortions in hexagonal YMnO3 and short-period PbTiO3/SrTiO3 (PTO/STO) superlattices by means of first-principles calculations at constrained electric displacement field D. We find that in YMnO3 the tilts of the oxygen polyhedra produce a robustly polar ground state, which persists at any choice of the electrical boundary conditions. Conversely, in PTO/STO the antiferrodistortive instabilities alone do not break inversion symmetry, and open-circuit boundary conditions restore a nonpolar state. We suggest that this qualitative difference naturally provides a route to rationalizing the concept of “improper ferroelectricity” from the point of view of first-principles theory. We discuss the implications of our arguments for the design of novel multiferroic materials with enhanced functionalities and for the symmetry analysis of the phase transitions.

  10. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
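
    As a small illustration of the non-homogeneous Poisson processes the book covers (not an algorithm reproduced from it), the following sketch samples a one-dimensional process by Lewis-Shedler thinning; the intensity function, its bound and the time horizon are arbitrary assumptions.

```python
import numpy as np

def sample_nhpp(intensity, lam_max, t_max, rng=np.random.default_rng(1)):
    """Thinning (Lewis-Shedler): sample a 1-D non-homogeneous Poisson process
    whose rate satisfies intensity(t) <= lam_max on [0, t_max]."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)        # candidate from a homogeneous process
        if t > t_max:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:  # keep with probability lambda(t)/lam_max
            events.append(t)

arrivals = sample_nhpp(lambda t: 2.0 + 1.5 * np.sin(t), lam_max=3.5, t_max=10.0)
print(len(arrivals), "events on [0, 10]")
```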

  11. Completely boundary-free minimum and maximum principles for neutron transport and their least-squares and Galerkin equivalents

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1982-01-01

    Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)

  12. Combining different types of scale space interest points using canonical sets

    NARCIS (Netherlands)

    Kanters, F.M.W.; Denton, T.; Shokoufandeh, A.; Florack, L.M.J.; Haar Romenij, ter B.M.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

    Scale space interest points capture important photometric and deep structure information of an image. The information content of such points can be made explicit using image reconstruction. In this paper we will consider the problem of combining multiple types of interest points used for image

  13. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1976-01-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented; the deductive approach appears here for the first time in the literature. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then re-arranged into the superposition principle. The inductive proof is simpler than Rostoker's although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  14. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1975-12-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution then rearranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  15. The c equivalence principle and the correct form of writing Maxwell's equations

    International Nuclear Information System (INIS)

    Heras, Jose A

    2010-01-01

    It is well known that the speed c_u = 1/√(ε₀μ₀) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c_u = c, which we have called the c equivalence principle. In this paper we point out that ∇×E = -[1/(ε₀μ₀c²)]∂B/∂t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.

  16. A spatial theory for emergent multiple predator-prey interactions in food webs.

    Science.gov (United States)

    Northfield, Tobin D; Barton, Brandon T; Schmitz, Oswald J

    2017-09-01

    Predator-prey interaction is inherently spatial because animals move through landscapes to search for and consume food resources and to avoid being consumed by other species. The spatial nature of species interactions necessitates integrating spatial processes into food web theory and evaluating how predators combine to impact their prey. Here, we present a spatial modeling approach that examines emergent multiple predator effects on prey within landscapes. The modeling is inspired by the habitat domain concept derived from empirical synthesis of spatial movement and interactions studies. Because these principles are motivated by synthesis of short-term experiments, it remains uncertain whether spatial contingency principles hold in dynamical systems. We address this uncertainty by formulating dynamical systems models, guided by core habitat domain principles, to examine long-term multiple predator-prey spatial dynamics. To describe habitat domains, we use classical niche concepts describing resource utilization distributions, and assume species interactions emerge from the degree of overlap between species. The analytical results generally align with those from empirical synthesis and present a theoretical framework capable of demonstrating multiple predator effects that does not depend on the small spatial or temporal scales typical of mesocosm experiments, and help bridge between empirical experiments and long-term dynamics in natural systems.
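
    A hedged illustration of interactions emerging from the degree of overlap between utilization distributions: the classical MacArthur-Levins overlap coefficient for two Gaussian habitat domains. This is not the authors' dynamical model, and the means and widths are made-up values.

```python
import numpy as np
from scipy.integrate import quad

def gaussian_utilization(mu, sigma):
    # Resource/space utilization distribution modelled as a 1-D Gaussian.
    return lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def overlap_coefficient(u_i, u_j):
    # MacArthur-Levins style coefficient: integral(u_i * u_j) / integral(u_i^2).
    num, _ = quad(lambda x: u_i(x) * u_j(x), -np.inf, np.inf)
    den, _ = quad(lambda x: u_i(x) ** 2, -np.inf, np.inf)
    return num / den

predator = gaussian_utilization(mu=0.0, sigma=1.0)   # e.g. a ground-foraging predator
prey = gaussian_utilization(mu=1.5, sigma=1.0)       # prey habitat domain shifted upward
print(overlap_coefficient(predator, prey))           # exp(-1.5**2 / 4) ~ 0.57
```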

  17. Toward the full and proper implementation of Jordan's Principle: An elusive goal to date.

    Science.gov (United States)

    Blackstock, Cindy

    2016-01-01

    First Nations children experience service delays, disruptions and denials due to jurisdictional payment disputes within and between federal and provincial/territorial governments. The House of Commons sought to ensure First Nations children could access government services on the same terms as other children when it unanimously passed a private members motion in support of Jordan's Principle in 2007. Jordan's Principle states that when a jurisdictional dispute arises regarding public services for a First Nations child that are otherwise available to other children, the government of first contact pays for the service and addresses payment disputes later. Unfortunately, the federal government adopted a definition of Jordan's Principle that was so narrow (complex medical needs with multiple service providers) that no child ever qualified. This narrow definition has been found to be unlawful by the Federal Court of Canada and the Canadian Human Rights Tribunal. The present commentary describes Jordan's Principle, the legal cases that have considered it and the implications of those decisions for health care providers.

  18. Pointright: a system to redirect mouse and keyboard control among multiple machines

    Science.gov (United States)

    Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA

    2008-09-30

    The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.
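
    The edge-redirection behaviour can be illustrated with a toy routing table; this is a hypothetical sketch of the idea, not the patented PointRight implementation, and the screen names, layout and equal-width assumption are invented for the example.

```python
# Maps (screen, edge crossed) -> adjacent screen that should receive input control.
LAYOUT = {("left", "right"): "center", ("center", "right"): "right",
          ("right", "left"): "center", ("center", "left"): "left"}

def route_pointer(screen, x, width):
    """Return (screen, x) after the cursor reaches horizontal position x on a
    screen of the given pixel width (all screens assumed equally wide)."""
    if x < 0 and (screen, "left") in LAYOUT:
        return LAYOUT[(screen, "left")], x + width   # reappear near the neighbour's right edge
    if x >= width and (screen, "right") in LAYOUT:
        return LAYOUT[(screen, "right")], x - width  # reappear near the neighbour's left edge
    return screen, min(max(x, 0), width - 1)         # clamp at a dead edge

print(route_pointer("center", 1930, 1920))   # -> ('right', 10): control moves to the right screen
```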

  19. Notice of Violation of IEEE Publication PrinciplesJoint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    Science.gov (United States)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles"Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath"by Lei Li and Jianhao Hu,in the IEEE Transactions on Nuclear Science, vol.57, no.6, Dec. 2010, pp. 3779-3786After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.This paper contains substantial duplication of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles:"Multiple Error Detection and Correction Based on Redundant Residue Number Systems"by Vik Tor Goh and M.U. Siddiqi,in the IEEE Transactions on Communications, vol.56, no.3, March 2008, pp.325-330"A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction"by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol.39, no.1, Jan 1992, pp.8-17In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance for datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallel and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed

  20. Relativistic dynamics of point magnetic moment

    Energy Technology Data Exchange (ETDEWEB)

    Rafelski, Johann; Formanek, Martin; Steinmetz, Andrew [The University of Arizona, Department of Physics, Tucson, AZ (United States)

    2018-01-15

    The covariant motion of a classical point particle with magnetic moment in the presence of (external) electromagnetic fields is revisited. We are interested in understanding extensions to the Lorentz force involving point particle magnetic moment (Stern-Gerlach force) and how the spin precession dynamics is modified for consistency. We introduce spin as a classical particle property inherent to Poincare symmetry of space-time. We propose a covariant formulation of the magnetic force based on a 'magnetic' 4-potential and show how the point particle magnetic moment relates to the Amperian (current loop) and Gilbertian (magnetic monopole) descriptions. We show that covariant spin precession lacks a unique form and discuss the connection to g - 2 anomaly. We consider the variational action principle and find that a consistent extension of the Lorentz force to include magnetic spin force is not straightforward. We look at non-covariant particle dynamics, and present a short introduction to the dynamics of (neutral) particles hit by a laser pulse of arbitrary shape. (orig.)

  1. Testing the Copernican and Cosmological Principles in the local universe with galaxy surveys

    International Nuclear Information System (INIS)

    Sylos Labini, Francesco; Baryshev, Yuri V.

    2010-01-01

    Cosmological density fields are assumed to be translationally and rotationally invariant, avoiding any special point or direction, thus satisfying the Copernican Principle. A spatially inhomogeneous matter distribution can be compatible with the Copernican Principle but not with the stronger version of it, the Cosmological Principle, which requires the additional hypothesis of spatial homogeneity. We establish criteria for testing that a given density field, in a finite sample at low redshifts, is statistically and/or spatially homogeneous. The basic question to be considered is whether a distribution is, at different spatial scales, self-averaging. This can be achieved by studying the probability density function of conditional fluctuations. We find that galaxy structures in the SDSS samples, the largest currently available, are spatially inhomogeneous but statistically homogeneous and isotropic up to ∼ 100 Mpc/h. Evidence for the breaking of self-averaging is found up to the largest scales probed by the SDSS data. The comparison between the results obtained in volumes of different size allows us to unambiguously conclude that the lack of self-averaging is induced by finite-size effects due to long-range correlated fluctuations. We finally discuss the relevance of these results from the point of view of cosmological modeling

  2. Point defect weakened thermal contraction in monolayer graphene.

    Science.gov (United States)

    Zha, Xian-Hu; Zhang, Rui-Qin; Lin, Zijing

    2014-08-14

    We investigate the thermal expansion behaviors of monolayer graphene and three configurations of graphene with point defects, namely the replacement of one carbon atom with a boron or nitrogen atom, or of two neighboring carbon atoms by boron-nitrogen atoms, based on calculations using first-principles density functional theory. It is found that the thermal contraction of monolayer graphene is significantly decreased by point defects. Moreover, the corresponding temperature for the negative linear thermal expansion coefficient with the maximum absolute value is reduced. The cause is determined to be point defects that enhance the mechanical strength of graphene and then reduce the amplitude and phonon frequency of the out-of-plane acoustic vibration mode. Such defect weakening of graphene thermal contraction will be useful in nanotechnology to diminish the mismatch or strain between graphene and its substrate.

  3. APPLYING THE PRINCIPLES OF ACCOUNTING IN

    OpenAIRE

    NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science

    2015-01-01

    The application of accounting principles (accounting principle on accrual basis; principle of business continuity; method consistency principle; prudence principle; independence principle; the principle of separate valuation of assets and liabilities; intangibility principle; non-compensation principle; the principle of substance over form; the principle of threshold significance) to companies that are in bankruptcy procedure has a number of particularities. Thus, some principl...

  4. Local conservation laws for principal chiral fields (d=1)

    International Nuclear Information System (INIS)

    Cherednik, I.V.

    1979-01-01

    The Bäcklund transformation for chiral fields in two-dimensional Minkowski space is found. As a result, an infinite series of conservation laws for principal chiral O_n fields (d=1) has been constructed. It is shown that these laws are local, and that connected with them is an infinite series of global invariants which do not depend on xi and eta and which decrease rather rapidly along xi (or along eta) for the corresponding solutions (xi, eta being the light-cone coordinates). It is noted that with the help of the proposed construction it is possible to obtain conservation laws for principal chiral G fields, by embedding G in a suitable orthogonal group. The symmetry permits xi and eta to be exchanged. The construction of the conservation laws may be carried out without the supposition that lambda has multiplicity equal to 1; however, the proof of locality does not carry over to the laws thus obtained

  5. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  6. Testing to fulfill HACCP (Hazard Analysis Critical Control Points) requirements: principles and examples.

    Science.gov (United States)

    Gardner, I A

    1997-12-01

    On-farm HACCP (hazard analysis critical control points) monitoring requires cost-effective, yet accurate and reproducible tests that can determine the status of cows, milk, and the dairy environment. Tests need to be field-validated, and their limitations need to be established so that appropriate screening strategies can be initiated and test results can be rationally interpreted. For infections and residues of low prevalence, tests or testing strategies that are highly specific help to minimize false-positive results and excessive costs to the dairy industry. The determination of the numbers of samples to be tested in HACCP monitoring programs depends on the specific purpose of the test and the likely prevalence of the agent or residue at the critical control point. The absence of positive samples from a herd test should not be interpreted as freedom from a particular agent or residue unless the entire herd has been tested with a test that is 100% sensitive. The current lack of field-validated tests for most of the chemical and infectious agents of concern makes it difficult to ensure that the stated goals of HACCP programs are consistently achieved.
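
    For the sample-size question raised above, a standard herd-level calculation (assuming a perfect test and a large population, in line with the caveats in the abstract) gives the number of random samples needed to detect at least one positive at a given prevalence; the sketch below is illustrative, not a prescription from the article.

```python
import math

def herd_sample_size(prevalence, confidence=0.95):
    """Minimum number of random samples needed to detect at least one positive
    with the stated confidence, assuming a perfect test and a large herd."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

print(herd_sample_size(0.05))   # ~59 samples for 5% prevalence at 95% confidence
```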

  7. Cosmological principle

    International Nuclear Information System (INIS)

    Wesson, P.S.

    1979-01-01

    The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed. It is called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution
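
    As a quick numerical illustration of the last of these dimensionless numbers (an example of ours, not taken from the paper), the Schwarzschild factor for one solar mass contained within a region of length 10^9 m is:

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m s^-1
m, l = 1.989e30, 1.0e9   # one solar mass inside a region of length 10^9 m
print(2 * G * m / (c**2 * l))   # ~3e-6, dimensionless as required
```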

  8. First-principles study of point defects in solar cell semiconductor CuI

    International Nuclear Information System (INIS)

    Chen, Hui; Wang, Chong-Yu; Wang, Jian-Tao; Wu, Ying; Zhou, Shao-Xiong

    2013-01-01

    Hybrid density functional theory is used to study the formation energies and transition levels of the point defects V_Cu, V_I, I_Cu, Cu_I, and O_I in CuI. It is shown that the Heyd–Scuseria–Ernzerhof (HSE06) method can accurately describe the band gap of bulk CuI. As a solar cell material, we find that p-type semiconductor CuI can be obtained under the iodine-rich and copper-poor conditions. Our results are in good agreement with experiment and provide an excellent account for tuning the structural and electronic properties of CuI
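
    The formation energies and transition levels mentioned here are conventionally obtained from supercell total energies; a generic form of the standard expression (not specific to this paper) is

```latex
E_f[X^q] = E_{\mathrm{tot}}[X^q] - E_{\mathrm{tot}}[\mathrm{bulk}]
           - \sum_i n_i \mu_i + q\,(E_{\mathrm{VBM}} + E_F) + E_{\mathrm{corr}},
```

    where n_i atoms with chemical potential μ_i are added (n_i > 0) or removed (n_i < 0), E_F is the Fermi level referenced to the valence-band maximum, and E_corr is a finite-size electrostatic correction; the iodine-rich, copper-poor conditions quoted above enter through the choice of μ_Cu and μ_I.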

  9. Accuracy & Computational Considerations for Wide-Angle One-way Seismic Propagators and Multiple Scattering by Invariant Embedding

    Science.gov (United States)

    Thomson, C. J.

    2004-12-01

    Pseudodifferential operators (PSDOs) yield in principle exact one-way seismic wave equations, which are attractive both conceptually and for their promise of computational efficiency. The one-way operators can be extended to include multiple-scattering effects, again in principle exactly. In practice approximations must be made and, as an example, the variable-wavespeed Helmholtz equation for scalar waves in two space dimensions is here factorized to give the one-way wave equation. This simple case permits clear identification of a sequence of physically reasonable approximations to be used when the mathematically exact PSDO one-way equation is implemented on a computer. As intuition suggests, these approximations hinge on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called 'standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. The decision on whether a slow or a fast Fourier transform code should be used rests upon how many lateral model parameters are truly distinct. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator suggests the use of larger discrete step sizes, and it can also be used to approach phase-screen-like approximations (though the latter are not the main interest here). Numerical comparisons with finite-difference solutions will be presented in order to assess the approximations being made and to gain an understanding
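
    The exponential one-way propagator and phase-screen-like approximations mentioned above can be illustrated, far more crudely than with the PSDO machinery of the abstract, by a split-step Fourier sketch: diffraction is handled in the wavenumber domain with a reference wavespeed, and lateral velocity variations are applied as thin phase screens. All model parameters below are assumptions for illustration only.

```python
import numpy as np

def split_step_oneway(u0, c, dx, dz, nz, freq, c0):
    """Minimal split-step (phase-screen style) one-way propagator in 2-D:
    u0 is the monochromatic wavefield at z = 0, c[iz, ix] the wavespeed model."""
    omega = 2.0 * np.pi * freq
    k0 = omega / c0
    kx = 2.0 * np.pi * np.fft.fftfreq(u0.size, d=dx)
    kz = np.sqrt((k0 ** 2 - kx ** 2).astype(complex))   # evanescent components decay
    u = u0.astype(complex)
    for iz in range(nz):
        u = np.fft.ifft(np.exp(1j * kz * dz) * np.fft.fft(u))     # homogeneous diffraction step
        u *= np.exp(1j * omega * (1.0 / c[iz] - 1.0 / c0) * dz)   # thin phase screen
    return u

nx, dx = 256, 10.0
x = np.arange(nx) * dx
u0 = np.exp(-((x - x.mean()) / 100.0) ** 2)           # Gaussian beam at z = 0
c = 2000.0 + 5.0 * np.arange(50)[:, None] + 0.0 * x   # smooth, depth-dependent model
print(np.abs(split_step_oneway(u0, c, dx, dz=10.0, nz=50, freq=15.0, c0=2000.0)).max())
```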

  10. The goal of ape pointing.

    Science.gov (United States)

    Halina, Marta; Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.

  11. Possible THz gain in superlattices at a stable operation point

    DEFF Research Database (Denmark)

    Wacker, Andreas; Allen, S. J.; Scott, J. S.

    1997-01-01

    We demonstrate that semiconductor superlattices may provide gain at THz frequencies at an operation point which is stable against fluctuations at lower frequency. While an explicit experimental demonstration for the sample considered could not be achieved, the underlying principle of quantum response is quite general and may prove successful for differently designed superlattices.

  12. ‘Transnationalising’ Ne Bis In Idem: How the Rule of Ne Bis In Idem Reveals the Principle of Personal Legal Certainty

    Directory of Open Access Journals (Sweden)

    Juliette Lelieur

    2013-09-01

    Full Text Available Since Article 54 of the Convention implementing the Schengen Agreement gave the rule of ne bis in idem a transnational dimension, talk of the ‘transnational ne bis in idem principle’ has been commonplace. Thus, when looking for general principles of transnational criminal law, scholars refer to the principle of ‘transnational ne bis in idem’. It is doubtful, however, that ne bis in idem qualifies as a principle of law. It should be regarded, rather, as a rule of criminal procedure, traditionally based on the principle of res judicata. Giving the rule of ne bis in idem a transnational dimension therefore requires either transnationalising the principle of res judicata, or giving the rule of ne bis in idem a new foundation. The principle of res judicata principally serves the credibility of the justice system in a given jurisdiction by prohibiting several tribunals, all acting within the parameters of their jurisdiction, from contradicting each other’s interpretation of the same facts. For this reason, the principle of res judicata does not provide an adequate basis for a transnationalised rule of ne bis in idem. From a human rights perspective, multiple prosecutions against the same person for the same facts collide with protecting individuals against arbitrary judicial treatment. This is true whether the multiple prosecutions all take place in one country or in several different countries. The rule of ne bis in idem could therefore be regarded as a manifestation of the (new) ‘principle of personal legal certainty’.

  13. Detection of uterine MMG contractions using a multiple change point estimator and the K-means cluster algorithm.

    Science.gov (United States)

    La Rosa, Patricio S; Nehorai, Arye; Eswaran, Hari; Lowery, Curtis L; Preissl, Hubert

    2008-02-01

    We propose a single channel two-stage time-segment discriminator of uterine magnetomyogram (MMG) contractions during pregnancy. We assume that the preprocessed signals are piecewise stationary having distribution in a common family with a fixed number of parameters. Therefore, at the first stage, we propose a model-based segmentation procedure, which detects multiple change-points in the parameters of a piecewise constant time-varying autoregressive model using a robust formulation of the Schwarz information criterion (SIC) and a binary search approach. In particular, we propose a test statistic that depends on the SIC, derive its asymptotic distribution, and obtain closed-form optimal detection thresholds in the sense of the Neyman-Pearson criterion; therefore, we control the probability of false alarm and maximize the probability of change-point detection in each stage of the binary search algorithm. We compute and evaluate the relative energy variation [root mean squares (RMS)] and the dominant frequency component [first order zero crossing (FOZC)] in discriminating between time segments with and without contractions. The former consistently detects a time segment with contractions. Thus, at the second stage, we apply a nonsupervised K-means cluster algorithm to classify the detected time segments using the RMS values. We apply our detection algorithm to real MMG records obtained from ten patients admitted to the hospital for contractions with gestational ages between 31 and 40 weeks. We evaluate the performance of our detection algorithm in computing the detection and false alarm rate, respectively, using as a reference the patients' feedback. We also analyze the fusion of the decision signals from all the sensors as in the parallel distributed detection approach.
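
    A much-simplified sketch of the two-stage idea is given below: SIC-based binary segmentation to find change-points, followed by K-means clustering of segment RMS values. A piecewise-constant Gaussian mean-shift model replaces the paper's piecewise autoregressive model, the plain SIC penalty comparison replaces the Neyman-Pearson-calibrated thresholds, and the synthetic signal stands in for MMG data.

```python
# Simplified sketch of the two-stage idea in the record above: SIC-based binary
# segmentation for change-points, then K-means clustering of segment RMS values.
# A Gaussian mean-shift model and the plain SIC comparison are assumptions made
# here for brevity; they replace the paper's piecewise AR model and calibrated test.
import numpy as np
from sklearn.cluster import KMeans

def sic_no_change(x):
    n = len(x)
    return n * np.log(np.var(x) + 1e-12) + 2 * np.log(n)

def sic_change_at(x, k):
    n = len(x)
    rss = np.sum((x[:k] - x[:k].mean()) ** 2) + np.sum((x[k:] - x[k:].mean()) ** 2)
    return n * np.log(rss / n + 1e-12) + 3 * np.log(n)

def binary_segmentation(x, offset=0, min_len=20, cps=None):
    """Recursively split x wherever introducing a change-point lowers the SIC."""
    if cps is None:
        cps = []
    n = len(x)
    if n <= 2 * min_len:
        return cps
    ks = np.arange(min_len, n - min_len)
    scores = np.array([sic_change_at(x, k) for k in ks])
    k_best = ks[np.argmin(scores)]
    if scores.min() < sic_no_change(x):
        cps.append(offset + k_best)
        binary_segmentation(x[:k_best], offset, min_len, cps)
        binary_segmentation(x[k_best:], offset + k_best, min_len, cps)
    return sorted(cps)

# Synthetic signal with two elevated episodes standing in for contractions.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1200)
x[300:500] += 3.0
x[800:950] += 3.0

cps = binary_segmentation(x)
bounds = [0] + cps + [len(x)]
segments = [x[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
rms = np.array([np.sqrt(np.mean(s ** 2)) for s in segments]).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rms)
active = labels == labels[np.argmax(rms)]          # cluster containing the largest RMS
for i in range(len(segments)):
    a, b = bounds[i], bounds[i + 1]
    print(f"segment [{a:4d},{b:4d})  RMS={rms[i, 0]:.2f}  contraction-like={bool(active[i])}")
```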

  14. Detection principles of biological and chemical FET sensors.

    Science.gov (United States)

    Kaisti, Matti

    2017-12-15

    The seminal importance of detecting ions and molecules for point-of-care tests has driven the search for more sensitive, specific, and robust sensors. Electronic detection holds promise for future miniaturized in-situ applications and can be integrated into existing electronic manufacturing processes and technology. The resulting small devices will be inherently well suited for multiplexed and parallel detection. In this review, different field-effect transistor (FET) structures and detection principles are discussed, including label-free and indirect detection mechanisms. The fundamental detection principle governing every potentiometric sensor is introduced, and different state-of-the-art FET sensor structures are reviewed. This is followed by an analysis of electrolyte interfaces and their influence on sensor operation. Finally, the fundamentals of different detection mechanisms are reviewed and some detection schemes are discussed. In the conclusion, current commercial efforts are briefly considered.

  15. Principles for statistical inference on big spatio-temporal data from climate models

    KAUST Repository

    Castruccio, Stefano; Genton, Marc G.

    2018-01-01

    The vast increase in size of modern spatio-temporal datasets has prompted statisticians working in environmental applications to develop new and efficient methodologies that are still able to achieve inference for nontrivial models within an affordable time. Climate model outputs push the limits of inference for Gaussian processes, as their size can easily be larger than 10 billion data points. Drawing from our experience in a set of previous work, we provide three principles for the statistical analysis of such large datasets that leverage recent methodological and computational advances. These principles emphasize the need of embedding distributed and parallel computing in the inferential process.

  16. Principles for statistical inference on big spatio-temporal data from climate models

    KAUST Repository

    Castruccio, Stefano

    2018-02-24

    The vast increase in size of modern spatio-temporal datasets has prompted statisticians working in environmental applications to develop new and efficient methodologies that are still able to achieve inference for nontrivial models within an affordable time. Climate model outputs push the limits of inference for Gaussian processes, as their size can easily be larger than 10 billion data points. Drawing from our experience in a set of previous work, we provide three principles for the statistical analysis of such large datasets that leverage recent methodological and computational advances. These principles emphasize the need of embedding distributed and parallel computing in the inferential process.
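
    As a toy illustration of the last point, embedding parallel computing in the inferential step, the sketch below evaluates a Gaussian log-likelihood block by block across worker processes. Independence between blocks, the iid model and the block sizes are assumptions made for brevity; this is not the methodology of the papers above.

```python
# Toy illustration of parallelising a likelihood evaluation over data blocks.
# Block independence and the iid normal model are assumptions for this sketch
# only; they are not the inferential framework of the records above.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def block_loglik(args):
    """Log-likelihood of one data block under an iid N(mu, sigma^2) model."""
    block, mu, sigma = args
    n = block.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum((block - mu) ** 2) / (2 * sigma**2)

def total_loglik(data_blocks, mu, sigma, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(block_loglik, [(b, mu, sigma) for b in data_blocks]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blocks = [rng.normal(0.3, 1.2, 500_000) for _ in range(8)]   # stand-in for spatio-temporal chunks
    print("log-likelihood:", total_loglik(blocks, mu=0.3, sigma=1.2))
```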

  17. The effect of salutogenic treatment principles on coping with mental health problems: A randomised controlled trial.

    Science.gov (United States)

    Langeland, Eva; Riise, Trond; Hanestad, Berit R; Nortvedt, Monica W; Kristoffersen, Kjell; Wahl, Astrid K

    2006-08-01

    Although the theory of salutogenesis provides a generic understanding of how coping may be created, this theoretical perspective has not been explored sufficiently within research among people suffering from mental health problems. The aim of this study is to investigate the effect of talk-therapy groups based on salutogenic treatment principles on coping with mental health problems. In an experimental design, the participants (residents in the community) were randomly allocated to a coping-enhancing experimental group (n=59) and a control group (n=47) receiving standard care. Coping was measured using the sense of coherence (SOC) questionnaire. Coping improved significantly in the experimental group (+6 points) compared with the control group (-2 points). The manageability component contributed most to this improvement. Talk-therapy groups based on salutogenic treatment principles improve coping among people with mental health problems. Talk-therapy groups based on salutogenic treatment principles may be helpful in increasing coping in the recovery process among people with mental health problems and seem to be applicable to people with various mental health problems.

  18. Multiple-choice test of energy and momentum concepts

    OpenAIRE

    Singh, Chandralekha; Rosengrant, David

    2016-01-01

    We investigate student understanding of energy and momentum concepts at the level of introductory physics by designing and administering a 25-item multiple choice test and conducting individual interviews. We find that most students have difficulty in qualitatively interpreting basic principles related to energy and momentum and in applying them in physical situations.

  19. Myth 6: Cosmetic Use of Multiple Selection Criteria

    Science.gov (United States)

    Friedman-Nimz, Reva

    2009-01-01

    Twenty-five years ago, armed with the courage of her convictions and a respectable collection of empirical evidence, the author articulated what she considered to be a compelling argument against the cosmetic use of multiple selection criteria as a guiding principle for identifying children and youth with high potential. To assess the current…

  20. Zero-Point Energy Constraint for Unimolecular Dissociation Reactions. Giving Trajectories Multiple Chances To Dissociate Correctly.

    Science.gov (United States)

    Paul, Amit K; Hase, William L

    2016-01-28

    A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
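
    The bookkeeping logic of the constraint can be caricatured as follows: a trajectory only counts as dissociated once the product fragment carries at least its ZPE, otherwise it is returned and given another chance, and the accumulated time defines its lifetime. In the sketch the dynamics are replaced by random draws with arbitrary parameters; it is emphatically not a molecular dynamics code.

```python
# Schematic toy of the ZPE-constraint bookkeeping described above. The dynamics
# are replaced by random draws with arbitrary parameters; this is not a
# molecular dynamics simulation of CH4* -> H + CH3.
import numpy as np

rng = np.random.default_rng(2)
ZPE_PRODUCT = 0.80       # assumed product (CH3-like) zero-point energy, arbitrary units
MEAN_ATTEMPT_TIME = 1.0  # assumed mean time between dissociation attempts, arbitrary units

def lifetime_with_zpe_constraint():
    t = 0.0
    while True:
        t += rng.exponential(MEAN_ATTEMPT_TIME)      # time until the next dissociation attempt
        product_vib_energy = rng.exponential(1.0)    # stand-in for the product's vibrational energy
        if product_vib_energy >= ZPE_PRODUCT:        # accept only attempts satisfying the ZPE constraint
            return t                                 # otherwise the trajectory "returns" and tries again

lifetimes = np.array([lifetime_with_zpe_constraint() for _ in range(20000)])
# An exponential lifetime distribution (single rate constant) is what intrinsic
# RRKM behaviour looks like; here it holds by construction of the toy.
print("mean lifetime:", lifetimes.mean())
print("fraction surviving past 2*mean (~exp(-2)):", np.mean(lifetimes > 2 * lifetimes.mean()))
```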

  1. Parallel Beam-Beam Simulation Incorporating Multiple Bunches and Multiple Interaction Regions

    CERN Document Server

    Jones, F W; Pieloni, T

    2007-01-01

    The simulation code COMBI has been developed to enable the study of coherent beam-beam effects in the full collision scenario of the LHC, with multiple bunches interacting at multiple crossing points over many turns. The program structure and input are conceived in a general way which allows arbitrary numbers and placements of bunches and interaction points (IP's), together with procedural options for head-on and parasitic collisions (in the strong-strong sense), beam transport, statistics gathering, harmonic analysis, and periodic output of simulation data. The scale of this problem, once we go beyond the simplest case of a pair of bunches interacting once per turn, quickly escalates into the parallel computing arena, and herein we will describe the construction of an MPI-based version of COMBI able to utilize arbitrary numbers of processors to support efficient calculation of multi-bunch multi-IP interactions and transport. Implementing the parallel version did not require extensive disruption of the basic ...

  2. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
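
    A small numerical check of the block-matrix view is sketched below: a 2x2 grid of blocks multiplies by the same pattern as a 2x2 matrix of numbers. The matrix sizes and the partition are arbitrary choices, not taken from the article.

```python
# Numerical check that blockwise multiplication follows the same pattern as
# ordinary matrix multiplication. Sizes and the 2x2 partition are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 6))
B = rng.normal(size=(6, 2))

# Partition A into a 2x2 grid of blocks and B into a 2x1 grid of blocks.
A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]

# Each block of the result is a sum of block products, exactly like scalar entries.
C_blocks = np.vstack([A11 @ B1 + A12 @ B2,
                      A21 @ B1 + A22 @ B2])

print(np.allclose(C_blocks, A @ B))   # True: blockwise and ordinary products agree
```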

  3. Quantum violation of the pigeonhole principle and the nature of quantum correlations.

    Science.gov (United States)

    Aharonov, Yakir; Colombo, Fabrizio; Popescu, Sandu; Sabadini, Irene; Struppa, Daniele C; Tollaksen, Jeff

    2016-01-19

    The pigeonhole principle: "If you put three pigeons in two pigeonholes, at least two of the pigeons end up in the same hole," is an obvious yet fundamental principle of nature as it captures the very essence of counting. Here however we show that in quantum mechanics this is not true! We find instances when three quantum particles are put in two boxes, yet no two particles are in the same box. Furthermore, we show that the above "quantum pigeonhole principle" is only one of a host of related quantum effects, and points to a very interesting structure of quantum mechanics that was hitherto unnoticed. Our results shed new light on the very notions of separability and correlations in quantum mechanics and on the nature of interactions. It also presents a new role for entanglement, complementary to the usual one. Finally, interferometric experiments that illustrate our effects are proposed.

  4. High-Order Hamilton's Principle and the Hamilton's Principle of High-Order Lagrangian Function

    International Nuclear Information System (INIS)

    Zhao Hongxia; Ma Shanjun

    2008-01-01

    In this paper, based on the theorem of the high-order velocity energy, integration and variation principle, the high-order Hamilton's principle of general holonomic systems is given. Then, three-order Lagrangian equations and four-order Lagrangian equations are obtained from the high-order Hamilton's principle. Finally, the Hamilton's principle of high-order Lagrangian function is given.
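
    For context, the standard higher-order form of Hamilton's principle for a Lagrangian depending on time derivatives up to order n is recalled below in generic notation; the paper's construction via the velocity-energy theorem may use different conventions.

```latex
% Standard higher-order variational principle for a Lagrangian depending on
% derivatives up to order n (generic form; conventions may differ from the paper).
\begin{align}
  \delta \int_{t_1}^{t_2} L\bigl(q,\dot q,\ddot q,\dots,q^{(n)},t\bigr)\,\mathrm{d}t &= 0
  \quad\Longrightarrow\quad
  \sum_{k=0}^{n} (-1)^k \frac{\mathrm{d}^k}{\mathrm{d}t^k}
  \frac{\partial L}{\partial q^{(k)}} = 0,
\end{align}
% with the variations and their time derivatives up to order n-1 vanishing at the endpoints.
```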

  5. Applying Lean principles and Kaizen rapid improvement events in public health practice.

    Science.gov (United States)

    Smith, Gene; Poteat-Godwin, Annah; Harrison, Lisa Macon; Randolph, Greg D

    2012-01-01

    This case study describes a local home health and hospice agency's effort to implement Lean principles and Kaizen methodology as a rapid improvement approach to quality improvement. The agency created a cross-functional team, followed Lean Kaizen methodology, and made significant improvements in scheduling time for home health nurses that resulted in reduced operational costs, improved working conditions, and multiple organizational efficiencies.

  6. Principles for fostering the transdisciplinary development of assistive technologies.

    Science.gov (United States)

    Boger, Jennifer; Jackson, Piper; Mulvenna, Maurice; Sixsmith, Judith; Sixsmith, Andrew; Mihailidis, Alex; Kontos, Pia; Miller Polgar, Janice; Grigorovich, Alisa; Martin, Suzanne

    2017-07-01

    Developing useful and usable assistive technologies often presents complex (or "wicked") challenges that require input from multiple disciplines and sectors. Transdisciplinary collaboration can enable holistic understanding of challenges that may lead to innovative, impactful and transformative solutions. This paper presents generalised principles that are intended to foster transdisciplinary assistive technology development. The paper introduces the area of assistive technology design before discussing general aspects of transdisciplinary collaboration followed by an overview of relevant concepts, including approaches, methodologies and frameworks for conducting and evaluating transdisciplinary working and assistive technology design. The principles for transdisciplinary development of assistive technologies are presented and applied post hoc to the COACH project, an ambient-assisted living technology for guiding completion of activities of daily living by older adults with dementia as an illustrative example. Future work includes the refinement and validation of these principles through their application to real-world transdisciplinary assistive technology projects. Implications for rehabilitation: Transdisciplinarity encourages a focus on real-world 'wicked' problems. A transdisciplinary approach involves transcending disciplinary boundaries and collaborating with interprofessional and community partners (including the technology's intended users) on a shared problem. Transdisciplinarity fosters new ways of thinking about and doing research, development, and implementation, expanding the scope, applicability, and commercial viability of assistive technologies.

  7. Roadside Multiple Objects Extraction from Mobile Laser Scanning Point Cloud Based on DBN

    Directory of Open Access Journals (Sweden)

    LUO Haifeng

    2018-02-01

    Full Text Available This paper proposes a novel algorithm that explores deep belief network (DBN) architectures to extract and recognize roadside facilities (trees, cars and traffic poles) from mobile laser scanning (MLS) point clouds. The proposed method first partitions the raw MLS point cloud into blocks and then removes the ground and building points. In order to partition the off-ground objects into individual objects, off-ground points are organized into an octree structure and clustered into candidate objects based on connected components. To improve segmentation performance on clusters containing overlapped objects, a refinement step using a voxel-based normalized cut is then applied. In addition, a multi-view feature descriptor is generated for each independent roadside facility based on binary images. Finally, a deep belief network (DBN) is trained to extract tree, car and traffic pole objects. Experiments were undertaken to evaluate the validity of the proposed method on two datasets acquired by a Lynx Mobile Mapper System. The precision of the tree, car and traffic pole extraction results was 97.31%, 97.79% and 92.78%, respectively; the recall was 98.30%, 98.75% and 96.77%; the quality was 95.70%, 93.81% and 90.00%; and the F1 measure was 97.80%, 96.81% and 94.73%.
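
    The clustering stage can be caricatured by the sketch below, in which off-ground points are voxelised and grouped by 3D connected components. This stands in for the octree organisation and the voxel-based normalized-cut refinement of the paper; the voxel size and the synthetic point cloud are assumptions.

```python
# Minimal stand-in for the clustering stage described above: off-ground points
# are voxelised and grouped into candidate objects by 3D connected components.
# Voxel size and the synthetic point cloud are assumptions; the octree and
# normalized-cut refinement of the paper are not reproduced here.
import numpy as np
from scipy import ndimage

def cluster_points(points, voxel=0.5):
    """Label each 3D point with the connected-component id of its occupied voxel."""
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True
    labels, n = ndimage.label(grid)          # 3D connected components (face connectivity)
    return labels[tuple(idx.T)], n

# Two synthetic blobs standing in for, e.g., a tree crown and a car.
rng = np.random.default_rng(3)
cloud = np.vstack([rng.normal([0.0, 0.0, 2.0], 0.4, (2000, 3)),
                   rng.normal([8.0, 1.0, 1.0], 0.4, (2000, 3))])
point_labels, n_objects = cluster_points(cloud)
print("candidate objects found:", n_objects)
```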

  8. General principles of passive radar signature reducing – stealth technology and its applications

    Directory of Open Access Journals (Sweden)

    Alexandru Marius PANAIT

    2010-03-01

    Full Text Available The paper presents principles and technologies for passive radar signature reduction and discusses ways to implement stealth characteristics in general vehicle design. Stealth is a major requirement for all current-generation military vehicle designs and also a strong selling point for various aircraft and UAVs.

  9. Properties of gases, liquids, and solutions principles and methods

    CERN Document Server

    Mason, Warren P

    2013-01-01

    Physical Acoustics: Principles and Methods, Volume II, Part A: Properties of Gases, Liquids, and Solutions considers high-frequency sound waves in gases, liquids, and solids, which have proven to be effective tools for examining molecular, domain-wall, and other types of motion. The selection first offers information on the transmission of sound waves in gases at very low pressures and the phenomenological theory of relaxation phenomena in gases. Topics include free-molecule propagation, phenomenological thermodynamics of irreversible processes, and simultaneous multiple relaxation processes

  10. A point-wise fiber Bragg grating displacement sensing system and its application for active vibration suppression of a smart cantilever beam subjected to multiple impact loadings

    International Nuclear Information System (INIS)

    Chuang, Kuo-Chih; Ma, Chien-Ching; Liao, Heng-Tseng

    2012-01-01

    In this work, active vibration suppression of a smart cantilever beam subjected to disturbances from multiple impact loadings is investigated with a point-wise fiber Bragg grating (FBG) displacement sensing system. An FBG demodulator is employed in the proposed fiber sensing system to dynamically demodulate the responses obtained by the FBG displacement sensor with high sensitivity. To investigate the ability of the proposed FBG displacement sensor as a feedback sensor, velocity feedback control and delay control are employed to suppress the vibrations of the first three bending modes of the smart cantilever beam. To improve the control performance for the first bending mode when the cantilever beam is subjected to an impact loading, we improve the conventional velocity feedback controller by tuning the control gain online with the aid of information from a higher vibration mode. Finally, active control of vibrations induced by multiple impact loadings due to a plastic ball is performed with the improved velocity feedback control. The experimental results show that active vibration control of smart structures subjected to disturbances such as impact loadings can be achieved by employing the proposed FBG sensing system to feed back out-of-plane point-wise displacement responses with high sensitivity. (paper)
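
    The control idea can be illustrated on a single mode, as sketched below: the first bending mode is modelled as a lightly damped oscillator, an impact is applied, and the actuator force is proportional to the negative of the measured velocity. The modal parameters, gain and impact are assumed values; the FBG sensing chain itself is not modelled.

```python
# Single-mode sketch of the velocity feedback idea in the record above: the first
# bending mode is modelled as a mass-spring-damper, an impact is applied, and the
# actuator force is u = -g * velocity. Modal parameters, gain and impact duration
# are assumptions for illustration; the FBG sensor and beam are not modelled.
import numpy as np

wn = 2 * np.pi * 20.0     # first-mode natural frequency (20 Hz, assumed)
zeta = 0.005              # light structural damping (assumed)
gain = 50.0               # velocity feedback gain (assumed)
dt, T = 1e-4, 1.0
n = int(T / dt)

def simulate(g):
    x, v = 0.0, 0.0
    out = np.empty(n)
    for i in range(n):
        f = 1000.0 if i * dt < 1e-3 else 0.0           # short impact loading
        a = f - 2 * zeta * wn * v - wn**2 * x - g * v   # unit modal mass
        v += a * dt                                     # semi-implicit Euler step
        x += v * dt
        out[i] = x
    return out

open_loop = simulate(0.0)
closed_loop = simulate(gain)
print("RMS displacement after 0.5 s, open vs closed:",
      np.sqrt(np.mean(open_loop[n // 2:] ** 2)),
      np.sqrt(np.mean(closed_loop[n // 2:] ** 2)))
```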

  11. Exploring point-cloud features from partial body views for gender classification

    Science.gov (United States)

    Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga

    2012-06-01

    In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success from our previous investigation was based on extracting features from full body coverage which required integration of multiple camera images. With the full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty for gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. Our initial results provide the basis for further
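
    A sketch of the concentric-cylinder histogram features is given below: with the vertical axis assumed known, points are binned by radial distance from the axis and by height, and the bin counts form the feature vector. The synthetic point cloud and the bin edges are assumptions; the CAESAR scans are not reproduced here.

```python
# Sketch of concentric-cylinder histogram features: with an assumed vertical axis,
# points are binned by radial distance from the axis and by height, and the bin
# counts form the feature vector. The synthetic cloud and bin edges are assumptions.
import numpy as np

def cylinder_histogram(points, axis_xy, r_edges, z_edges):
    """Counts of points in (radius band, height band) cells around a vertical axis."""
    r = np.hypot(points[:, 0] - axis_xy[0], points[:, 1] - axis_xy[1])
    hist, _, _ = np.histogram2d(r, points[:, 2], bins=[r_edges, z_edges])
    return hist.ravel()

rng = np.random.default_rng(4)
body_like = np.column_stack([rng.normal(0.0, 0.15, 5000),   # x spread (m)
                             rng.normal(0.0, 0.10, 5000),   # y spread (m)
                             rng.uniform(0.0, 1.8, 5000)])  # height (m)
r_edges = np.linspace(0.0, 0.5, 6)          # five concentric radius bands
z_edges = np.linspace(0.0, 1.8, 10)         # nine height slices
features = cylinder_histogram(body_like, axis_xy=(0.0, 0.0), r_edges=r_edges, z_edges=z_edges)
print("feature vector length:", features.size)   # 5 * 9 = 45
```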

  12. Interval Mathematics Applied to Critical Point Transitions

    Directory of Open Access Journals (Sweden)

    Benito A. Stradi

    2012-03-01

    Full Text Available The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear, and the critical point equations admit multiple solutions. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. This study uses an interval Newton/Generalized Bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The technique is illustrated using several example problems. These problems involve cubic equation of state models; however, the technique is general purpose and can be applied in connection with other nonlinear problems.
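
    The flavour of the interval Newton/generalized bisection approach is illustrated below on a one-dimensional toy, finding the roots of x^2 - 2 on an interval. The interval arithmetic is hand-rolled and not rigorously outward-rounded, so this only conveys the pruning and contraction logic, not a validated solver for equation-of-state critical points.

```python
# One-dimensional toy of the interval Newton / generalized bisection idea, applied
# to f(x) = x^2 - 2 instead of critical-point equations of an equation of state.
# The interval arithmetic is hand-rolled and not rigorously rounded.
from dataclasses import dataclass

@dataclass
class I:
    lo: float
    hi: float
    def __add__(self, o): return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return I(min(p), max(p))
    def contains_zero(self): return self.lo <= 0.0 <= self.hi
    def width(self): return self.hi - self.lo
    def mid(self): return 0.5 * (self.lo + self.hi)

def f(X):  return X * X - I(2.0, 2.0)    # interval extension of f(x) = x^2 - 2
def df(X): return I(2.0, 2.0) * X        # interval extension of f'(x) = 2x

def roots(X, tol=1e-10, found=None):
    """Interval Newton with bisection fallback; collects enclosures of roots in X."""
    if found is None:
        found = []
    if not f(X).contains_zero():                     # range test: no root possible in X
        return found
    if X.width() < tol:
        found.append(X)
        return found
    D = df(X)
    if not D.contains_zero():                        # Newton contraction applicable
        m, fm = X.mid(), f(I(X.mid(), X.mid()))
        q = (fm.lo / D.lo, fm.lo / D.hi, fm.hi / D.lo, fm.hi / D.hi)
        N = I(m, m) - I(min(q), max(q))
        lo, hi = max(X.lo, N.lo), min(X.hi, N.hi)    # intersect N(X) with X
        if lo > hi:
            return found                             # empty intersection: no root in X
        X = I(lo, hi)
        if X.width() < tol:
            found.append(X)
            return found
    m = X.mid()                                      # otherwise bisect and recurse
    roots(I(X.lo, m), tol, found)
    roots(I(m, X.hi), tol, found)
    return found

for R in roots(I(-3.0, 3.0)):
    print(f"root enclosure: [{R.lo:.10f}, {R.hi:.10f}]")
```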

  13. Principle Study of Head Meridian Acupoint Massage to Stress Release via Grey Data Model Analysis.

    Science.gov (United States)

    Lee, Ya-Ting

    2016-01-01

    This paper presents a scientific study of the effectiveness and action principle of head meridian acupoint massage by applying the grey data model analysis approach. First, the head massage procedure for massaging the important head meridian acupuncture points, including Taiyang, Fengfu, Tianzhu, Fengqi, and Jianjing, is formulated in a standard manner. Second, the status of the autonomic nervous system of each subject is evaluated by using a heart rate variability analyzer before and after four weeks of head massage. Afterward, the physiological factors of the autonomic nerves are quantitatively analyzed by using grey data modeling theory. The grey data analysis shows that the status of the autonomic nervous system is greatly improved after the massage. The change in the order of the grey relationship weightings of the physiological factors shows the action principle of the sympathetic and parasympathetic nerves when performing head massage. In other words, the grey data model is able to distinguish the detailed interaction between the autonomic nervous system and head meridian acupoint massage. Thus, the stress-relieving effect of massaging head meridian acupoints is demonstrated, which has been lacking in the literature. The results can serve as a reference principle for massage health care in practice.

  14. Development of rheometer for semi-solid high-melting point alloys

    Directory of Open Access Journals (Sweden)

    LIU Wen

    2005-11-01

    Full Text Available A rheometer for semi-solid high-melting point alloys was developed based on the principle of a double-bucket rheometer, with which the solidification of semi-solid high-melting point alloy melts can be effectively controlled through the control of temperature and the external force field, and different microstructures have also been obtained. This rheometer can be used to investigate the rheological behavior under different conditions by changing the rheological parameters. By way of full-duplex communication between the computer and each sensor, automatic control of the test equipment and real-time measurement of rheological parameters were realized. Finally, the factors influencing torque are also quantitatively analyzed.

  15. Multiple relationships: does the new ethics code answer the right questions?

    Science.gov (United States)

    Sonne, J L

    1994-11-01

    The new "Ethical Principles of Psychologists and Code of Conduct" (American Psychological Association, 1992) presented expanded attempts to clarify the ethical issues regarding multiple relationships and to provide useful guidance for psychologists. This article proposes that the new code fails to address adequately two basic questions necessary to provide psychologists with clear guidance: (a) What are multiple relationships? and (b) When do multiple relationships constitute unethical conduct? The article offers a definition of multiple relationships and identifies several dynamics operating within a professional relationship that are likely to be adversely affected by the imposition of a secondary relationship. Unethical multiple relationships are defined. Finally, the article suggests additions to the new code that would enhance its utility for psychologists.

  16. The Tipping Points of Technology Development

    Directory of Open Access Journals (Sweden)

    Tauno Kekäle

    2014-07-01

    Full Text Available The tipping point, the decisive point in time in the competition between old and new, is an interesting phenomenon in the physics of today. This aspect of technology acceptance is connected to many business decisions such as technology investments, product releases, resource allocation and sales forecasts and, ultimately, affects the profitability and even survival of a company. The tipping point itself is based on many stochastic and dynamic variables, and the process may at least partly be described as path-dependent. This paper analyses the tipping point from three aspects: (1) product performance, (2) features of the market and infrastructure (including related technologies and human network externalities), and (3) actions of the incumbents (including customer lock-in, systems lock-in, and sustaining innovation). The paper is based on the Bass s-curve idea and the technology trajectory concept proposed by Dosi. Three illustrative cases are presented to show the multiple factors affecting technology acceptance and, thus, the tipping point. The paper also suggests outlines for further research in the field of computer simulation.

  17. Effective intermolecular potential and critical point for C60 molecule

    Science.gov (United States)

    Ramos, J. Eloy

    2017-07-01

    The approximate nonconformal (ANC) theory is applied to the C60 molecule. A new binary potential function is developed for C60, which has three parameters only and is obtained by averaging the site-site carbon interactions on the surface of two C60 molecules. It is shown that the C60 molecule follows, to a good approximation, the corresponding states principle with n-C8H18, n-C4F10 and n-C5F12. The critical point of C60 is estimated in two ways: first by applying the corresponding states principle under the framework of the ANC theory, and then by using previous computer simulations. The critical parameters obtained by applying the corresponding states principle, although very different from those reported in the literature, are consistent with the previous results of the ANC theory. It is shown that the Girifalco potential does not correspond to an average of the site-site carbon-carbon interaction.

  18. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
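
    For orientation, a generic multi-term time-fractional diffusion equation with Caputo derivatives can be written as below; the coefficient ordering and the elliptic part shown here are illustrative and the paper's setting may be more general.

```latex
% Generic multi-term time-fractional diffusion equation with Caputo derivatives,
% written only to fix notation; the paper's assumptions may be more general.
\begin{equation}
  \sum_{j=1}^{m} q_j\, \partial_t^{\alpha_j} u(x,t)
  \;=\; \Delta u(x,t) + c(x)\,u(x,t), \qquad
  1 > \alpha_1 > \alpha_2 > \cdots > \alpha_m > 0, \quad q_j > 0,
\end{equation}
% where \partial_t^{\alpha} denotes the Caputo fractional derivative of order \alpha.
```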

  19. Principles and Limitations of Ultra-Wideband FM Communications Systems

    Directory of Open Access Journals (Sweden)

    Kouwenhoven Michiel HL

    2005-01-01

    Full Text Available This paper presents a novel UWB communications system using double FM: a low-modulation index digital FSK followed by a high-modulation index analog FM to create a constant-envelope UWB signal. FDMA techniques at the subcarrier level are exploited to accommodate multiple users. The system is intended for low (1–10 kbps) and medium (100–1000 kbps) bit rates and for short-range WPAN systems. A wideband delay-line FM demodulator that is not preceded by any limiting amplifier constitutes the key component of the UWBFM receiver. This unusual approach permits multiple users to share the same RF bandwidth. Multipath, however, may limit the useful subcarrier bandwidth to one octave. This paper addresses the performance with AWGN and multipath, the resistance to narrowband interference, as well as the simultaneous detection of multiple FM signals at the same carrier frequency. SPICE and Matlab simulation results illustrate the principles and limitations of this new technology. A hardware demonstrator has been realized and has allowed the confirmation of theory with practical results.

  20. Core principles of assessment in competency-based medical education.

    Science.gov (United States)

    Lockyer, Jocelyn; Carraccio, Carol; Chan, Ming-Ka; Hart, Danielle; Smee, Sydney; Touchie, Claire; Holmboe, Eric S; Frank, Jason R

    2017-06-01

    The meaningful assessment of competence is critical for the implementation of effective competency-based medical education (CBME). Timely ongoing assessments are needed along with comprehensive periodic reviews to ensure that trainees continue to progress. New approaches are needed to optimize the use of multiple assessors and assessments; to synthesize the data collected from multiple assessors and multiple types of assessments; to develop faculty competence in assessment; and to ensure that relationships between the givers and receivers of feedback are appropriate. This paper describes the core principles of assessment for learning and assessment of learning. It addresses several ways to ensure the effectiveness of assessment programs, including using the right combination of assessment methods and conducting careful assessor selection and training. It provides a reconceptualization of the role of psychometrics and articulates the importance of a group process in determining trainees' progress. In addition, it notes that, to reach its potential as a driver in trainee development, quality care, and patient safety, CBME requires effective information management and documentation as well as ongoing consideration of ways to improve the assessment system.

  1. Beyond the principles of bioethics: facing the consequences of fundamental moral disagreement

    Directory of Open Access Journals (Sweden)

    H. Tristram Engelhardt

    2012-08-01

    Full Text Available http://dx.doi.org/10.5007/1677-2954.2012v11n1p13 Given intractable secular moral pluralism, the force and significance of the four principles (autonomy, beneficence, non-maleficence, and justice) of Tom Beauchamp and James Childress must be critically re-considered. This essay examines the history of the articulation of these four principles of bioethics, showing why initially there was an illusion of a common morality that led many to hold that the principles could give guidance across cultures. But there is no one sense of the content or the theoretical justification of these principles. In addition, a wide range of secular moral and bioethical choices has been demoralized into lifestyle choices; the force of the secular moral point of view has also been deflated, thus compounding moral pluralism. It is the political generation of the principles that provides a common morality in the sense of an established morality. The principles are best understood as embedded not in a common morality, sensu stricto, but in that morality that is established at law and public policy in a particular polity. Although moral pluralism is substantive and intractable at the level of moral content, in a particular polity a particular morality and a particular bioethics can be established, regarding which health care ethics consultants can be experts. Public morality and bioethics are at their roots a political reality.

  2. A Variation on Uncertainty Principle and Logarithmic Uncertainty Principle for Continuous Quaternion Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Mawardi Bahri

    2017-01-01

    Full Text Available The continuous quaternion wavelet transform (CQWT is a generalization of the classical continuous wavelet transform within the context of quaternion algebra. First of all, we show that the directional quaternion Fourier transform (QFT uncertainty principle can be obtained using the component-wise QFT uncertainty principle. Based on this method, the directional QFT uncertainty principle using representation of polar coordinate form is easily derived. We derive a variation on uncertainty principle related to the QFT. We state that the CQWT of a quaternion function can be written in terms of the QFT and obtain a variation on uncertainty principle related to the CQWT. Finally, we apply the extended uncertainty principles and properties of the CQWT to establish logarithmic uncertainty principles related to generalized transform.

  3. Biomedical engineering principles

    CERN Document Server

    Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N

    2011-01-01

    Contents: Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem.

  4. Revisiting the dilatation operator of the Wilson-Fisher fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Liendo, Pedro [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2017-01-15

    We revisit the order ε dilatation operator of the Wilson-Fisher fixed point obtained by Kehrein, Pismak, and Wegner in light of recent results in conformal field theory. Our approach is algebraic and based only on symmetry principles. The starting point of our analysis is that the first correction to the dilatation operator is a conformal invariant, which implies that its form is fixed up to an infinite set of coefficients associated with the scaling dimensions of higher-spin currents. These coefficients can be fixed using well-known perturbative results, however, they were recently re-obtained using CFT arguments without relying on perturbation theory. Our analysis then implies that all order-ε scaling dimensions of the Wilson-Fisher fixed point can be fixed by symmetry.

  5. A Numerical Study on the Excitation of Guided Waves in Rectangular Plates Using Multiple Point Sources

    Directory of Open Access Journals (Sweden)

    Wenbo Duan

    2017-12-01

    Full Text Available Ultrasonic guided waves are widely used to inspect and monitor the structural integrity of plates and plate-like structures, such as ship hulls and large storage-tank floors. Recently, ultrasonic guided waves have also been used to remove ice and fouling from ship hulls, wind-turbine blades and aeroplane wings. In these applications, the strength of the sound source must be high for scanning a large area, or to break the bond between ice, fouling and plate substrate. More than one transducer may be used to achieve maximum sound power output. However, multiple sources can interact with each other, and form a sound field in the structure with local constructive and destructive regions. Destructive regions are weak regions and shall be avoided. When multiple transducers are used it is important that they are arranged in a particular way so that the desired wave modes can be excited to cover the whole structure. The objective of this paper is to provide a theoretical basis for generating particular wave mode patterns in finite-width rectangular plates whose length is assumed to be infinitely long with respect to its width and thickness. The wave modes have displacements in both width and thickness directions, and are thus different from the classical Lamb-type wave modes. A two-dimensional semi-analytical finite element (SAFE method was used to study dispersion characteristics and mode shapes in the plate up to ultrasonic frequencies. The modal analysis provided information on the generation of modes suitable for a particular application. The number of point sources and direction of loading for the excitation of a few representative modes was investigated. Based on the SAFE analysis, a standard finite element modelling package, Abaqus, was used to excite the designed modes in a three-dimensional plate. The generated wave patterns in Abaqus were then compared with mode shapes predicted in the SAFE model. Good agreement was observed between the

  6. The genetic difference principle.

    Science.gov (United States)

    Farrelly, Colin

    2004-01-01

    In the newly emerging debates about genetics and justice three distinct principles have begun to emerge concerning what the distributive aim of genetic interventions should be. These principles are: genetic equality, a genetic decent minimum, and the genetic difference principle. In this paper, I examine the rationale of each of these principles and argue that genetic equality and a genetic decent minimum are ill-equipped to tackle what I call the currency problem and the problem of weight. The genetic difference principle is the most promising of the three principles and I develop this principle so that it takes seriously the concerns of just health care and distributive justice in general. Given the strains on public funds for other important social programmes, the costs of pursuing genetic interventions and the nature of genetic interventions, I conclude that a more lax interpretation of the genetic difference principle is appropriate. This interpretation stipulates that genetic inequalities should be arranged so that they are to the greatest reasonable benefit of the least advantaged. Such a proposal is consistent with prioritarianism and provides some practical guidance for non-ideal societies--that is, societies that do not have the endless amount of resources needed to satisfy every requirement of justice.

  7. Multiple trauma in children: critical care overview.

    Science.gov (United States)

    Wetzel, Randall C; Burns, R Cartland

    2002-11-01

    Multiple trauma is more than the sum of the injuries. Management not only of the physiologic injury but also of the pathophysiologic responses, along with integration of the child's emotional and developmental needs and the child's family, forms the basis of trauma care. Multiple trauma in children also elicits profound psychological responses from the healthcare providers involved with these children. This overview will address the pathophysiology of multiple trauma in children and the general principles of trauma management by an integrated trauma team. Trauma is a systemic disease. Multiple trauma stimulates the release of multiple inflammatory mediators. A lethal triad of hypothermia, acidosis, and coagulopathy is the direct result of trauma and secondary injury from the systemic response to trauma. Controlling and responding to the secondary pathophysiologic sequelae of trauma is the cornerstone of trauma management in the multiply injured, critically ill child. Damage control surgery is a new, rational approach to the child with multiple trauma. The selection of children for damage control surgery depends on the severity of injury. Major abdominal vascular injuries and multiple visceral injuries are best considered for this approach. The effective management of childhood multiple trauma requires a combined team approach, consideration of the child and family, an organized trauma system, and an effective quality assurance and improvement mechanism.

  8. Variational principles for collective motion: Relation between invariance principle of the Schroedinger equation and the trace variational principle

    International Nuclear Information System (INIS)

    Klein, A.; Tanabe, K.

    1984-01-01

    The invariance principle of the Schroedinger equation provides a basis for theories of collective motion with the help of the time-dependent variational principle. It is formulated here with maximum generality, requiring only the motion of intrinsic state in the collective space. Special cases arise when the trial vector is a generalized coherent state and when it is a uniform superposition of collective eigenstates. The latter example yields variational principles uncovered previously only within the framework of the equations of motion method. (orig.)

  9. Reducing Uncertainty: Implementation of Heisenberg Principle to Measure Company Performance

    Directory of Open Access Journals (Sweden)

    Anna Svirina

    2015-08-01

    Full Text Available The paper addresses the problem of uncertainty reduction in the estimation of future company performance, which results from the wide range of probable efficiencies of an enterprise's intangible assets. To reduce this uncertainty, the paper suggests using quantum economy principles, i.e. an implementation of the Heisenberg principle to measure the efficiency and potential of the company's intangible assets. It is proposed that, for intangibles, it is not possible to estimate both potential and efficiency at a certain time point. To support this thesis, data on resource potential and efficiency from mid-Russian companies were evaluated within a deterministic approach, which did not allow the probability of achieving a certain resource efficiency to be evaluated, and within a quantum approach, which allowed estimation of the central point around which the probable efficiency of resources is concentrated. Visualization of these approaches was performed by means of LabView software. It was proven that for tangible assets a deterministic approach to performance estimation should be used, while for intangible assets the quantum approach allows better prediction of future performance. On the basis of these findings, a holistic approach towards the estimation of company resource efficiency is proposed in order to reduce uncertainty in modeling company performance.

  10. Focal points and principal solutions of linear Hamiltonian systems revisited

    Science.gov (United States)

    Šepitka, Peter; Šimon Hilscher, Roman

    2018-05-01

    In this paper we present a novel view on the principal (and antiprincipal) solutions of linear Hamiltonian systems, as well as on the focal points of their conjoined bases. We present a new and unified theory of principal (and antiprincipal) solutions at a finite point and at infinity, and apply it to obtain new representation of the multiplicities of right and left proper focal points of conjoined bases. We show that these multiplicities can be characterized by the abnormality of the system in a neighborhood of the given point and by the rank of the associated T-matrix from the theory of principal (and antiprincipal) solutions. We also derive some additional important results concerning the representation of T-matrices and associated normalized conjoined bases. The results in this paper are new even for completely controllable linear Hamiltonian systems. We also discuss other potential applications of our main results, in particular in the singular Sturmian theory.

  11. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust with respect to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A single-quantile test is proposed based on the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability that each location of the sequence is a change-point. The location of a change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and perform well on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real-data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper may be of independent interest to readers in this research area.

  12. Rehabilitation and multiple sclerosis

    DEFF Research Database (Denmark)

    Dalgas, Ulrik

    2011-01-01

    In a chronic and disabling disease like multiple sclerosis, rehabilitation becomes of major importance in the preservation of physical, psychological and social functioning. Approximately 80% of patients have multiple sclerosis for more than 35 years and most will develop disability at some point......, a paradigm shift is taking place and it is now increasingly acknowledged that exercise therapy is both safe and beneficial. Robot-assisted training is also attracting attention in multiple sclerosis rehabilitation. Several sophisticated commercial robots exist, but so far the number of scientific studies...... promising. This drug has been shown to improve walking ability in some patients with multiple sclerosis, associated with a reduction of patients' self-reported ambulatory disability. Rehabilitation strategies involving these different approaches, or combinations of them, may be of great use in improving...

  13. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    Science.gov (United States)

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  14. Implications of the Babinet Principle for Casimir interactions

    International Nuclear Information System (INIS)

    Maghrebi, Mohammad F.; Jaffe, Robert L.; Abravanel, Ronen

    2011-01-01

    We formulate the Babinet Principle (BP) as a relation between scattering amplitudes and combine it with multiple scattering techniques to derive new properties of electromagnetic Casimir forces. We show that the Casimir force exerted by a planar conductor or dielectric on a self-complementary perforated planar mirror is approximately half that on a uniform mirror independent of the distance between them. Also, the BP suggests that Casimir edge effects are generically anomalously small. Furthermore, the BP can be used to relate any planar object to its complementary geometry, a relation we use to estimate Casimir forces between two screens with apertures.

  15. Implications of the Babinet Principle for Casimir interactions

    Science.gov (United States)

    Maghrebi, Mohammad F.; Jaffe, Robert L.; Abravanel, Ronen

    2011-09-01

    We formulate the Babinet Principle (BP) as a relation between scattering amplitudes and combine it with multiple scattering techniques to derive new properties of electromagnetic Casimir forces. We show that the Casimir force exerted by a planar conductor or dielectric on a self-complementary perforated planar mirror is approximately half that on a uniform mirror independent of the distance between them. Also, the BP suggests that Casimir edge effects are generically anomalously small. Furthermore, the BP can be used to relate any planar object to its complementary geometry, a relation we use to estimate Casimir forces between two screens with apertures.

  16. Principles of interaction of ionizing radiation with matter and basic radiation chemistry processes

    International Nuclear Information System (INIS)

    Santar, I.; Bednar, J.

    1976-01-01

    The basic principles of the interaction of ionizing radiation with matter are given and the main trends in radiation chemistry are pointed out. A brief characterization is given of the basic radiation-chemical processes in gases and in the condensed phase, namely in water and in organic substances. (B.S.)

  17. Zero Point Energy and the Dirac Equation

    OpenAIRE

    Forouzbakhsh, Farshid

    2007-01-01

    Zero Point Energy (ZPE) describes the random electromagnetic oscillations that are left in the vacuum after all other energy has been removed. One way to explain this is by means of the uncertainty principle of quantum physics, which implies that it is impossible to have a zero energy condition. In this article, the ZPE is explained by using a novel description of the graviton. This is based on the behavior of photons in a gravitational field, leading to a new definition of the graviton. In effec...

  18. INTEGRATED ASSESSMENT OF BUILDINGS QUALITY IN THE CONTEXT OF SUSTAINABLE DEVELOPMENT PRINCIPLES

    Directory of Open Access Journals (Sweden)

    Mária Kozlovská

    2014-12-01

    Full Text Available Purpose: The aim of the paper is to analyse the assumptions for integrated assessment of building quality in the context of sustainable development principles. Sustainable (or “green”) buildings are cost effective, environmentally friendly and conserve natural resources. Such buildings are comfortable for their users, healthy and optimally integrated into the socio-cultural environment, and thereby maintain a high added value over the long term for investors, owners and users alike. Design methodology/approach: The methodology of the paper consists of analyses of certification systems that assess building sustainability within wider environmental, economic and social relations. The effort to increase the quality of construction and to provide objective assessment with measurable and comparable results has driven the origin and development of tools for building sustainability assessment. The case study analyses the assessment approaches applied to one of the few certified sustainable projects in Slovakia, the “EcoPoint Office Center Kosice”. The results are intended for potential investors and also for present owners who have the ambition and responsibility to apply building sustainability principles when designing and using their properties. Findings: The results of the research identify the key characteristics expressing the comprehensive quality of a building and lead to the specification of practical and social implications provided by the sustainability philosophy. Originality/value: The strength of the paper is that it presents approaches to the integrated assessment of construction quality in the context of sustainability principles and the importance of their more extensive implementation in Slovakia. The approaches to sustainability principles as well as the real benefits of a sustainable building are demonstrated through the case study of the building EcoPoint Office

  19. Core principles of evolutionary medicine

    Science.gov (United States)

    Grunspan, Daniel Z; Nesse, Randolph M; Barnes, M Elizabeth; Brownell, Sara E

    2018-01-01

    Abstract Background and objectives Evolutionary medicine is a rapidly growing field that uses the principles of evolutionary biology to better understand, prevent and treat disease, and that uses studies of disease to advance basic knowledge in evolutionary biology. Over-arching principles of evolutionary medicine have been described in publications, but our study is the first to systematically elicit core principles from a diverse panel of experts in evolutionary medicine. These principles should be useful to advance recent recommendations made by The Association of American Medical Colleges and the Howard Hughes Medical Institute to make evolutionary thinking a core competency for pre-medical education. Methodology The Delphi method was used to elicit and validate a list of core principles for evolutionary medicine. The study included four surveys administered in sequence to 56 expert panelists. The initial open-ended survey created a list of possible core principles; the three subsequent surveys winnowed the list and assessed the accuracy and importance of each principle. Results Fourteen core principles led at least 80% of the panelists to agree or strongly agree that they were important core principles for evolutionary medicine. These principles overlapped with concepts discussed in other articles on key concepts in evolutionary medicine. Conclusions and implications This set of core principles will be helpful for researchers and instructors in evolutionary medicine. We recommend that evolutionary medicine instructors use the list of core principles to construct learning goals. Evolutionary medicine is a young field, so this list of core principles will likely change as the field develops further. PMID:29493660

  20. Thermodynamic modeling of the Ca-Sn system based on finite temperature quantities from first-principles and experiment

    International Nuclear Information System (INIS)

    Ohno, M.; Kozlov, A.; Arroyave, R.; Liu, Z.K.; Schmid-Fetzer, R.

    2006-01-01

    The thermodynamic model of the Ca-Sn system was obtained, utilizing the first-principles total energies and heat capacities calculated from 0 K to the melting points of the major phases. Since the first-principles result for the formation energy of the dominating Ca2Sn intermetallic phase is drastically different from the reported experimental data, we performed two types of thermodynamic modeling: one based on the first-principles output and the other based on the experimental data. In the former modeling, the Gibbs energies of the intermetallic compounds were fully quantified from the first-principles finite temperature properties, and the superiority of this thermodynamic description is demonstrated. It is shown that it is the combination of finite temperature first-principles calculations and the Calphad modeling tool that provides a sound basis for identifying and deciding on conflicting key thermodynamic data in the Ca-Sn system.

  1. NARMER-1: a photon point-kernel code with build-up factors

    Science.gov (United States)

    Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence

    2017-09-01

    This paper presents an overview of NARMER-1, the new generation of photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at CEA Saclay Center. After a short introduction giving some historical points and the current context of development of the code, the paper presents the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, sources description, etc. Moreover, specific and recent features are also detailed: exclusion sphere, tetrahedral meshes, parallel operations. Then some points about verification and validation are presented. Finally, we present some tools that can help the user with operations like visualization and pre-treatment.
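
    The point-kernel principle on which such codes rest can be stated compactly: the uncollided flux from an isotropic point source of strength S through a medium with attenuation coefficient μ is S·e^(−μr)/(4πr²), and a build-up factor B(μr) corrects for scattered photons. The sketch below illustrates this principle only; it is not the NARMER-1 code, and the linear build-up coefficient is a placeholder assumption (production codes use tabulated or Geometric-Progression build-up factors).

```python
import numpy as np

def point_kernel_flux(source_strength, mu, r, buildup_coeff=1.0):
    """Photon flux at distance r (cm) from an isotropic point source.

    source_strength : photons emitted per second
    mu              : linear attenuation coefficient of the medium (1/cm)
    buildup_coeff   : coefficient of a crude linear build-up model
                      B(mu*r) = 1 + a*mu*r  (placeholder assumption)
    """
    mu_r = mu * r
    buildup = 1.0 + buildup_coeff * mu_r                        # build-up factor
    uncollided = source_strength * np.exp(-mu_r) / (4.0 * np.pi * r**2)
    return buildup * uncollided

# Example: 1e9 photons/s source, water-like attenuation, detector 50 cm away
print(point_kernel_flux(1e9, mu=0.07, r=50.0))
```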

  2. Formulating Fermat's principle for light traveling in negative refraction materials

    International Nuclear Information System (INIS)

    Veselago, Viktor G

    2002-01-01

    The formulation of Fermat's principle for electromagnetic waves traveling in materials with a negative refractive index is refined. It is shown that a formulation in terms of the minimum (or extremum) of wave travel time between two points is not correct in general. The correct formulation involves the extremum of the total optical length, with the optical length for the wave propagation through left-handed materials taken to be negative. (methodological notes)

  3. The certainty principle (review)

    OpenAIRE

    Arbatsky, D. A.

    2006-01-01

    The certainty principle (2005) made it possible to conceptualize, on more fundamental grounds, both the Heisenberg uncertainty principle (1927) and the Mandelshtam-Tamm relation (1945). In this review I give a detailed explanation and discussion of the certainty principle, oriented to all physicists, both theorists and experimenters.

  4. Fusion research principles

    CERN Document Server

    Dolan, Thomas James

    2013-01-01

    Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics

  5. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction.

    Science.gov (United States)

    Qiu, Shibin; Lane, Terran

    2009-01-01

    The cell defense mechanism of RNA interference has applications in gene function analysis and promising potential in human disease therapy. To effectively silence a target gene, it is desirable to select appropriate initiator siRNA molecules having satisfactory silencing capabilities. Computational prediction for silencing efficacy of siRNAs can assist this screening process before using them in biological experiments. String kernel functions, which operate directly on the string objects representing siRNAs and target mRNAs, have been applied to support vector regression for the prediction and improved accuracy over numerical kernels in multidimensional vector spaces constructed from descriptors of siRNA design rules. To fully utilize information provided by string and numerical data, we propose to unify the two in a kernel feature space by devising a multiple kernel regression framework where a linear combination of the kernels is used. We formulate the multiple kernel learning as a quadratically constrained quadratic programming (QCQP) problem which, although it yields a globally optimal solution, is computationally demanding and requires a commercial solver package. We further propose three heuristics based on the principle of kernel-target alignment and predictive accuracy. Empirical results demonstrate that multiple kernel regression can improve accuracy, decrease model complexity by reducing the number of support vectors, and speed up computational performance dramatically. In addition, multiple kernel regression evaluates the importance of constituent kernels, which, for the siRNA efficacy prediction problem, compares the relative significance of the design rules. Finally, we give insights into the multiple kernel regression mechanism and point out possible extensions.
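
    The central construction, a weighted linear combination of kernel matrices fed to a support vector regressor, can be sketched as follows. This is a simplified illustration using scikit-learn's precomputed-kernel interface and an alignment-based heuristic weighting rather than the authors' QCQP formulation; the toy data and all helper names are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def alignment(K, y):
    """Kernel-target alignment between gram matrix K and the outer product y y^T."""
    yy = np.outer(y, y)
    return np.sum(K * yy) / (np.linalg.norm(K) * np.linalg.norm(yy))

def combine_kernels(kernels, y):
    """Weight each kernel by its (clipped) alignment with the centred target."""
    w = np.array([max(alignment(K, y), 0.0) for K in kernels])
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels)), w

# Two feature views of the same samples, standing in for string and numerical kernels
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(40, 5)), rng.normal(size=(40, 3))
y = X1[:, 0] + 0.5 * X2[:, 1] + 0.1 * rng.normal(size=40)

K1, K2 = X1 @ X1.T, X2 @ X2.T                 # linear kernels on each view
K, weights = combine_kernels([K1, K2], y - y.mean())

model = SVR(kernel="precomputed").fit(K, y)
print("kernel weights:", weights, "training R^2:", model.score(K, y))
```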

  6. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  7. INTERNATIONAL LABOUR LAW PRINCIPLES AS GUIDELINES TO FOSTEREMPLOYMENT RELATIONS

    Directory of Open Access Journals (Sweden)

    Aniko Noemi TURI

    2017-06-01

    Full Text Available Contemporary human resource management practices often ignore important values of international labour law; however, there is wide scope for improvement in this area. The main guidelines in this sense arise from the legal acts of international organizations. Social responsibility, professional ethics and management are categories closely related to the legal system. The historically developed degree of social responsibility and professional ethics may be considered an important source of values and a starting point for building the legal system as well as international regulations. The principles of international labour law are significant elements in employment relations. The paper shows how these principles can positively influence managerial strategies through social dialogue. Social dialogue provides a communication platform between the social partners and thereby fosters socio-economic and social development. Furthermore, social dialogue is a key instrument in planning social development, harmonizing different interests, and preventing and resolving disputes between management and labour. International law shows many ways to strengthen the principle of ethics in employment relations. The values arising from existing international legal documents may be a significant guideline for the development of “good practices of managers”.

  8. The principles of Newtonian and quantum mechanics the need for Planck's constant, h

    CERN Document Server

    De Gosson, Maurice A

    2001-01-01

    This book deals with the foundations of classical physics from the "symplectic" point of view, and of quantum mechanics from the "metaplectic" point of view. The Bohmian interpretation of quantum mechanics is discussed. Phase space quantization is achieved using the "principle of the symplectic camel", which is a recently discovered deep topological property of Hamiltonian flows. The mathematical tools developed in this book are the theory of the metaplectic group, the Maslov index in a precise form, and the Leray index of a pair of Lagrangian planes. The concept of the "metatron" is introduced

  9. FDM and DMT performance comparison in high capacity point-to-point fibre links for intra/inter-datacentre connections

    Science.gov (United States)

    Gatto, A.; Parolari, P.; Boffi, P.

    2018-05-01

    Frequency division multiplexing (FDM) is attractive for achieving high capacities in multiple access networks characterized by direct modulation and direct detection. In this paper we consider point-to-point intra- and inter-datacentre connections to compare the performance of FDM operation with that achievable with the standard multiple-carrier modulation approach based on discrete multitone (DMT). DMT and FDM allow matching the non-uniform and bandwidth-limited response of the system under test, associated with the employment of low-cost directly-modulated sources, such as VCSELs with high-frequency chirp, and with fibre propagation in the presence of chromatic dispersion. While for the very short distances typical of intra-datacentre communications the huge number of DMT subcarriers permits increasing the transported capacity with respect to FDM, in the case of reaches of a few tens of km, typical of inter-datacentre connections, the capabilities of FDM are more evident, providing system performance similar to that of DMT.

  10. Stability analysis of the Gyroscopic Power Take-Off wave energy point absorber

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Zhang, Zili; Kramer, Morten Mejlhede

    2015-01-01

    The Gyroscopic Power Take-Off (GyroPTO) wave energy point absorber consists of a float rigidly connected to a lever. The operational principle is somewhat similar to that of the so-called gyroscopic hand wrist exercisers, where the rotation of the float is brought forward by the rotational particle...

  11. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
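
    The Hausdorff metric used here as the distortion measure has a direct brute-force implementation for comparing a point cloud with its compressed representation; the sketch below is illustrative only (real pipelines would use spatial trees for large clouds), and the toy data are assumptions.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between point sets A (n, d) and B (m, d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    d_ab = D.min(axis=1).max()   # farthest A-point from its nearest B-point
    d_ba = D.min(axis=0).max()   # farthest B-point from its nearest A-point
    return max(d_ab, d_ba)

# Example: compare raw terrain samples with a decimated approximation
rng = np.random.default_rng(1)
terrain = rng.uniform(0, 100, size=(500, 3))
decimated = terrain[::10]
print(hausdorff_distance(terrain, decimated))
```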

  12. Utilization of lean management principles in the ambulatory clinic setting.

    Science.gov (United States)

    Casey, Jessica T; Brinton, Thomas S; Gonzalez, Chris M

    2009-03-01

    The principles of 'lean management' have permeated many sectors of today's business world, secondary to the success of the Toyota Production System. This management method enables workers to eliminate mistakes, reduce delays, lower costs, and improve the overall quality of the product or service they deliver. These lean management principles can be applied to health care. Their implementation within the ambulatory care setting is predicated on the continuous identification and elimination of waste within the process. The key concepts of flow time, inventory and throughput are utilized to improve the flow of patients through the clinic, and to identify points that slow this process -- so-called bottlenecks. Nonessential activities are shifted away from bottlenecks (i.e. the physician), and extra work capacity is generated from existing resources, rather than being added. The additional work capacity facilitates a more efficient response to variability, which in turn results in cost savings, more time for the physician to interact with patients, and faster completion of patient visits. Finally, application of the lean management principle of 'just-in-time' management can eliminate excess clinic inventory, better synchronize office supply with patient demand, and reduce costs.

  13. Multiple Perspectives on Organizing: projects between tyranny and perforation

    DEFF Research Database (Denmark)

    Koch, Christian; Bendixen, Mads

    2005-01-01

    ... engineering organizations are the need for multiple organizing principles and management to bridge contradictory and competing concerns for skill development, resource alignment and innovation efforts; and a strong need to focus on soft management practices such as mediation, brokering and coaching. ... its projects and the spatial/community dimension. This enables an understanding of the multiple, often contrasting, organizing dynamics in the organization as well as diverse interests and groups found within this type of organization. It finds that it is not only the clients of the company who rule ...

  14. Fermat's principle of least time predicts refraction of ant trails at substrate borders.

    Directory of Open Access Journals (Sweden)

    Jan Oettler

    Full Text Available Fermat's principle of least time states that light rays passing through different media follow the fastest (and not the most direct) path between two points, leading to refraction at medium borders. Humans intuitively employ this rule, e.g., when a lifeguard has to infer the fastest way to traverse both beach and water to reach a swimmer in need. Here, we tested whether foraging ants also follow Fermat's principle when forced to travel on two surfaces that differentially affected the ants' walking speed. Workers of the little fire ant, Wasmannia auropunctata, established "refracted" pheromone trails to a food source. These trails deviated from the most direct path, but were not different to paths predicted by Fermat's principle. Our results demonstrate a new aspect of decentralized optimization and underline the versatility of the simple yet robust rules governing the self-organization of group-living animals.

  15. Fermat's principle of least time predicts refraction of ant trails at substrate borders.

    Science.gov (United States)

    Oettler, Jan; Schmid, Volker S; Zankl, Niko; Rey, Olivier; Dress, Andreas; Heinze, Jürgen

    2013-01-01

    Fermat's principle of least time states that light rays passing through different media follow the fastest (and not the most direct) path between two points, leading to refraction at medium borders. Humans intuitively employ this rule, e.g., when a lifeguard has to infer the fastest way to traverse both beach and water to reach a swimmer in need. Here, we tested whether foraging ants also follow Fermat's principle when forced to travel on two surfaces that differentially affected the ants' walking speed. Workers of the little fire ant, Wasmannia auropunctata, established "refracted" pheromone trails to a food source. These trails deviated from the most direct path, but were not different to paths predicted by Fermat's principle. Our results demonstrate a new aspect of decentralized optimization and underline the versatility of the simple yet robust rules governing the self-organization of group-living animals.
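
    The prediction tested here follows from minimising total travel time across the two substrates, which yields a Snell-type law sin(θ1)/sin(θ2) = v1/v2 at the border. Below is a short numerical check of that relation, with assumed walking speeds and geometry (not the study's data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Nest on substrate 1 at (0, a), food on substrate 2 at (d, -b); the border is the x-axis.
a, b, d = 1.0, 1.0, 2.0
v1, v2 = 1.0, 0.5          # assumed walking speeds on the two substrates

def travel_time(x):
    """Total time if the trail crosses the border at (x, 0)."""
    return np.hypot(x, a) / v1 + np.hypot(d - x, b) / v2

x_opt = minimize_scalar(travel_time, bounds=(0.0, d), method="bounded").x

# Fermat's principle predicts sin(theta1)/sin(theta2) = v1/v2 at the optimal crossing point
sin1 = x_opt / np.hypot(x_opt, a)
sin2 = (d - x_opt) / np.hypot(d - x_opt, b)
print(sin1 / sin2, v1 / v2)   # the two ratios agree at the optimum
```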

  16. Radiation protection principles

    International Nuclear Information System (INIS)

    Ismail Bahari

    2007-01-01

    The presentation outlines aspects of radiation protection principles. It discusses the following subjects: radiation hazards and risk, the objectives of radiation protection, and the three principles of the system - justification of practice, optimization of protection and safety, and dose limits.

  17. The principle(s) of co-existence in Europe: Social, economic and legal avenues

    NARCIS (Netherlands)

    Purnhagen, K.; Wesseler, J.H.H.

    2015-01-01

    The European policy of coexistence follows a number of well-established social, economic and legal principles. Applying these principles in practice has resulted in a complex “rag rug” of coexistence policies in Europe. This rag rug makes enforcement of these principles difficult, at times even

  18. Effective communication at the point of multiple sclerosis diagnosis.

    Science.gov (United States)

    Solari, Alessandra

    2014-04-01

    As a consequence of the current shortened diagnostic workup, people with multiple sclerosis (PwMS) are rapidly confronted with a disease of uncertain prognosis that requires complex treatment decisions. This paper reviews studies that have assessed the experiences of PwMS in the peri-diagnostic period and have evaluated the efficacy of interventions providing information at this critical moment. The studies found that the emotional burden on PwMS at diagnosis was high, and emphasised the need for careful monitoring and management of mood symptoms (chiefly anxiety). Information provision did not affect anxiety symptoms but improved patients' knowledge of their condition, the achievement of 'informed choice', and satisfaction with the diagnosis communication. It is vital to develop and implement information and decision aids for PwMS, but this is resource intensive, and international collaboration may be a way forward. The use of patient self-assessed outcome measures that appraise the quality of diagnosis communication is also important to allow health services to understand and meet the needs and preferences of PwMS.

  19. A broad-band (0.2-8 MHz) multiple-harmonic VITROVAC-filled acceleration structure

    Energy Technology Data Exchange (ETDEWEB)

    Ausset, P.; Charruau, G.; De Menezes, D.; Fougeron, C. [Laboratoire National Saturne, Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France); Etzkorn, F.J.; Papureanu, S.; Schnase, A.; Meuth, H. [Forschungszentrum Juelich GmbH (Germany)

    1994-12-31

    Higher or multiple-harmonic acceleration drives in synchrotrons are desirable, when passing the transition point, applying stochastic cooling on a bunched beam, or for many other longitudinal beam manipulations, as bunch stretching or compression. As proof-of-principle, virtually arbitrary, digitally synthesized voltage waveforms, employing contents up to fourth harmonic in the range 0.2-8 MHz, could be generated at the gap of one single (symmetric re-entrant) cavity, filled with discs of the novel ferritic amorphous metal VITROVAC of VAC, Hanau. A 10 kW amplifier produces voltages in the kV-range. As relevant examples, we achieved a flat-top waveform suitable for the transition (+27 deg, 10^-3 max. error), a fourth-order flattened bucket for bunched-beam cooling, and a harmonic bucket with linear restoring force. The compact cavity system should be well suited for any proton or heavy ion device operating in this frequency range, and therapy-oriented rings. (author). 9 refs., 6 figs.
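
    The "virtually arbitrary, digitally synthesized" gap voltages described here are sums of a few harmonics of the fundamental RF frequency; for example, adding a small third-harmonic component flattens the crest of a sine wave. The sketch below is only a generic illustration of such multi-harmonic synthesis; the amplitudes and the 1 MHz fundamental are assumptions, not the waveform tables used at the cavity.

```python
import numpy as np

def multi_harmonic_waveform(t, f0, amplitudes, phases):
    """V(t) = sum over harmonics h of A_h * sin(2*pi*h*f0*t + phi_h)."""
    v = np.zeros_like(t)
    for h, (A, phi) in enumerate(zip(amplitudes, phases), start=1):
        v += A * np.sin(2 * np.pi * h * f0 * t + phi)
    return v

f0 = 1.0e6                        # assumed 1 MHz fundamental, inside the 0.2-8 MHz band
t = np.linspace(0, 2 / f0, 2000)  # two RF periods

# Fundamental plus 1/9 of third harmonic: zero curvature at the crest, i.e. a flattened top
flat_top = multi_harmonic_waveform(t, f0, amplitudes=[1.0, 0.0, 1.0 / 9.0, 0.0],
                                   phases=[0.0, 0.0, 0.0, 0.0])
pure_sine = multi_harmonic_waveform(t, f0, [1.0], [0.0])
print("crest factor:", flat_top.max() / np.sqrt(np.mean(flat_top**2)),
      "vs pure sine:", pure_sine.max() / np.sqrt(np.mean(pure_sine**2)))
```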

  20. A broad-band (0.2-8 MHz) multiple-harmonic VITROVAC-filled acceleration structure

    International Nuclear Information System (INIS)

    Ausset, P.; Charruau, G.; De Menezes, D.; Fougeron, C.; Etzkorn, F.J.; Papureanu, S.; Schnase, A.; Meuth, H.

    1994-01-01

    Higher or multiple-harmonic acceleration drives in synchrotrons are desirable, when passing the transition point, applying stochastic cooling on a bunched beam, or for many other longitudinal beam manipulations, as bunch stretching or compression. As proof-of-principle, virtually arbitrary, digitally synthesized voltage waveforms, employing contents up to fourth harmonic in the range 0.2-8 MHz, could be generated at the gap of one single (symmetric re-entrant) cavity, filled with discs of the novel ferritic amorphous metal VITROVAC of VAC, Hanau. A 10 kW amplifier produces voltages in the kV-range. As relevant examples, we achieved a flat-top waveform suitable for the transition (+27 deg, 10^-3 max. error), a fourth-order flattened bucket for bunched-beam cooling, and a harmonic bucket with linear restoring force. The compact cavity system should be well suited for any proton or heavy ion device operating in this frequency range, and therapy-oriented rings. (author). 9 refs., 6 figs.

  1. [Bioethics of principles].

    Science.gov (United States)

    Pérez-Soba Díez del Corral, Juan José

    2008-01-01

    Bioethics emerges from the technological problems of acting on human life. It also raises the problem of determining moral limits, because these seem external to the practice itself. Principle-based bioethics takes its rationality from teleological thinking and from autonomism. This divergence reveals the epistemological fragility and the great difficulty of "moral" thinking. This is evident in the determination of the principle of autonomy, which does not have the ethical content of Kant's proposal. We need a new ethical rationality, with a new reflection on new principles that emerge from basic ethical experiences.

  2. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  3. In-vitro diagnostic devices introduction to current point-of-care diagnostic devices

    CERN Document Server

    Cheng, Chao-Min; Chen, Chien-Fu

    2016-01-01

    Addressing the origin, current status, and future development of point-of-care diagnostics, and serving to integrate knowledge and tools from Analytical Chemistry, Bioengineering, Biomaterials, and Nanotechnology, this book focusses on addressing the collective and combined needs of industry and academia (including medical schools) to effectively conduct interdisciplinary research. In addition to summarizing and detailing developed diagnostic devices, this book will attempt to point out the possible future trends of development for point-of-care diagnostics using both scientifically based research and practical engineering needs with the aim to help novices comprehensively understand the development of point-of-care diagnostics. This includes demonstrating several common but critical principles and mechanisms used in point-of-care diagnostics that address practical needs (e.g., disease or healthcare monitoring) using two well-developed examples so far: 1) blood glucose meters (via electrochemistry); and, 2) p...

  4. 32 CFR 776.19 - Principles.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Principles. 776.19 Section 776.19 National... Professional Conduct § 776.19 Principles. The Rules of this subpart are based on the following principles... exists, this subpart should be interpreted consistent with these general principles. (a) Covered...

  5. The Principle of General Tovariance

    Science.gov (United States)

    Heunen, C.; Landsman, N. P.; Spitters, B.

    2008-06-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.

  6. Extremum principles for irreversible processes

    International Nuclear Information System (INIS)

    Hillert, M.; Agren, J.

    2006-01-01

    Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case

  7. Study on the intrinsic defects in tin oxide with first-principles method

    Science.gov (United States)

    Sun, Yu; Liu, Tingyu; Chang, Qiuxiang; Ma, Changmin

    2018-04-01

    First-principles and thermodynamic methods are used to study the contribution of vibrational entropy to the defect formation energy and the stability of the intrinsic point defects in the SnO2 crystal. According to the thermodynamic calculation results, the contribution of vibrational entropy to the defect formation energy is significant and should not be neglected, especially at high temperatures. The calculated results indicate that the oxygen vacancy is the major point defect in the undoped SnO2 crystal, with a higher concentration than that of the other point defects. A negative-U property is put forward for the SnO2 crystal. In order to determine the most stable defects more clearly under different conditions, the most stable intrinsic defect is described as a function of Fermi level, oxygen partial pressure and temperature in three-dimensional defect formation enthalpy diagrams. These diagrams visually provide the most stable point defects under different conditions.

  8. On surface clustering and Pauli principle effects in alpha decay

    International Nuclear Information System (INIS)

    Holan, S.

    1983-01-01

    The importance of a correct description of the nuclear surface region in alpha decay calculations is pointed out. A model is proposed taking explicitly into account surface clustering and Pauli principle effects, which are essential in this region. A method for solving the main integrodifferential equation of the model by using the oscillator shell basis and the Collatz method is worked out. The first numerical results are obtained for a nonlocal potential of the alpha particle-daughter nucleus interaction.

  9. 36 CFR 219.2 - Principles.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 2 2010-07-01 2010-07-01 false Principles. 219.2 Section 219... Forest System Land and Resource Management Planning Purpose and Principles § 219.2 Principles. The planning regulations in this subpart are based on the following principles: (a) The first priority for...

  10. Multiple comparisons in drug efficacy studies: scientific or marketing principles?

    Science.gov (United States)

    Leo, Jonathan

    2004-01-01

    When researchers design an experiment to compare a given medication to another medication, a behavioral therapy, or a placebo, the experiment often involves numerous comparisons. For instance, there may be several different evaluation methods, raters, and time points. Although scientifically justified, such comparisons can be abused in the interests of drug marketing. This article provides two recent examples of such questionable practices. The first involves the case of the arthritis drug celecoxib (Celebrex), where the study lasted 12 months but the authors only presented 6 months of data. The second case involves the NIMH Multimodal Treatment Study (MTA) evaluating the efficacy of stimulant medication for attention-deficit hyperactivity disorder, where ratings made by several groups are reported in contradictory fashion. The MTA authors have not clarified the confusion, at least in print, suggesting that the actual findings of the study may have played little role in the authors' reported conclusions.

  11. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    ... with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and is accompanied by a p-value and a graphical interpretation which shows which subtest or which distances of the used test function(s) lead to rejection at the prescribed significance level of the test. Examples of null hypotheses from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include a goodness-of-fit test with several test functions, a goodness-of-fit test ...
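
    The global rank idea sketched in this record, rank the observed summary function pointwise among functions computed from simulations of the null model and base the p-value on the most extreme rank, can be condensed as follows. This is a simplified illustration of an extreme-rank Monte Carlo test (ties and the refinements of the published rank envelope test are ignored); the uniformity example and all names are assumptions.

```python
import numpy as np

def extreme_rank(i, curves):
    """Most extreme pointwise rank of curve i among all curves (smaller = more extreme)."""
    below = (curves <= curves[i]).sum(axis=0)   # curves lying at or below, pointwise
    above = (curves >= curves[i]).sum(axis=0)   # curves lying at or above, pointwise
    return np.minimum(below, above).min()

def rank_p_value(observed, simulated):
    """Monte Carlo p-value: observed shape (k,), simulated shape (s, k) from the null model."""
    curves = np.vstack([observed, simulated])
    r_obs = extreme_rank(0, curves)
    r_all = np.array([extreme_rank(i, curves) for i in range(len(curves))])
    return np.mean(r_all <= r_obs)              # fraction of curves at least as extreme

# Toy example: test whether a 1-D sample is uniform, using the empirical CDF as test function
rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 50)
ecdf = lambda x: (x[:, None] <= grid).mean(axis=0)

data = rng.beta(2, 2, size=100)                 # actually non-uniform
simulated = np.array([ecdf(rng.uniform(size=100)) for _ in range(999)])
print("global rank p-value:", rank_p_value(ecdf(data), simulated))
```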

  12. Reducing waste and errors: piloting lean principles at Intermountain Healthcare.

    Science.gov (United States)

    Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K

    2005-05-01

    The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. PILOT PROJECT at INTERMOUNTAIN HEALTHCARE: In a pilot project, participants made many improvements, ranging from simple changes implemented immediately (for example, heart monitor paper not available when a patient presented with a dysrythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and reduced significant amounts of wasted time for front-line workers. In one unit, turnaround time for pathologist reports from an anatomical pathology lab was reduced from five to two days. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.

  13. Multiple group membership and well-being

    DEFF Research Database (Denmark)

    Sønderlund, Anders L.; Morton, Thomas A.; Ryan, Michelle K.

    2017-01-01

    A growing body of research points to the value of multiple group memberships for individual well-being. However, much of this work considers group memberships very broadly and in terms of number alone. We conducted two correlational studies exploring how the relationship between multiple group ... multiple group membership and well-being, but only for individuals high in SIC. This effect was mediated by perceived identity expression and access to social support. Study 2 (N = 104) also found that multiple group memberships indirectly contributed to well-being via perceived identity expression ... and social support, as well as identity compatibility and perceived social inclusion. But, in this study the relationship between multiple group memberships and well-being outcomes was moderated by the perceived value and visibility of group memberships to others. Specifically, possessing multiple, devalued ...

  14. The effect of nurses' use of the principles of learning organization on organizational effectiveness.

    Science.gov (United States)

    Jeong, Seok Hee; Lee, Taewha; Kim, In Sook; Lee, Myung Ha; Kim, Mi Ja

    2007-04-01

    This paper is a report of a study to describe the effect on organizational effectiveness of nurses' use of the principles of learning organization. Since Senge proposed the learning organization model in 1990, the principles of learning organization have been considered as a new organizational vision. However, there is little empirical evidence that shows how nurses' use of the principles of learning organization affects organizational effectiveness in healthcare settings. A cross-sectional survey was used and the data were collected in 2003. Participants were 629 professional nurses who had worked full-time for more than 1 year in the general units of nine tertiary medical hospitals in South Korea. A questionnaire was distributed to nurse managers of nine hospitals, who distributed it to 665 nurses, 635 of whom responded (response rate 95.5%). Six returns were discarded due to incomplete responses, leaving 629 for data analysis. There was a statistically significant positive relationship between nurses' use of the principles of learning organization and organizational effectiveness. Hierarchical multiple regression analysis revealed that the concept explained an additional 24.9% of organizational commitment and a further 22.6% of job satisfaction. The learning organization principles of shared vision and team learning were statistically significant predictors for organizational effectiveness. Individual nurses can use the principles of learning organization to enhance organizational effectiveness. Intervention programmes that integrate and strengthen shared vision and team learning may be useful to enhance organizational effectiveness. Further research is required to identify other factors related to the principles of learning organization.

  15. MINIMALISM IN A PSYCHOLINGUISTIC POINT OF VIEW: BINDING PRINCIPLES AND ITS OPERATION IN ON-LINE PROCESSING OF COREFERENCE

    Directory of Open Access Journals (Sweden)

    José Ferrari Neto

    2014-12-01

    Full Text Available This article aims to evaluate the extent to which a formal model of grammar can be applied to the on-line mental processes that underlie sentential processing. To this end, an experiment was carried out to observe how the Binding Principles act in the processing of coreferential relations in Brazilian Portuguese (BP). The results suggest that there is a convergence between linguistic computation and theories of linguistic processing.

  16. PRINCIPLES OF THE EUROPEAN ADMINISTRATIVE SPACE AND THEIR IMPACT ON PERFORMANCE IN PUBLIC ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Alexandra Ema Cioclea

    2012-09-01

    Full Text Available The European Union is interested in ensuring that each national administration offers comparable administrative capacity through the quality of public services and the professionalism of its civil servants. At the same time, the European states are characterised by long and varied institutional histories, with different trajectories in their evolution. That is why public administration structures and regulations vary among the Member States, and a set of common principles can guide them towards administrative convergence and performance. This paper aims to analyze the shared principles of a common European Administrative Space and also to address the link between these principles and the performance of public institutions from a managerial point of view. The study is based on review and analysis of academic research, government documents and personal perspectives, extracting and linking key findings from existing research and practice. The paper argues that managerial theories on performance are compatible with public administration organizations and that some of the criteria are common to those promoted by the principles of the European Administrative Space.

  17. Mass Customization of Teaching and Training in Organizations: Design Principles and Prototype Evaluation

    Science.gov (United States)

    Nistor, Nicolae; Dehne, Anina; Drews, Frank Thomas

    2010-01-01

    In search of methods that improve the efficiency of teaching and training in organizations, several authors point out that mass customization (MC) is a principle that covers individual needs of knowledge and skills and, at the same time limits the development costs of customized training to those of mass training. MC is proven and established in…

  18. Enhancing the discussion of alternatives in EIA using principle component analysis leads to improved public involvement

    International Nuclear Information System (INIS)

    Kamijo, Tetsuya; Huang, Guangwei

    2017-01-01

    The purpose of this study is to show the effectiveness of principal component analysis (PCA) as a method of alternatives analysis useful for improving the discussion of alternatives and public involvement. This study examined public consultations by applying quantitative text analysis (QTA) to the minutes of meetings and showed a positive correlation between the discussion of alternatives and the sense of public involvement. The discussion of alternatives may improve public involvement. A table of multiple criteria analysis for alternatives with detailed scores may exclude the public from involvement due to the general public's limited capacity to understand the mathematical algorithm and to process too much information. PCA allowed the multiple criteria to be reduced to a small number of uncorrelated variables (principal components), displayed the merits and demerits of the alternatives, and potentially made the identification of preferable alternatives by the stakeholders easier. PCA is likely to enhance the discussion of alternatives and, as a result, lead to improved public involvement.
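
    The reduction described, many correlated assessment criteria compressed into a few uncorrelated components that are easier to discuss publicly, can be reproduced with standard principal component analysis on a standardized alternatives-by-criteria table. The sketch below uses made-up criteria and scores purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = project alternatives, columns = assessment criteria (illustrative scores only)
criteria = ["cost", "noise", "emissions", "jobs", "land_take"]
scores = np.array([
    [3.0, 2.0, 4.0, 5.0, 2.0],   # alternative A
    [4.0, 3.0, 3.0, 4.0, 3.0],   # alternative B
    [2.0, 5.0, 5.0, 2.0, 4.0],   # alternative C
    [5.0, 1.0, 2.0, 5.0, 1.0],   # alternative D
])

X = StandardScaler().fit_transform(scores)
pca = PCA(n_components=2).fit(X)

print("explained variance ratio:", pca.explained_variance_ratio_)
for pc, loading in enumerate(pca.components_, start=1):
    print(f"PC{pc} loadings:", dict(zip(criteria, np.round(loading, 2))))
print("alternative positions in component space:\n", np.round(pca.transform(X), 2))
```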

  19. Search for the QCD critical point at SPS energies

    CERN Document Server

    Anticic, T.; Barna, D.; Bartke, J.; Betev, L.; Bialkowska, H.; Blume, C.; Boimska, B.; Botje, M.; Bracinik, J.; Buncic, P.; Cerny, V.; Christakoglou, P.; Chung, P.; Chvala, O.; Cramer, J.G.; Csato, P.; Dinkelaker, P.; Eckardt, V.; Fodor, Z.; Foka, P.; Friese, V.; Gal, J.; Gazdzicki, M.; Genchev, V.; Gladysz, E.; Grebieszkow, K.; Hegyi, S.; Hohne, C.; Kadija, K.; Karev, A.; Kikola, D.; Kolesnikov, V.I.; Kornas, E.; Korus, R.; Kowalski, M.; Kreps, M.; Laszlo, A.; Lacey, R.; van Leeuwen, M.; Levai, P.; Litov, L.; Lungwitz, B.; Makariev, M.; Malakhov, A.I.; Mateev, M.; Melkumov, G.L.; Mischke, A.; Mitrovski, M.; Mrowczynski, St.; Palla, G.; Panagiotou, A.D.; Petridis, A.; Peryt, W.; Pikna, M.; Pluta, J.; Prindle, D.; Puhlhofer, F.; Renfordt, R.; Roland, C.; Roland, G.; Rybczynski, M.; Rybicki, A.; Sandoval, A.; Schmitz, N.; Schuster, T.; Seyboth, P.; Sikler, F.; Sitar, B.; Skrzypczak, E.; Slodkowski, M.; Stefanek, G.; Stock, R.; Strabel, C.; Strobele, H.; Susa, T.; Szentpetery, I.; Sziklai, J.; Szuba, M.; Szymanski, P.; Trubnikov, V.; Utvic, M.; Varga, D.; Vassiliou, M.; Veres, G.I.; Vesztergombi, G.; Vranic, D.; Wlodarczyk, Z.; Wojtaszek-Szwarc, A.; Yoo, I.K.; Abgrall, N.; Aduszkiewicz, A.; Andrieu, B.; Anticic, T.; Antoniou, N.; Argyriades, J.; Asryan, A.G.; Blondel, A.; Blumer, J.; Boldizsar, L.; Bravar, A.; Brzychczyk, J.; Bubak, A.; Bunyatov, S.A.; Choi, K.-U.; Chung, P.; Cleymans, J.; Derkach, D.A.; Diakonos, F.; Dominik, W.; Dumarchez, J.; Engel, R.; Ereditato, A.; Feofilov, G.A.; Ferrero, A.; Gazdzicki, M.; Golubeva, M.; Grzeszczuk, A.; Guber, F.; Hasegawa, T.; Haungs, A.; Igolkin, S.; Ivanov, A.S.; Ivashkin, A.; Katrynska, N.; Kielczewska, D.; Kisiel, J.; Kobayashi, T.; Kolev, D.; Kolevatov, R.S.; Kondratiev, V.P.; Kowalski, S.; Kurepin, A.; Lacey, R.; Lyubushkin, V.V.; Majka, Z.; Marchionni, A.; Marcinek, A.; Maris, I.; Matveev, V.; Meregaglia, A.; Messina, M.; Mijakowski, P.; Montaruli, T.; Murphy, S.; Nakadaira, T.; Naumenko, P.A.; Nikolic, V.; Nishikawa, K.; Palczewski, T.; Planeta, R.; Popov, B.A.; Posiadala, M.; Przewlocki, P.; Rauch, W.; Ravonel, M.; Rohrich, D.; Rondio, E.; Rossi, B.; Roth, M.; Rubbia, A.; Sadovsky, A.; Sakashita, K.; Sekiguchi, T.; Seyboth, P.; Shibata, M.; Sissakian, A.N.; Sorin, A.S.; Staszel, P.; Stepaniak, J.; Strabel, C.; Stroebele, H.; Tada, M.; Taranenko, A.; Tsenov, R.; Ulrich, R.; Unger, M.; Vechernin, V.V.; Zipper, W.

    2009-01-01

    Lattice QCD calculations locate the QCD critical point at energies accessible at the CERN Super Proton Synchrotron (SPS). We present average transverse momentum and multiplicity fluctuations, as well as baryon and anti-baryon transverse mass spectra which are expected to be sensitive to effects of the critical point. The future CP search strategy of the NA61/SHINE experiment at the SPS is also discussed.

  20. The balance principle in scientific research.

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Wang, Qi

    2012-05-01

    The principles of balance, randomization, control and repetition, which are closely related, constitute the four principles of scientific research. The balance principle is the kernel of the four principles and runs through the other three. However, in scientific research the balance principle is often overlooked. If the balance principle is not properly applied, the research conclusions are easily called into question, which may lead to the failure of the whole research. Therefore, it is essential to have a good command of the balance principle in scientific research. This article stresses the definition and function of the balance principle, the strategies and detailed measures to improve balance in scientific research, and the analysis of common mistakes involving the use of the balance principle in scientific research.

  1. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
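
    The question posed, the smallest antenna count that meets a given outage constraint, can also be explored numerically. The sketch below is a Monte Carlo simulation of an open-loop i.i.d. Rayleigh MIMO link with equal power allocation and no HARQ; it is not the paper's analysis, and the rate, SNR and outage target are assumed numbers.

```python
import numpy as np

def outage_probability(nt, nr, snr_db, rate, trials=20000, seed=0):
    """Monte Carlo outage probability of an nt x nr i.i.d. Rayleigh MIMO channel.

    Outage event: log2 det(I + (SNR/nt) H H^H) < rate  (open-loop, equal power split).
    """
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        capacity = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
        outages += capacity < rate
    return outages / trials

# Smallest symmetric antenna count meeting a 1% outage target (illustrative numbers)
target, rate, snr_db = 0.01, 8.0, 10.0
for n in range(1, 9):
    p = outage_probability(n, n, snr_db, rate)
    if p < target:
        print(f"{n} x {n} antennas suffice: outage = {p:.4f}")
        break
```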

  2. The inconstant "principle of constancy".

    Science.gov (United States)

    Kanzer, M

    1983-01-01

    A review of the principle of constancy, as it appeared in Freud's writings, shows that it was inspired by his clinical observations, first with Breuer in the field of cathartic therapy and then through experiences in the early usage of psychoanalysis. The recognition that memories repressed in the unconscious created increasing tension, and that this was relieved with dischargelike phenomena when the unconscious was made conscious, was the basis for his claim to originality in this area. The two principles of "neuronic inertia" Freud expounded in the Project (1895), are found to offer the key to the ambiguous definition of the principle of constancy he was to offer in later years. The "original" principle, which sought the complete discharge of energy (or elimination of stimuli), became the forerunner of the death drive; the "extended" principle achieved balances that were relatively constant, but succumbed in the end to complete discharge. This was the predecessor of the life drives. The relation between the constancy and pleasure-unpleasure principles was maintained for twenty-five years largely on an empirical basis which invoked the concept of psychophysical parallelism between "quantity" and "quality." As the links between the two principles were weakened by clinical experiences attendant upon the growth of ego psychology, a revision of the principle of constancy was suggested, and it was renamed the Nirvana principle. Actually it was shifted from alignment with the "extended" principle of inertia to the original, so that "constancy" was incongruously identified with self-extinction. The former basis for the constancy principle, the extended principle of inertia, became identified with Eros. Only a few commentators seem aware of this radical transformation, which has been overlooked in the Standard Edition of Freud's writings. Physiological biases in the history and conception of the principle of constancy are noted in the Standard Edition. The historical

  3. Basic principles of aggressive rehabilitation after anterior cruciate ligament reconstruction

    Directory of Open Access Journals (Sweden)

    Dubljanin-Raspopović Emilija

    2005-01-01

    Full Text Available Rehabilitation after ACL (anterior cruciate ligament) reconstruction has drastically changed over the last decade, with the adoption of a more aggressive approach, right from the first day after surgery. Progress in the effectiveness of rehabilitation is based on improvements in operative techniques, as well as on the encouraging results of histological studies regarding graft healing. Despite a huge number of research papers on this topic, a rehabilitation gold standard has still not been established, due to the complexity of this problem. In this review, we point out the basic principles of rehabilitation after arthroscopically assisted ACL reconstruction based on current practices, as well as the importance of specific procedures for the prevention of complications during the postoperative period. The importance of range-of-motion exercises, early weight bearing, an appropriate gait scheme, patella mobilisation, pain and oedema control, as well as stretching and balance exercises is explained. The functional advantages of closed kinetic chain exercises, as well as their influence on the graft, are also described in comparison to open kinetic chain exercises. The fundamentals of returning to sports are revealed and the specific aspects of rehabilitation regarding graft choice are pointed out. While waiting for new clinical investigations, which are expected to enable the establishment of a rehabilitation gold standard, the outlined principles should be followed. The complexity of this injury requires treatment in highly specialised institutions.

  4. Principles of control for decoherence-free subsystems.

    Science.gov (United States)

    Cappellaro, P; Hodges, J S; Havel, T F; Cory, D G

    2006-07-28

    Decoherence-free subsystems (DFSs) are a powerful means of protecting quantum information against noise with known symmetry properties. Although Hamiltonians that can implement a universal set of logic gates on DFS encoded qubits without ever leaving the protected subsystem theoretically exist, the natural Hamiltonians that are available in specific implementations do not necessarily have this property. Here we describe some of the principles that can be used in such cases to operate on encoded qubits without losing the protection offered by the DFSs. In particular, we show how dynamical decoupling can be used to control decoherence during the unavoidable excursions outside of the DFS. By means of cumulant expansions, we show how the fidelity of quantum gates implemented by this method on a simple two physical qubit DFS depends on the correlation time of the noise responsible for decoherence. We further show by means of numerical simulations how our previously introduced "strongly modulating pulses" for NMR quantum information processing can permit high-fidelity operations on multiple DFS encoded qubits in practice, provided that the rate at which the system can be modulated is fast compared to the correlation time of the noise. The principles thereby illustrated are expected to be broadly applicable to many implementations of quantum information processors based on DFS encoded qubits.

  5. Elliott wave principle and the corresponding fractional Brownian motion in stock markets: Evidence from Nikkei 225 index

    International Nuclear Information System (INIS)

    Ilalan, Deniz

    2016-01-01

    Highlights: • The Hausdorff dimension of the Elliott Wave trajectories is computed. • A linkage between the Elliott Wave principle and fractional Brownian motion is proposed. • The log-normality of stock returns is discussed from a fractal point of view. - Abstract: This paper examines one of the vital technical analysis indicators known as the Elliott wave principle. Since these waves have a fractal nature with patterns that are not exact, we first determine their dimension. Our second aim is to find a linkage between the Elliott wave principle and fractional Brownian motion by comparing their Hausdorff dimensions. Thirdly, we consider the Nikkei 225 index during the Japanese asset price bubble, which is a perfect example of an Elliott wave.
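
    The link drawn here rests on a standard fact: a fractional Brownian motion with Hurst exponent H has a graph of fractal dimension 2 − H, so an estimate of H for a price path gives a dimension to compare with the Elliott wave trajectory. The sketch below uses a simple increment-scaling estimator on an ordinary random walk (H = 0.5); it is an illustration only, not the paper's estimator or data.

```python
import numpy as np

def hurst_exponent(series, max_lag=50):
    """Estimate H from the scaling std(X[t+lag] - X[t]) ~ lag**H."""
    lags = np.arange(2, max_lag)
    stds = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(stds), 1)   # slope of the log-log fit
    return H

rng = np.random.default_rng(42)
walk = np.cumsum(rng.normal(size=10000))               # ordinary Brownian motion, H = 0.5
H = hurst_exponent(walk)
print(f"estimated H = {H:.2f}, fractal (graph) dimension = {2 - H:.2f}")
```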

  6. Predicting catalysis: Understanding ammonia synthesis from first-principles calculations

    DEFF Research Database (Denmark)

    Hellmann, A.; Baerends, E.J.; Biczysko, M.

    2006-01-01

    Here, we give a full account of a large collaborative effort toward an atomic-scale understanding of modern industrial ammonia production over ruthenium catalysts. We show that overall rates of ammonia production can be determined by applying various levels of theory (including transition state ... for any given point along an industrial reactor, and the kinetic results can be integrated over the catalyst bed to determine the industrial reactor yield. We find that, given the present uncertainties, the rate of ammonia production is well-determined directly from our atomic-scale calculations. Furthermore, our studies provide new insight into several related fields, for instance, gas-phase and electrochemical ammonia synthesis. The success of predicting the outcome of a catalytic reaction from first-principles calculations supports our point of view that, in the future, theory will be a fully ...

  7. Analysis of Operating Principles with S-system Models

    Science.gov (United States)

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operation principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady-states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
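
    The property exploited here, that S-system steady states reduce to linear algebraic equations, comes from equating production and degradation, α_i ∏_j X_j^{g_ij} = β_i ∏_j X_j^{h_ij}, and taking logarithms, which gives (G − H) y = ln(β/α) with y = ln X. The sketch below solves a hypothetical two-variable S-system whose rate constants and kinetic orders are assumptions.

```python
import numpy as np

# Hypothetical 2-variable S-system:
#   dX1/dt = 2.0 * X2**0.5 - 1.0 * X1
#   dX2/dt = 1.0 * X1**0.8 - 3.0 * X2
alpha = np.array([2.0, 1.0])
beta = np.array([1.0, 3.0])
G = np.array([[0.0, 0.5],      # kinetic orders of the production terms
              [0.8, 0.0]])
H = np.array([[1.0, 0.0],      # kinetic orders of the degradation terms
              [0.0, 1.0]])

# Steady state in log space: (G - H) @ ln(X) = ln(beta / alpha)
y = np.linalg.solve(G - H, np.log(beta / alpha))
X_ss = np.exp(y)
print("steady state:", X_ss)

# Verify that production equals degradation at the steady state
production = alpha * np.prod(X_ss ** G, axis=1)
degradation = beta * np.prod(X_ss ** H, axis=1)
print(np.allclose(production, degradation))
```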

  8. SUMMIT (Serially Unified Multicenter Multiple Sclerosis Investigation): creating a repository of deeply phenotyped contemporary multiple sclerosis cohorts.

    Science.gov (United States)

    Bove, Riley; Chitnis, Tanuja; Cree, Bruce Ac; Tintoré, Mar; Naegelin, Yvonne; Uitdehaag, Bernard Mj; Kappos, Ludwig; Khoury, Samia J; Montalban, Xavier; Hauser, Stephen L; Weiner, Howard L

    2017-08-01

    There is a pressing need for robust longitudinal cohort studies in the modern treatment era of multiple sclerosis. Build a multiple sclerosis (MS) cohort repository to capture the variability of disability accumulation, as well as provide the depth of characterization (clinical, radiologic, genetic, biospecimens) required to adequately model and ultimately predict a patient's course. Serially Unified Multicenter Multiple Sclerosis Investigation (SUMMIT) is an international multi-center, prospectively enrolled cohort with over a decade of comprehensive follow-up on more than 1000 patients from two large North American academic MS Centers (Brigham and Women's Hospital (Comprehensive Longitudinal Investigation of Multiple Sclerosis at the Brigham and Women's Hospital (CLIMB; BWH)) and University of California, San Francisco (Expression/genomics, Proteomics, Imaging, and Clinical (EPIC))). It is bringing online more than 2500 patients from additional international MS Centers (Basel (Universitätsspital Basel (UHB)), VU University Medical Center MS Center Amsterdam (MSCA), Multiple Sclerosis Center of Catalonia-Vall d'Hebron Hospital (Barcelona clinically isolated syndrome (CIS) cohort), and American University of Beirut Medical Center (AUBMC-Multiple Sclerosis Interdisciplinary Research (AMIR)). We provide evidence for harmonization of two of the initial cohorts in terms of the characterization of demographics, disease, and treatment-related variables; demonstrate several proof-of-principle analyses examining genetic and radiologic predictors of disease progression; and discuss the steps involved in expanding SUMMIT into a repository accessible to the broader scientific community.

  9. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting its spatial continuity. Because of the high geometric and radiometric accuracy of Landsat 7 data, a number of approaches have been proposed to fill the gaps; however, all of them have evident constraints that limit universal application. The main issues in gap filling are an inability to reproduce continuous features such as meandering streams or roads, and difficulty maintaining the shape of small objects when filling gaps in heterogeneous areas. The aim of this study is to assess the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The method was examined across a range of land cover types, including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas, to demonstrate its capacity to recover gaps accurately. Its prediction accuracy was also compared with that of other gap-filling approaches previously shown to offer satisfactory results, in both homogeneous and heterogeneous areas. The results show that the Direct Sampling method provides sufficiently accurate predictions across this variety of land cover types, from homogeneous to heterogeneous areas. It also outperforms the other gap-filling approaches in heterogeneous land cover when no input image is available, or when the only available input image is temporally distant from the target image.
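
    The core of the Direct Sampling idea can be sketched in a few lines: for every gap pixel, the pattern formed by its nearest informed neighbours is compared against the patterns around randomly scanned informed pixels of the same image, and the first candidate whose pattern distance falls below a threshold donates its value. The Python sketch below follows this logic with invented parameter values (neighbourhood size, scan limit, distance threshold); it is a toy illustration, not the implementation evaluated in the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def direct_sampling_fill(img, mask, n_neighbors=8, max_scan=500, threshold=0.05):
            """Toy direct-sampling gap fill. img: 2-D array; mask: True where data are missing.
            For each gap pixel, the data event (values of its nearest informed neighbours) is
            compared with the same offsets around randomly scanned informed pixels; the first
            candidate below the distance threshold donates its value."""
            out = img.astype(float)
            known = ~mask
            ky, kx = np.nonzero(known)
            gy, gx = np.nonzero(mask)
            for y, x in zip(gy, gx):
                d2 = (ky - y) ** 2 + (kx - x) ** 2
                idx = np.argsort(d2)[:n_neighbors]
                off = np.stack([ky[idx] - y, kx[idx] - x], axis=1)
                vals = out[ky[idx], kx[idx]]
                best, best_dist = vals.mean(), np.inf
                for j in rng.choice(len(ky), size=min(max_scan, len(ky)), replace=False):
                    cy, cx = ky[j], kx[j]
                    ny, nx = cy + off[:, 0], cx + off[:, 1]
                    inside = (ny >= 0) & (ny < img.shape[0]) & (nx >= 0) & (nx < img.shape[1])
                    if not inside.all() or not known[ny, nx].all():
                        continue
                    dist = np.mean(np.abs(out[ny, nx] - vals))
                    if dist < best_dist:
                        best, best_dist = out[cy, cx], dist
                    if dist <= threshold:
                        break
                out[y, x] = best
            return out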

  10. Collapsing of multigroup cross sections in optimization problems solved by means of the maximum principle of Pontryagin

    International Nuclear Information System (INIS)

    Anton, V.

    1979-05-01

    A new formulation of multigroup cross-section collapsing, based on the conservation of the point or zone value of the Hamiltonian, is presented. This approach is suited to optimization problems solved by means of the maximum principle of Pontryagin. (author)
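
    The Hamiltonian-conserving weighting itself is not spelled out in the abstract, so the Python sketch below (with illustrative numbers) only shows the generic condensation step that any such scheme builds on: fine-group cross sections are averaged into broad groups with a weighting spectrum, which is the flux in the conventional case and would be replaced by a weight chosen to conserve the point or zone value of the Hamiltonian in the proposed formulation.

        import numpy as np

        def collapse(sigma_fine, weight_fine, group_map):
            """Condense fine-group cross sections into broad groups with a weighting spectrum:
            Sigma_G = sum_{g in G} sigma_g * w_g / sum_{g in G} w_g.
            Conventionally w is the flux; a Hamiltonian-conserving variant would supply a
            different weight (illustrative sketch only, not the paper's formulation)."""
            group_map = np.asarray(group_map)
            n_broad = group_map.max() + 1
            sigma_broad = np.zeros(n_broad)
            for G in range(n_broad):
                sel = group_map == G
                sigma_broad[G] = np.sum(sigma_fine[sel] * weight_fine[sel]) / np.sum(weight_fine[sel])
            return sigma_broad

        # Example: 6 fine groups collapsed into 2 broad groups with a flux weighting
        sigma_fine = np.array([1.2, 1.0, 0.8, 0.6, 0.5, 0.4])
        flux_fine  = np.array([0.1, 0.3, 0.6, 1.0, 0.7, 0.2])
        print(collapse(sigma_fine, flux_fine, [0, 0, 0, 1, 1, 1]))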

  11. Applying the principles of thermoeconomics to the organic Rankine Cycle for low temperature waste heat recovery

    International Nuclear Information System (INIS)

    Xiao, F.; Lilun, Q.; Changsun, S.

    1989-01-01

    In this paper, thermoeconomic principles are used to study the selection of working fluids and the choice of cycle parameters in an organic Rankine cycle for low-temperature waste heat recovery. The parameter ξ, the product of the waste heat recovery ratio and the real cycle thermal efficiency, is suggested as a unified thermodynamic criterion for the selection of working fluids. Mathematical expressions are developed, on a thermoeconomic basis, to determine the optimal boiling temperature and the optimal pinch point temperature difference in the heat recovery exchanger.
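
    The trade-off captured by the criterion ξ can be illustrated with a short calculation: raising the boiling temperature improves the cycle thermal efficiency but reduces the fraction of waste heat that can be recovered down to the pinch-limited exchanger outlet, so their product has an interior optimum. The Python sketch below uses invented temperatures, a fixed pinch point temperature difference and a crude efficiency model (a fixed fraction of the Carnot efficiency); it only mimics the qualitative behaviour and is not the thermoeconomic optimization of the paper.

        import numpy as np

        # All temperatures in kelvin; all values and simplifications are illustrative assumptions.
        T_source_in = 423.0     # waste heat source inlet (~150 C)
        T_ambient   = 298.0     # reference temperature for the recoverable heat
        T_cond      = 313.0     # condensing temperature (~40 C)
        dT_pinch    = 10.0      # pinch point temperature difference in the recovery exchanger
        eta_rel     = 0.6       # fraction of the Carnot efficiency actually achieved

        T_boil = np.linspace(330.0, 400.0, 141)            # candidate boiling temperatures
        recovery = (T_source_in - (T_boil + dT_pinch)) / (T_source_in - T_ambient)
        eta_cycle = eta_rel * (1.0 - T_cond / T_boil)
        xi = recovery * eta_cycle                          # the unified criterion

        best = np.argmax(xi)
        print(f"optimal boiling temperature ~ {T_boil[best]:.1f} K, xi = {xi[best]:.3f}")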

  12. Principle study on the signal connection at transabdominal fetal pulse oximetry

    Directory of Open Access Journals (Sweden)

    Böttrich Marcel

    2016-09-01

    Full Text Available Transabdominal fetal pulse oximetry is an approach to measure the oxygen saturation of the unborn child non-invasively. The principle of pulse oximetry is applied to the abdomen of a pregnant woman, such that the measured signal includes both the maternal and the fetal pulse curves. One of the major challenges is to extract the shape of the fetal pulse curve from the mixed signal for computation of the oxygen saturation. In this paper we analyze how the fetal and maternal pulse curves combine in the measured signal. A time-varying finite element model is used to reproduce the basic measurement environment, including a bulk tissue and two independently pulsing arteries modeling the fetal and maternal blood circuits. The distribution of the light fluence rate in the model is computed by solving the diffusion equation. From the detectors we extracted the time-dependent fluence rate and analyzed its components. The frequency spectra of the signals show peaks at the fetal and maternal basic frequencies. Additional signal components are visible in the spectra, indicating multiplicative coupling of the fetal and maternal pulse curves. We conclude that the signal model underlying algorithms for robust extraction of the fetal pulse curve shape has to account for both additive and multiplicative signal coupling.
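
    The spectral signature described above is easy to reproduce: adding two pulsatile signals yields peaks only at the two basic frequencies, whereas multiplying them introduces sum and difference (sideband) components. The Python sketch below uses invented pulse rates and modulation depths purely to illustrate this distinction; it is not the finite element model of the study.

        import numpy as np

        fs = 250.0                                   # sampling rate in Hz (illustrative)
        t = np.arange(0, 20, 1 / fs)
        f_mat, f_fet = 1.2, 2.3                      # maternal and fetal pulse rates in Hz (illustrative)
        maternal = 1.0 + 0.05 * np.sin(2 * np.pi * f_mat * t)
        fetal    = 1.0 + 0.02 * np.sin(2 * np.pi * f_fet * t)

        additive       = maternal + fetal            # peaks only at f_mat and f_fet
        multiplicative = maternal * fetal            # extra peaks at f_mat +/- f_fet (sidebands)

        def dominant_peaks(x, rel_threshold=0.005):
            """Frequencies whose spectral magnitude exceeds a small fraction of the maximum."""
            spec = np.abs(np.fft.rfft(x - x.mean()))
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            return freqs[spec > rel_threshold * spec.max()]

        print("additive      :", dominant_peaks(additive))        # ~ [1.2, 2.3]
        print("multiplicative:", dominant_peaks(multiplicative))  # ~ [1.1, 1.2, 2.3, 3.5]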

  13. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Directory of Open Access Journals (Sweden)

    Mehmet YUMURTACI

    2009-01-01

    Full Text Available Space Vector Modulation (SVM) is a popular and important PWM technique for three-phase voltage source inverters used in the control of induction motors. In this study, the harmonic analysis of Space Vector PWM (SVPWM) is investigated using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together, using the principle of superposition, to obtain the Fourier coefficients of the complete PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained by applying SVPWM for different values of the variable parameters are compared with the results obtained with the multiple-pulses method.
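
    The superposition step can be written down compactly: a rectangular pulse of amplitude A between t1 and t2 within the fundamental period T has closed-form Fourier coefficients, and the coefficients of the complete PWM waveform are simply the sums over all positive and negative pulses. The Python sketch below implements exactly this bookkeeping with an invented pulse pattern; the pulse times of an actual SVPWM waveform would come from the space vector switching calculation.

        import numpy as np

        def harmonic_magnitudes(pulses, T, n_max=20):
            """Harmonic magnitudes of a waveform built from rectangular pulses, by superposition.
            pulses: list of (A, t1, t2) with A < 0 for negative pulses. For each harmonic n:
              a_n = A/(n*pi) * (sin(n*w*t2) - sin(n*w*t1)),  b_n = A/(n*pi) * (cos(n*w*t1) - cos(n*w*t2)),
            summed over all pulses, with w = 2*pi/T."""
            w = 2 * np.pi / T
            n = np.arange(1, n_max + 1)
            a = np.zeros(n_max)
            b = np.zeros(n_max)
            for A, t1, t2 in pulses:
                a += A / (n * np.pi) * (np.sin(n * w * t2) - np.sin(n * w * t1))
                b += A / (n * np.pi) * (np.cos(n * w * t1) - np.cos(n * w * t2))
            return np.sqrt(a ** 2 + b ** 2)

        # Example: a crude two-pulse-per-half-cycle waveform (pulse times are illustrative only)
        T = 0.02                      # 50 Hz fundamental period
        pulses = [(+1.0, 0.001, 0.004), (+1.0, 0.006, 0.009),
                  (-1.0, 0.011, 0.014), (-1.0, 0.016, 0.019)]
        print(harmonic_magnitudes(pulses, T, n_max=10))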

  14. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained with the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares with that of images reconstructed from the same data using a standard iterative method based on single cones. The results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. The finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, despite being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, limiting the image resolution that can be obtained with MCCC and the TCR algorithm.
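
    For orientation, the localization step behind triple-cone reconstruction can be posed as a small numerical problem: each Compton cone is the set of points whose direction from the cone apex makes the scattering angle with the cone axis, and a point consistent with all three cones can be sought, for example, by nonlinear least squares. The Python sketch below does this for synthetic cones constructed to contain a known source position; it illustrates only the geometric constraint (and is subject to local minima and the “thick cone” uncertainties discussed above), and is not the authors' TCR algorithm.

        import numpy as np
        from scipy.optimize import least_squares

        def cone_residual(x, apex, axis, cos_theta):
            """Residual of one cone constraint: the angle between (x - apex) and the cone axis
            should equal the Compton scattering angle theta."""
            v = x - apex
            return np.dot(v, axis) / np.linalg.norm(v) - cos_theta

        def locate_source(cones, x0):
            """Find a point (approximately) common to three cones by nonlinear least squares.
            cones: list of (apex, unit_axis, cos_theta); x0: initial guess."""
            fun = lambda x: [cone_residual(x, a, d, c) for a, d, c in cones]
            return least_squares(fun, x0).x

        # Illustrative example: cones constructed to pass through a source at (1, 2, 3)
        src = np.array([1.0, 2.0, 3.0])
        apexes = [np.array(a, float) for a in ([0, 0, 0], [10, 0, 0], [0, 10, 0])]
        axes = [np.array(d, float) / np.linalg.norm(d)
                for d in ([1, 1, 1], [-1, 0.5, 0.5], [0.2, -1, 0.4])]
        cones = []
        for a, d in zip(apexes, axes):
            v = src - a
            cones.append((a, d, np.dot(v, d) / np.linalg.norm(v)))   # cos(theta) so the cone contains src
        print(locate_source(cones, x0=np.array([0.5, 0.5, 0.5])))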

  15. An Aesthetic Interpretation of the Pilates Method: its principles and convergences with somatic education

    Directory of Open Access Journals (Sweden)

    Odilton José

    2014-12-01

    Full Text Available The Pilates method, originally called contrology, has been gaining a significant following in Brazil. This article discusses the method’s principles and convergences with somatic education by analyzing the original works of Joseph Pilates using an aesthetic-philosophical approach. It seems implicit that the Pilates method can be instrumental for the performing arts, and the article accordingly points to some connections in this regard. However, the article also argues that, in the absence of the guiding principles proposed by Pilates, the method ceases to be an art of control and instead is reduced to something not much different from other physical exercises.

  16. Forms of the cooperation principle in environmental law in the Federal Republic of Germany. Formen des Kooperationsprinzips im Umweltrecht der Bundesrepublik Deutschland

    Energy Technology Data Exchange (ETDEWEB)

    Mueggenborg, H J

    1990-01-01

    The cooperation principle is not a legal principle in the sense of the social state principle or the principle of the democratic constitutional state. Legal limits are necessary so that the cooperation principle does not degenerate into collusion and complicity. Legal scholarship has taken up this subject only recently. The contribution investigates the various forms of the cooperation principle: technical control boards as instruments to relieve the state; committees under private and public law for setting technical standards; advisory bodies of public administration; organized hearings; and environmental protection officers in industry. It also examines the legal admissibility of concerted actions, their advantages and disadvantages, their effects on legal protection, and the legal and factual conditions involved. The positive sides of cooperation should prevail so that environmental protection benefits. (orig./HP).

  17. Recent studies of point defects by Huang scattering of x rays

    International Nuclear Information System (INIS)

    Maeta, Hiroshi

    1977-01-01

    Huang scattering allows measurement of the symmetry and strength of point defects produced by irradiation and constitutes a very sensitive method for observing the clustering that occurs during irradiation or annealing. In the present review, the principles and characteristics of Huang scattering and recent investigations using this technique are described. [J.Cryst.Soc.Japan 19,231(1977)] (auth.)

  18. Introducing Principles of Land Surveying by Assigning a Practical Project

    OpenAIRE

    2016-01-01

    A practical project is used in an engineering surveying course to expose sophomore and junior civil engineering students to several important issues related to the use of basic principles of land surveying. The project, the design of a two-lane rural highway connecting two arbitrary points, requires students to draw the profile of the proposed highway along with the existing ground level. Areas of all cross-sections are then computed to enable quantity computations between th...
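
    The quantity computation mentioned at the end of the abstract is conventionally done with the average end-area method, in which the earthwork volume between two consecutive stations is the mean of the two cross-section areas multiplied by the station spacing. A minimal Python sketch with invented areas and spacing:

        # Average end-area method: volume between consecutive cross-sections is the mean of
        # their areas times the distance between stations (all values illustrative).
        def end_area_volume(areas, spacing):
            return sum(0.5 * (a1 + a2) * spacing for a1, a2 in zip(areas[:-1], areas[1:]))

        cut_areas = [12.4, 15.1, 9.8, 7.2, 11.0]   # cross-section areas in m^2 at successive stations
        print(end_area_volume(cut_areas, spacing=20.0), "m^3")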

  19. On Fermat's principle for causal curves in time oriented Finsler spacetimes

    Science.gov (United States)

    Gallego Torromé, Ricardo; Piccione, Paolo; Vitório, Henrique

    2012-12-01

    In this work, a version of Fermat's principle for causal curves with the same energy in time orientable Finsler spacetimes is proved. We calculate the second variation of the time arrival functional along a geodesic in terms of the index form associated with the Finsler spacetime Lagrangian. Then the character of the critical points of the time arrival functional is investigated and a Morse index theorem in the context of Finsler spacetime is presented.
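
    For orientation only, the Riemannian prototype of the index form referred to above is

        I_\gamma(V, W) = \int_a^b \Big( \langle \nabla_{\dot\gamma} V, \nabla_{\dot\gamma} W \rangle
                         - \langle R(V, \dot\gamma)\dot\gamma, W \rangle \Big) \, dt ,

    and the classical Morse index theorem identifies the index of I_\gamma with the number of conjugate points along \gamma, counted with multiplicity. In the Finsler spacetime setting of the paper the inner product is replaced by the fundamental tensor of the Finsler spacetime Lagrangian, so this display is an analogy for the reader rather than the paper's formula.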

  20. Schroedinger operators with point interactions and short range expansions

    International Nuclear Information System (INIS)

    Albeverio, S.; Hoeegh-Krohn, R.; Oslo Univ.

    1984-01-01

    We give a survey of recent results concerning Schroedinger operators with point interactions in R³. In the case where the point interactions are located at a discrete set of points we discuss results about the resolvent, the spectrum, the resonances and the scattering quantities. We also discuss the approximation of point interactions by short range local potentials (short range or low energy expansions) and the one electron model of a 3-dimensional crystal. Moreover we discuss Schroedinger operators with Coulomb plus point interactions, with applications to the determination of scattering lengths and of level shifts in mesic atoms. Further applications to the multiple well problem, to multiparticle systems, to crystals with random impurities, to polymers and quantum fields are also briefly discussed. (orig.)
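
    As a concrete special case, stated here for orientation only (the survey itself treats arbitrary discrete sets of centers), the resolvent of the Schroedinger operator with a single point interaction of strength \alpha centered at the origin in R³ has the well-known kernel

        (-\Delta_\alpha - k^2)^{-1}(x, y) = G_k(x - y) + \Big( \alpha - \frac{ik}{4\pi} \Big)^{-1} G_k(x) \, G_k(y),
        \qquad G_k(x) = \frac{e^{ik|x|}}{4\pi|x|}, \quad \operatorname{Im} k > 0,

    from which the spectrum, the resonances and the scattering quantities mentioned above can be read off directly in this simplest case.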