WorldWideScience

Sample records for analog fixed maximum

  1. Analog Fixed Maximum Power Point Control for a PWM Step-down Converter for Water Pumping Installations

    DEFF Research Database (Denmark)

    Beltran, H.; Perez, E.; Chen, Zhe

    2009-01-01

    This paper describes a Fixed Maximum Power Point analog control used in a step-down Pulse Width Modulated power converter. The DC/DC converter drives a DC motor used in small water pumping installations, without any electric storage device. The power supply is provided by PV panels working around...
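    As a rough companion to the record above, the sketch below mimics a fixed maximum power point strategy in discrete time: the duty cycle of a step-down converter is nudged so that the PV panel voltage tracks a fixed reference taken as a constant fraction of an assumed open-circuit voltage. It is not the paper's analog circuit; every numeric value (panel voltage, reference fraction, gain, limits) is hypothetical.

```python
# Minimal sketch of a fixed-reference maximum power point control loop for a
# PV-fed step-down (buck) converter. This is NOT the paper's analog circuit;
# it is a discrete-time approximation with hypothetical parameter values.

V_OC_NOMINAL = 21.0          # assumed nominal open-circuit voltage of the PV panel (V)
K_FIXED_MPP = 0.76           # assumed fixed fraction of V_oc taken as the MPP voltage
V_REF = K_FIXED_MPP * V_OC_NOMINAL   # fixed reference voltage the loop regulates to

KP = 0.02                    # proportional gain of the (here digital) controller
DT_MIN, DT_MAX = 0.05, 0.95  # duty-cycle limits of the PWM stage

def update_duty_cycle(duty: float, v_panel: float) -> float:
    """One control step: move the duty cycle so the panel voltage tracks V_REF.

    Raising the duty cycle of a buck converter draws more current from the
    panel and pulls its operating voltage down, hence the sign convention.
    """
    error = v_panel - V_REF
    duty += KP * error
    return min(max(duty, DT_MIN), DT_MAX)

# Example: panel sitting above the fixed MPP voltage -> duty cycle increases.
d = 0.5
for v in (18.5, 17.9, 17.2, 16.4, 16.0):   # hypothetical measured panel voltages
    d = update_duty_cycle(d, v)
    print(f"v_panel={v:5.1f} V  duty={d:.3f}")
```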

  2. Fixed-parameter tractability of the maximum agreement supertree problem.

    Science.gov (United States)

    Guillemot, Sylvain; Berry, Vincent

    2010-01-01

    Given a set L of labels and a collection of rooted trees whose leaves are bijectively labeled by some elements of L, the Maximum Agreement Supertree (SMAST) problem is given as follows: find a tree T on a largest label set L' ⊆ L that homeomorphically contains every input tree restricted to L'. The problem has phylogenetic applications to infer supertrees and perform tree congruence analyses. In this paper, we focus on the parameterized complexity of this NP-hard problem, considering different combinations of parameters as well as particular cases. We show that SMAST on k rooted binary trees on a label set of size n can be solved in O((8n)^k) time, which is an improvement with respect to the previously known O(n^(3k^2)) time algorithm. In this case, we also give an O((2k)^(pk) n^2) time algorithm, where p is an upper bound on the number of leaves of L missing in a SMAST solution. This shows that SMAST can be solved efficiently when the input trees are mostly congruent. Then, for the particular case where any triple of leaves is contained in at least one input tree, we give O(4^p n^3) and O(3.12^p + n^4) time algorithms, obtaining the first fixed-parameter tractable algorithms on a single parameter for this problem. We also obtain intractability results for several combinations of parameters, thus indicating that it is unlikely that fixed-parameter tractable algorithms can be found in these particular cases.

  3. 47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.

    Science.gov (United States)

    2010-10-01

    47 CFR § 25.211 (Telecommunication, Federal Communications Commission, Common Carrier Services, Satellite Communications, Technical Standards): Analog video transmissions in the Fixed-Satellite Services. (a) Downlink analog video transmissions in the band 3700-4200...

  4. ASYMPTOTIC NORMALITY OF MAXIMUM QUASI-LIKELIHOOD ESTIMATORS IN GENERALIZED LINEAR MODELS WITH FIXED DESIGN

    Institute of Scientific and Technical Information of China (English)

    Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG

    2008-01-01

    In generalized linear models with fixed design, under the assumption λ_n → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂_n, which is the root of the quasi-likelihood equation with natural link function ∑_{i=1}^n X_i(y_i − μ(X_i'β)) = 0, is obtained, where λ_n denotes the minimum eigenvalue of ∑_{i=1}^n X_i X_i', X_i are bounded p × q regressors, and y_i are q × 1 responses.
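    For readers scanning the notation: in LaTeX form the estimating equation and growth condition read $\sum_{i=1}^{n}X_i\bigl(y_i-\mu(X_i^{\top}\beta)\bigr)=0$ and $\lambda_{\min}\bigl(\sum_{i=1}^{n}X_iX_i^{\top}\bigr)\to\infty$; the latter is the usual requirement that the design information grows without bound in every direction (my paraphrase of the condition, not a quote from the paper).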

  5. An Analog Implementation of Fixed-Wing Lateral/Directional Dynamics and Guidelines on Aircraft Simulations in the Engineering Laboratory.

    Science.gov (United States)

    Karayanakis, Nicholas M.

    1985-01-01

    Describes a scheme for the mechanization of fixed-wing, lateral/directional dynamics as demonstrated on the EAI 580 analog/hybrid system. A review of the complete six-degrees-of-freedom program is included, along with useful guidelines on aircraft simulation in the engineering laboratory. (Author/JN)

  6. Fixed Ammonium Content and Maximum Capacity of Ammonium Fixation in Major Types of Tillage Soils in Hunan Province, China

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yang-zhu; HUANG Shun-hong; WAN Da-juan; HUANG Yun-xiang; ZHOU Wei-jun; ZOU Ying-bin

    2007-01-01

    In order to understand the status of fixed ammonium in major types of tillage soils of Hunan Province, China, the fixed ammonium content, the maximum capacity of ammonium fixation, and their influencing factors were studied through field sampling and laboratory incubation and determination. The main results are summarized as follows: (1) The content of fixed ammonium in the tested soils varies greatly with soil use pattern and the nature of the parent material. For the paddy soils, it ranges from 135.4 ± 57.4 to 412.8 ± 32.4 mg kg-1, with an average of 304.7 ± 96.7 mg kg-1; for the upland soils it ranges from 59.4 to 435.7 mg kg-1, with an average of 230.1 ± 89.2 mg kg-1. The soils developed from limnic material and slate had higher fixed ammonium content than the soils developed from granite. The percentage of fixed ammonium relative to total N is always higher in the upland soils than in the paddy soils: it ranges from 6.1 ± 3.6% to 16.6 ± 4.6%, with an average of 14.0 ± 5.1%, for the paddy soils, and from 5.8 ± 2.0% to 40.1 ± 17.8%, with an average of 23.5 ± 14.2%, for the upland soils. (2) The maximum capacity of ammonium fixation follows the same trend as the fixed ammonium content in the tested soils. For all the tested soils, the percentage of recently fixed ammonium relative to the maximum capacity of ammonium fixation is always below 20%, which may be due to the fact that the soils have high fertility and high saturation of the ammonium-fixing sites. (3) The clay content and clay composition of the tested soils are the two important factors influencing their fixed ammonium content and maximum capacity of ammonium fixation. The results showed that hydrous mica is the main 2:1 type clay mineral in the <0.02 mm clay of the paddy soils, and its content in the 0.02-0.002 mm clay is much higher than that in the <0.002 mm clay. Statistical analysis showed that both the fixed ammonium content and the maximum capacity of ammonium fixation of the paddy soils were positively correlated with

  7. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are upd

  8. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  9. A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation

    Directory of Open Access Journals (Sweden)

    Louis de Grange

    2014-06-01

    Full Text Available In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed-point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models found in the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks.
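    The following sketch illustrates the kind of fixed-point computation the abstract describes: route probabilities feed an overlap penalty, and the penalized utilities feed a logit model until the probabilities stop changing. The overlap matrix, costs, and parameters are invented for illustration and are not the authors' specification.

```python
import numpy as np

# Generic sketch of a fixed-point route-choice computation: route probabilities
# feed back into a correlation (overlap) penalty, and the corrected utilities
# feed back into a logit choice model. All numbers below are hypothetical.

costs = np.array([10.0, 11.0, 11.5])          # route travel costs (assumed)
overlap = np.array([[1.0, 0.6, 0.1],          # assumed pairwise overlap between routes
                    [0.6, 1.0, 0.1],
                    [0.1, 0.1, 1.0]])
theta = 0.8                                   # logit dispersion parameter (assumed)
beta = 2.0                                    # weight of the correlation penalty (assumed)

def logit(utilities):
    e = np.exp(utilities - utilities.max())   # numerically stable softmax
    return e / e.sum()

p = np.full(3, 1.0 / 3.0)                     # start from uniform probabilities
for _ in range(200):
    # Penalize routes that overlap heavily with currently popular routes.
    penalty = beta * (overlap @ p)
    p_new = logit(-theta * costs - penalty)
    if np.max(np.abs(p_new - p)) < 1e-10:     # fixed point reached
        break
    p = p_new

print("fixed-point route probabilities:", np.round(p, 4))
```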

  10. Impact and Mitigation of Multiantenna Analog Front-End Mismatch in Transmit Maximum Ratio Combining

    Science.gov (United States)

    Liu, Jian; Khaled, Nadia; Petré, Frederik; Bourdoux, André; Barel, Alain

    2006-12-01

    Transmit maximum ratio combining (MRC) makes it possible to extend the range of wireless local area networks (WLANs) by exploiting spatial diversity and array gains. These gains, however, depend on the availability of channel state information (CSI). In this perspective, an open-loop approach in time-division-duplex (TDD) systems relies on channel reciprocity between up- and downlink to acquire the CSI. Although the propagation channel can be assumed to be reciprocal, the radio-frequency (RF) transceivers may exhibit amplitude and phase mismatches between the up- and downlink. In this contribution, we present a statistical analysis to assess the impact of these mismatches on the performance of transmit MRC. Furthermore, we propose a novel mixed-signal calibration scheme to mitigate these mismatches, which reduces the implementation loss to as little as a few tenths of a dB. Finally, we also demonstrate the feasibility of the proposed calibration scheme on a real-time wireless MIMO-OFDM prototyping platform.
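    A small Monte Carlo sketch of the effect being analyzed: transmit-MRC weights computed from a mismatched (uplink-derived) channel estimate lose part of the ideal array gain. The per-branch amplitude/phase error model and its standard deviations are assumptions chosen for illustration, not the paper's measured transceiver data.

```python
import numpy as np

# Sketch of why up/downlink RF amplitude/phase mismatch hurts transmit MRC in a
# TDD system. The mismatch model (one complex gain error per transmit branch)
# and all numbers are illustrative only.

rng = np.random.default_rng(0)
n_tx, n_trials = 4, 20000
loss_db = []

for _ in range(n_trials):
    h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)

    # True downlink channel vs. the uplink-based estimate the transmitter uses:
    # each branch sees an amplitude error (std 0.5 dB) and a phase error (std 5 deg).
    amp_err = 10 ** (rng.normal(0.0, 0.5, n_tx) / 20.0)
    phase_err = np.deg2rad(rng.normal(0.0, 5.0, n_tx))
    h_est = h * amp_err * np.exp(1j * phase_err)

    w_ideal = h.conj() / np.linalg.norm(h)          # ideal transmit-MRC weights
    w_mismatched = h_est.conj() / np.linalg.norm(h_est)

    gain_ideal = np.abs(h @ w_ideal) ** 2           # equals ||h||^2
    gain_real = np.abs(h @ w_mismatched) ** 2       # always <= gain_ideal
    loss_db.append(10 * np.log10(gain_ideal / gain_real))

print(f"mean array-gain loss: {np.mean(loss_db):.3f} dB")
```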

  11. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  12. Estimating the richness of a population when the maximum number of classes is fixed: a nonparametric solution to an archaeological problem.

    Directory of Open Access Journals (Sweden)

    Metin I Eren

    Full Text Available BACKGROUND: Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound on the number of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. METHODOLOGY/PRINCIPAL FINDINGS: Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. CONCLUSIONS/SIGNIFICANCE: Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race).
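    One simple way to picture the doubly-bounded idea is to compute a standard richness estimate and then constrain it to the interval [observed classes, known maximum]; the sketch below does that with the classic Chao1 point estimate. The paper constructs proper doubly-bounded confidence intervals, so this clip is only a conceptual stand-in, and the abundance counts are made up.

```python
def chao1_point_estimate(abundances):
    """Classic Chao1 richness estimate from per-class abundance counts."""
    s_obs = sum(1 for n in abundances if n > 0)
    f1 = sum(1 for n in abundances if n == 1)   # classes seen exactly once
    f2 = sum(1 for n in abundances if n == 2)   # classes seen exactly twice
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0          # usual fallback when f2 == 0

def doubly_bounded(estimate, s_obs, max_classes):
    """Constrain a richness estimate to [observed classes, known maximum].

    Only a conceptual stand-in for the paper's doubly-bounded confidence
    intervals, which are derived statistically rather than by clipping.
    """
    return min(max(estimate, s_obs), max_classes)

counts = [5, 3, 1, 1, 2, 1, 8]                  # hypothetical tool-class abundances
s_obs = sum(c > 0 for c in counts)
est = chao1_point_estimate(counts)
print(f"Chao1 = {est:.1f}, doubly bounded (max 9 classes) = {doubly_bounded(est, s_obs, 9)}")
```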

  13. Efficacy and safety of travoprost 0.004%/timolol 0.5% fixed combination as transition therapy in patients previously on prostaglandin analog monotherapy

    Directory of Open Access Journals (Sweden)

    Costa VP

    2012-05-01

    Full Text Available Vital Paulino Costa,1 Hamilton Moreira,2 Mauricio Della Paolera,3 Maria Rosa Bet de Moraes Silva4; 1Universidade Estadual de Campinas – UNICAMP, São Paulo; 2Universidade Federal do Paraná, Curitiba; 3Santa Casa de Misericórdia de São Paulo, São Paulo; 4Faculdade de Medicina de Botucatu, UNESP, Brazil. Purpose: To assess the safety and efficacy of transitioning patients whose intraocular pressure (IOP) had been insufficiently controlled on prostaglandin analog (PGA) monotherapy to treatment with travoprost 0.004%/timolol 0.5% fixed combination with benzalkonium chloride (TTFC). Methods: This prospective, multicenter, open-label, historical controlled, single-arm study transitioned patients who had primary open-angle glaucoma, pigment dispersion glaucoma, or ocular hypertension and who required further IOP reduction from PGA monotherapy to once-daily treatment with TTFC for 12 weeks. IOP and safety (adverse events, corrected distance visual acuity, and slit-lamp biomicroscopy) were assessed at baseline, week 4, and week 12. A solicited ocular symptom survey was administered at baseline and at week 12. Patients and investigators reported their medication preference at week 12. Results: Of 65 patients enrolled, 43 had received prior travoprost therapy and 22 had received prior nontravoprost therapy (n = 18, bimatoprost; n = 4, latanoprost). In the total population, mean IOP was significantly reduced from baseline (P = 0.000009), showing a 16.8% reduction after 12 weeks of TTFC therapy. In the study subgroups, mean IOP was significantly reduced from baseline to week 12 (P = 0.0001) in the prior travoprost cohort (19.0% reduction) and in the prior nontravoprost cohort (13.1% reduction). Seven mild, ocular, treatment-related adverse events were reported. Of the ten ocular symptom questions, eight had numerically lower percentages with TTFC compared with prior PGA monotherapy and two had numerically higher percentages with TTFC (dry eye symptoms and ocular

  14. Contributions of the secondary jet to the maximum tangential velocity and to the collection efficiency of the fixed guide vane type axial flow cyclone dust collector

    Science.gov (United States)

    Ogawa, Akira; Anzou, Hideki; Yamamoto, So; Shimagaki, Mituru

    2015-11-01

    In order to control the maximum tangential velocity Vθm (m/s) of the turbulent rotational air flow and the collection efficiency ηc (%), using fly ash with a mean diameter XR50 = 5.57 µm, two secondary jet nozzles were installed on the body of an axial flow cyclone dust collector with body diameter D1 = 99 mm. To estimate Vθm (m/s), the conservation theory of the angular momentum flux with the Ogawa combined vortex model was applied. The estimated values of Vθm (m/s) showed good agreement with measurements made with a cylindrical Pitot tube. The collection efficiencies ηcth (%) estimated from the cut-size Xc (µm), which was calculated using the estimated Vθm (m/s) and the particle size distribution R(Xp), were slightly higher than the experimental results due to re-entrainment of the collected dust. The best way to adjust ηc (%) through the contribution of the secondary jet flow is, in principle, to apply the centrifugal effect Φc (1). These results are described in detail.

  15. Lunar Analog

    Science.gov (United States)

    Cromwell, Ronita L.

    2009-01-01

    In this viewgraph presentation, a ground-based lunar analog is developed for the return of manned space flight to the Moon. The contents include: 1) Digital Astronaut; 2) Bed Design; 3) Lunar Analog Feasibility Study; 4) Preliminary Data; 5) Pre-pilot Study; 6) Selection of Stockings; 7) Lunar Analog Pilot Study; 8) Bed Design for Lunar Analog Pilot.

  16. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  17. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  18. Analog earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, R.B. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  19. Fix 40!

    Index Scriptorium Estoniae

    2008-01-01

    The band Fix celebrates its 40th anniversary on 13 December at Saku Suurhall in Tallinn. The special guest of the concert is the band Apelsin, joined by Jassi Zahharov and the HaleBopp Singers. The evening is hosted by Tarmo Leinatamm.

  20. Maximum Fidelity

    CERN Document Server

    Kinkhabwala, Ali

    2013-01-01

    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  1. Learning by Analogy: Discriminating between Potential Analogs

    Science.gov (United States)

    Richland, Lindsey E.; McDonough, Ian M.

    2010-01-01

    The ability to successfully discriminate between multiple potentially relevant source analogs when solving new problems is crucial to proficiency in a mathematics domain. Experimental findings in two different mathematical contexts demonstrate that providing cues to support comparative reasoning during an initial instructional analogy, relative to…

  2. Intuitive analog circuit design

    CERN Document Server

    Thompson, Marc

    2013-01-01

    Intuitive Analog Circuit Design outlines ways of thinking about analog circuits and systems that let you develop a feel for what a good, working analog circuit design should be. This book reflects author Marc Thompson's 30 years of experience designing analog and power electronics circuits and teaching graduate-level analog circuit design, and is the ideal reference for anyone who needs a straightforward introduction to the subject. In this book, Dr. Thompson describes intuitive and ""back-of-the-envelope"" techniques for designing and analyzing analog circuits, including transistor amplifi

  3. The price of fixed income market volatility

    CERN Document Server

    Mele, Antonio

    2015-01-01

    Fixed income volatility and equity volatility evolve heterogeneously over time, co-moving disproportionately during periods of global imbalances and each reacting to events of different nature. While the methodology for options-based "model-free" pricing of equity volatility has been known for some time, little is known about analogous methodologies for pricing various fixed income volatilities. This book fills this gap and provides a unified evaluation framework of fixed income volatility while dealing with disparate markets such as interest-rate swaps, government bonds, time-deposits and credit. It develops model-free, forward looking indexes of fixed-income volatility that match different quoting conventions across various markets, and uncovers subtle yet important pitfalls arising from naïve superimpositions of the standard equity volatility methodology when pricing various fixed income volatilities. The ultimate goal of the authors' efforts is to make interest rate volatility standardization a valuable...

  4. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  5. Structured Analog CMOS Design

    CERN Document Server

    Stefanovic, Danica

    2008-01-01

    Structured Analog CMOS Design describes a structured analog design approach that makes it possible to simplify complex analog design problems and develop a design strategy that can be used for the design of large number of analog cells. It intentionally avoids treating the analog design as a mathematical problem, developing a design procedure based on the understanding of device physics and approximations that give insight into parameter interdependences. The proposed transistor-level design procedure is based on the EKV modeling approach and relies on the device inversion level as a fundament

  6. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  7. Evidence of the Big Fix

    CERN Document Server

    Hamada, Yuta; Kawana, Kiyoharu

    2014-01-01

    We give evidence of the Big Fix. The theory of wormholes and multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how it can be confirmed by the experimental data, and we show that it is indeed true for the Higgs vacuum expectation value $v_{h}$. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self coupling are fixed when we vary $v_{h}$. It turns out that the existence of the atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; however, it is required by the fundamental law in our case.

  8. Evidence of the big fix

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2014-06-01

    We give evidence of the Big Fix. The theory of wormholes and multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how it can be confirmed by the experimental data, and we show that it is indeed true for the Higgs vacuum expectation value vh. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self-coupling are fixed when we vary vh. It turns out that the existence of the atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; however, it is required by the fundamental law in our case.

  9. FIXED POINTS THEOREMS IN MULTI-METRIC SPACES

    Directory of Open Access Journals (Sweden)

    Laurentiu I. Calmutchi

    2011-07-01

    Full Text Available The aim of the present article is to give some general methods in fixed point theory for mappings of general topological spaces. Using the notions of the multi-metric space and of the E-metric space, we prove analogs of several classical theorems: the Banach fixed point principle and theorems of Edelstein, Meyers, Janos, etc.

  10. Analogy in CLAM

    OpenAIRE

    Melis, Erica

    1999-01-01

    CLAM is a proof planner, developed by the Dream group in Edinburgh, that mainly operates for inductive proofs. This paper addresses the question how an analogy model that I developed independently of CLAM can be applied to CLAM, and it presents analogy-driven proof plan construction as a control strategy of CLAM. This strategy is realized as a derivational analogy that includes the reformulation of proof plans. The analogical replay checks whether the reformulated justifications of the sour...

  11. Analog circuit design

    CERN Document Server

    Dobkin, Bob

    2012-01-01

    Analog circuit and system design today is more essential than ever before. With the growth of digital systems, wireless communications, complex industrial and automotive systems, designers are being challenged to develop sophisticated analog solutions. This comprehensive source book of circuit design solutions aids engineers with elegant and practical design techniques that focus on common analog challenges. The book's in-depth application examples provide insight into circuit design and application solutions that you can apply in today's demanding designs.

  12. Analogies of Information Security

    OpenAIRE

    Sole, Amund Bauck

    2016-01-01

    In this thesis it will be tested whether analogies and metaphors would make it easier to teach the fundamental subjects of information security and hacking to people with no previous background in computer science and only basic computer skills. This will be done by conducting interviews with people with no background in computer science to see which analogies work best for different topics in information security. From the analogy getting the best response, a small game will be designed with ...

  13. Challenges in Using Analogies

    Science.gov (United States)

    Lin, Shih-Yin; Singh, Chandralekha

    2011-01-01

    Learning physics requires understanding the applicability of fundamental principles in a variety of contexts that share deep features. One way to help students learn physics is via analogical reasoning. Students can be taught to make an analogy between situations that are more familiar or easier to understand and another situation where the same…

  14. Hydraulic Capacitor Analogy

    Science.gov (United States)

    Baser, Mustafa

    2007-01-01

    Students have difficulties in physics because of the abstract nature of concepts and principles. One of the effective methods for overcoming students' difficulties is the use of analogies to visualize abstract concepts to promote conceptual understanding. According to Iding, analogies are consistent with the tenets of constructivist learning…

  15. Maximum Autocorrelation Factorial Kriging

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.

    2000-01-01

    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...

  16. Troubleshooting analog circuits

    CERN Document Server

    Pease, Robert A

    1991-01-01

    Troubleshooting Analog Circuits is a guidebook for solving product or process related problems in analog circuits. The book also provides advice in selecting equipment, preventing problems, and general tips. The coverage of the book includes the philosophy of troubleshooting; the modes of failure of various components; and preventive measures. The text also deals with the active components of analog circuits, including diodes and rectifiers, optically coupled devices, solar cells, and batteries. The book will be of great use to both students and practitioners of electronics engineering. Other

  17. Can mushrooms fix atmospheric nitrogen?

    Indian Academy of Sciences (India)

    H S Jayasinghearachchi; Gamini Seneviratne

    2004-09-01

    It is generally reported that fungi like Pleurotus spp. can fix nitrogen (N2). The way they do it is still not clear. The present study hypothesized that only associations of fungi and diazotrophs can fix N2. This was tested in vitro. Pleurotus ostreatus was inoculated with a bradyrhizobial strain nodulating soybean and P. ostreatus with no inoculation was maintained as a control. At maximum mycelial colonization by the bradyrhizobial strain and biofilm formation, the cultures were subjected to acetylene reduction assay (ARA). Another set of the cultures was evaluated for growth and nitrogen accumulation. Nitrogenase activity was present in the biofilm, but not when the fungus or the bradyrhizobial strain was alone. A significant reduction in mycelial dry weight and a significant increase in nitrogen concentration were observed in the inoculated cultures compared to the controls. The mycelial weight reduction could be attributed to C transfer from the fungus to the bradyrhizobial strain, because of high C cost of biological N2 fixation. This needs further investigations using 14C isotopic tracers. It is clear from the present study that mushrooms alone cannot fix atmospheric N2. But when they are in association with diazotrophs, nitrogenase activity is detected because of the diazotrophic N2 fixation. It is not the fungus that fixes N2 as reported earlier. Effective N2 fixing systems, such as the present one, may be used to increase protein content of mushrooms. Our study has implications for future identification of as yet unidentified N2 systems occurring in the environment.

  18. TV Analog Station Transmitters

    Data.gov (United States)

    Department of Homeland Security — This file is an extract from the Consolidated Database System (CDBS) licensed by the Media Bureau. It consists of Analog Television Stations (see Rule Part 47 CFR...

  19. Analog multivariate counting analyzers

    CERN Document Server

    Nikitin, A V; Armstrong, T P

    2003-01-01

    Characterizing rates of occurrence of various features of a signal is of great importance in numerous types of physical measurements. Such signal features can be defined as certain discrete coincidence events, e.g. crossings of a signal with a given threshold, or occurrence of extrema of a certain amplitude. We describe measuring rates of such events by means of analog multivariate counting analyzers. Given a continuous scalar or multicomponent (vector) input signal, an analog counting analyzer outputs a continuous signal with the instantaneous magnitude equal to the rate of occurrence of certain coincidence events. The analog nature of the proposed analyzers allows us to reformulate many problems of the traditional counting measurements, and cast them in a form which is readily addressed by methods of differential calculus rather than by algebraic or logical means of digital signal processing. Analog counting analyzers can be easily implemented in discrete or integrated electronic circuits, do not suffer fro...

  20. Challenges in Analogical Reasoning

    CERN Document Server

    Lin, Shih-Yin

    2016-01-01

    Learning physics requires understanding the applicability of fundamental principles in a variety of contexts that share deep features. One way to help students learn physics is via analogical reasoning. Students can be taught to make an analogy between situations that are more familiar or easier to understand and another situation where the same physics principle is involved but that is more difficult to handle. Here, we examine introductory physics students' ability to use analogies in solving problems involving Newton's second law. Students enrolled in an algebra-based introductory physics course were given a solved problem involving tension in a rope and were then asked to solve another problem for which the physics is very similar but involved a frictional force. They were asked to point out the similarities between the two problems and then use the analogy to solve the friction problem.

  1. HAPS, a Handy Analog Programming System

    DEFF Research Database (Denmark)

    Højberg, Kristian Søe

    1975-01-01

    HAPS (Hybrid Analog Programming System) is an analog compiler that can be run on a minicomputer in an interactive mode. Essentially HAPS is written in FORTRAN. The equations to be programmed for an analog computer are read in by using a FORTRAN-like notation. The input must contain maximum and minimum values for the variables. The output file includes potentiometer coefficients and static-test 'measuring values.' The file format is fitted to an automatic potentiometer-setting and static-test program. Patch instructions are printed by HAPS. The article describes the principles of HAPS...
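    The core task such an analog compiler automates is amplitude scaling: dividing each problem variable by its stated maximum and folding the resulting factors into potentiometer settings, splitting off integer amplifier gain when a coefficient exceeds 1. The sketch below shows that bookkeeping for a single term of dx/dt = a*y; it is a generic illustration of the technique, not HAPS's actual algorithm or file format, and the numbers are hypothetical.

```python
# Amplitude scaling for one term of dx/dt = a*y on an analog computer.
# With estimated maxima |x| <= X_MAX and |y| <= Y_MAX, the machine variables are
# x' = x/X_MAX and y' = y/Y_MAX, giving dx'/dt = a*(Y_MAX/X_MAX)*y'. The
# potentiometer is set to the scaled coefficient, with extra integrator/amplifier
# gain split off if the coefficient exceeds 1 (potentiometers only attenuate).

def scaled_coefficient(a, x_max, y_max):
    """Return (potentiometer setting, extra amplifier gain) for dx/dt = a*y."""
    c = a * y_max / x_max
    gain = 1
    while abs(c) > 1.0:
        c /= 10.0
        gain *= 10
    return c, gain

pot, gain = scaled_coefficient(a=4.0, x_max=20.0, y_max=15.0)
print(f"potentiometer = {pot:.3f}, amplifier gain = x{gain}")
```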

  2. Synthesis of Paclitaxel Analogs

    OpenAIRE

    Xu, Zhibing

    2010-01-01

    Paclitaxel is one of the most successful anti-cancer drugs, particularly in the treatment of breast cancer and ovarian cancer. For the investigation of the interaction between paclitaxel and MD-2 protein, and the development of new antagonists for lipopolysaccharide, several C10 A-nor-paclitaxel analogs have been synthesized and their biological activities have been evaluated. In order to reduce the myelosuppression effect of paclitaxel, several C3′ and C4 paclitaxel analogs have been synth...

  3. FGF growth factor analogs

    Science.gov (United States)

    Zamora, Paul O [Gaithersburg, MD; Pena, Louis A [Poquott, NY; Lin, Xinhua [Plainview, NY; Takahashi, Kazuyuki [Germantown, MD

    2012-07-24

    The present invention provides a fibroblast growth factor heparin-binding analog of the formula: ##STR00001## where R.sub.1, R.sub.2, R.sub.3, R.sub.4, R.sub.5, X, Y and Z are as defined, pharmaceutical compositions, coating compositions and medical devices including the fibroblast growth factor heparin-binding analog of the foregoing formula, and methods and uses thereof.

  4. Analog circuits cookbook

    CERN Document Server

    Hickman, Ian

    2013-01-01

    Analog Circuits Cookbook presents articles about advanced circuit techniques, components and concepts, useful IC for analog signal processing in the audio range, direct digital synthesis, and ingenious video op-amp. The book also includes articles about amplitude measurements on RF signals, linear optical imager, power supplies and devices, and RF circuits and techniques. Professionals and students of electrical engineering will find the book informative and useful.

  5. Electrical Circuits and Water Analogies

    Science.gov (United States)

    Smith, Frederick A.; Wilson, Jerry D.

    1974-01-01

    Briefly describes water analogies for electrical circuits and presents plans for the construction of apparatus to demonstrate these analogies. Demonstrations include series circuits, parallel circuits, and capacitors. (GS)

  6. Maximum Throughput in Multiple-Antenna Systems

    CERN Document Server

    Zamani, Mahdi

    2012-01-01

    The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), whereas the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
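    The equal-power, uncorrelated-signal strategy identified above can be illustrated with a short Monte Carlo over Rayleigh fading: with no CSI at the transmitter, spreading the power over all antennas gives a per-block rate of log2(1 + (SNR/Nt)·||h||^2). The snippet compares its average against single-antenna transmission; it does not reproduce the paper's throughput/expected-rate definitions, and all parameters are assumed.

```python
import numpy as np

# Monte Carlo sketch: Rayleigh block-fading MISO channel, no CSI at the
# transmitter, uncorrelated equal-power signals from all antennas, compared with
# transmitting from a single antenna. Illustration only; the paper's throughput
# metric is defined more carefully than this average rate.

rng = np.random.default_rng(1)
n_tx, snr, n_blocks = 4, 10.0, 100000      # snr = total transmit power / noise (linear)

h = (rng.standard_normal((n_blocks, n_tx)) +
     1j * rng.standard_normal((n_blocks, n_tx))) / np.sqrt(2)

rate_equal_power = np.log2(1 + (snr / n_tx) * np.sum(np.abs(h) ** 2, axis=1))
rate_single_ant = np.log2(1 + snr * np.abs(h[:, 0]) ** 2)

print(f"avg rate, equal power over {n_tx} antennas: {rate_equal_power.mean():.3f} b/s/Hz")
print(f"avg rate, single antenna:                   {rate_single_ant.mean():.3f} b/s/Hz")
```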

  7. Smeared Gauge Fixing

    CERN Document Server

    Hetrick, J E; Forcrand, Ph. de

    1998-01-01

    We present a new method of gauge fixing to standard lattice Landau gauge, Max Re Tr $\sum_{\mu,x}U_{\mu,x}$, in which the link configuration is recursively smeared; these smeared links are then gauge fixed by standard extremization. The resulting gauge transformation is simultaneously applied to the original links. Following this preconditioning, the links are gauge fixed again as usual. This method is free of Gribov copies, and we find that for physical parameters ($\beta \geq 2$ in SU(2)), it generally results in the gauge fixed configuration with the globally maximal trace. This method is a general technique for finding a unique minimum to global optimization problems.

  8. The production of body analogs for use in radiation physics.

    Science.gov (United States)

    Metcalfe, P E; Hoban, P W; Harper, N R; Murray, D C; Round, W H

    1990-09-01

    Bone, muscle and lung analog materials have been produced in-house, and dosimetry phantoms have been produced. A method using computed tomography (CT) has been developed to check that the analogs produced match the radiation properties of body tissues. The relative electron densities and ratio of electron cross sections are calculated from elemental compositions of the analogs. Using these data the theoretical CT numbers are calculated and these numbers are compared with experimental CT numbers for the analogs produced. The experimental CT numbers are found by scanning the samples on a Siemens DRH CT scanner. Results show the maximum difference between theoretical and experimental CT numbers for the analogs is 18 Hounsfield units, which relates to a delta NCT of less than 1%. Comparison of analog CT numbers with CT numbers for the related patient tissues also shows a close match.
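    The comparison described above ultimately rests on the standard CT-number definition, HU = 1000·(μ − μ_water)/μ_water at the scanner's effective energy. The sketch below shows only that conversion step; the attenuation coefficients are invented, and the paper's theoretical CT numbers are derived from elemental compositions rather than from assumed μ values.

```python
# Hounsfield-unit conversion used when comparing theoretical and measured CT
# numbers of tissue analogs. The attenuation values below are made up purely to
# show the arithmetic; they are not the paper's data.

MU_WATER = 0.1928   # assumed linear attenuation coefficient of water (1/cm)

def ct_number(mu_tissue):
    return 1000.0 * (mu_tissue - MU_WATER) / MU_WATER

analogs = {"lung analog": 0.048, "muscle analog": 0.196, "bone analog": 0.428}
for name, mu in analogs.items():
    print(f"{name:14s} -> {ct_number(mu):8.1f} HU")
```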

  9. Measures of Noncircularity and Fixed Points of Contractive Multifunctions

    Directory of Open Access Journals (Sweden)

    Marrero Isabel

    2010-01-01

    Full Text Available In analogy to the Eisenfeld-Lakshmikantham measure of nonconvexity and the Hausdorff measure of noncompactness, we introduce two mutually equivalent measures of noncircularity for Banach spaces satisfying a Cantor type property, and apply them to establish a fixed point theorem of Darbo type for multifunctions. Namely, we prove that every multifunction with closed values, defined on a closed set and contractive with respect to any one of these measures, has the origin as a fixed point.
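    For reference (not stated in the record itself), the classical single-valued Darbo theorem that such results extend reads: if $C$ is a nonempty, bounded, closed, convex subset of a Banach space, $T:C\to C$ is continuous, and $\alpha(T(A))\le k\,\alpha(A)$ for every $A\subseteq C$ and some constant $k<1$, where $\alpha$ is the Kuratowski measure of noncompactness, then $T$ has a fixed point. The paper's result replaces the measure of noncompactness by a measure of noncircularity and allows multifunctions with closed values.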

  10. Fixed-phase vs fixed-node quantum Monte Carlo with local and nonlocal interactions

    Science.gov (United States)

    Mitas, Lubos; Melton, Cody

    We study several systems that can be formulated in the fixed-phase and/or fixed-node framework in quantum Monte Carlo calculations. In particular, we try to understand the differences between the biases caused by these approximations that result from using complex vs real trial wave functions. One system is a model that enables us to construct systematically the same type of nodal errors in both real and complex formalism. The errors are comparably similar whenever trial functions are correspondingly accurate. Another aspect of the fixed-phase vs fixed-node approximations is studied for systems with nonlocal operators such as with pseudopotentials and/or spin-orbit effects. We specify how to obtain variational formulation for complex wave functions and nonlocal operators in a manner analogous to the fixed-node calculations with T-moves algorithm. In particular, we show that the fixed-phase/fixed-node is the primary condition for proving that the upper bound property holds.

  11. Digital and analog communication systems

    Science.gov (United States)

    Shanmugam, K. S.

    1979-01-01

    The book presents an introductory treatment of digital and analog communication systems with emphasis on digital systems. Attention is given to the following topics: systems and signal analysis, random signal theory, information and channel capacity, baseband data transmission, analog signal transmission, noise in analog communication systems, digital carrier modulation schemes, error control coding, and the digital transmission of analog signals.

  12. Analogical Reasoning in Geometry Education

    Science.gov (United States)

    Magdas, Ioana

    2015-01-01

    The analogical reasoning isn't used only in mathematics but also in everyday life. In this article we approach the analogical reasoning in Geometry Education. The novelty of this article is a classification of geometrical analogies by reasoning type and their exemplification. Our classification includes: analogies for understanding and setting a…

  13. Maximum Autocorrelation Factorial Kriging

    OpenAIRE

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete

    2000-01-01

    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...

  14. Maximum Estrada Index of Bicyclic Graphs

    CERN Document Server

    Wang, Long; Wang, Yi

    2012-01-01

    Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
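    The definition in the abstract is easy to compute directly from the adjacency spectrum; the snippet below does so for a small bicyclic example (a 4-cycle plus one chord). The graph is chosen arbitrarily and is not claimed to be the extremal graph of the paper.

```python
import numpy as np

# Estrada index EE(G) = sum_i exp(lambda_i), computed from the adjacency
# spectrum of a small bicyclic graph: cycle 0-1-2-3-0 plus the chord 1-3.

def estrada_index(adjacency: np.ndarray) -> float:
    eigenvalues = np.linalg.eigvalsh(adjacency)   # adjacency matrix is symmetric
    return float(np.sum(np.exp(eigenvalues)))

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
print(f"EE(G) = {estrada_index(A):.4f}")
```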

  15. Analogy, explanation, and proof.

    Science.gov (United States)

    Hummel, John E; Licato, John; Bringsjord, Selmer

    2014-01-01

    People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge to the task of marshaling our understanding of analogical reasoning to understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.

  16. Quantum Analog Computing

    Science.gov (United States)

    Zak, M.

    1998-01-01

    Quantum analog computing is based upon the similarity between the mathematical formalism of quantum mechanics and the phenomena to be computed. It exploits a dynamical convergence of several competing phenomena to an attractor which can represent an extremum of a function, an image, a solution to a system of ODEs, or a stochastic process.

  17. Are Scientific Analogies Metaphors?

    Science.gov (United States)

    1981-02-01

    ...psychospiritual processes. A more modern example of unclarified analogy is Freud's (1973; reprinted from 1955) discussion of anal eroticism, in which... [Reference cited in the fragment: Freud, S., On transformations of instinct as exemplified in anal eroticism. In J. Strachey (Ed.), The standard edition of the complete...]

  18. High-resolution distributed sampling of bandlimited fields with fixed-precision sensors

    CERN Document Server

    Kumar, Animesh; Ramchandran, Kannan

    2007-01-01

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained, processing environment is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors, operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of having a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval, is demonstrated. This exposes an underlying "conservation of bits" principle: the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor-precision) and space (sensor density) in an ...
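    The "conservation of bits" statement can be illustrated with trivial bookkeeping: for a fixed bit budget per Nyquist interval, ADC precision and sensor density can be traded against each other as long as their product stays within the budget. The table printed below shows only that accounting; the budget value is assumed, and the accuracy scaling laws are the paper's contribution, not reproduced here.

```python
# Toy illustration of the bit-budget tradeoff between ADC precision and sensor
# density. Only the accounting is shown; reconstruction accuracy is not modeled.

BIT_BUDGET = 24   # assumed bits available per Nyquist interval per snapshot

print(f"{'ADC bits/sensor':>16s} {'sensors/Nyquist-interval':>26s}")
for adc_bits in (1, 2, 4, 8, 12, 24):
    density = BIT_BUDGET // adc_bits
    print(f"{adc_bits:16d} {density:26d}")
```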

  19. Optically fixed photorefractive correlator

    Institute of Scientific and Technical Information of China (English)

    刘友文; 刘立人; 周常河; 徐良瑛

    2002-01-01

    An optically fixed photorefractive correlator is presented, where two-centre non-volatile holographic recording is employed to write and fix the matched filter in doubly doped LiNbO3 crystals. This correlator shows good correlation characteristics and insensitivity to the writing beam during readout. It can be used in cases requiring stability and not requiring modification for a long period, and it is refreshed optically when new information needs to be registered.

  20. Fixed mobile convergence handbook

    CERN Document Server

    Ahson, Syed A

    2010-01-01

    From basic concepts to future directions, this handbook provides technical information on all aspects of fixed-mobile convergence (FMC). The book examines such topics as integrated management architecture, business trends and strategic implications for service providers, personal area networks, mobile controlled handover methods, SIP-based session mobility, and supervisory and notification aggregator service. Case studies are used to illustrate technical and systematic implementation of unified and rationalized internet access by fixed-mobile network convergence. The text examines the technolo

  1. Terrestrial Spaceflight Analogs: Antarctica

    Science.gov (United States)

    Crucian, Brian

    2013-01-01

    Alterations in immune cell distribution and function, circadian misalignment, stress and latent viral reactivation appear to persist during Antarctic winterover at Concordia Station. Some of these changes are similar to those observed in Astronauts, either during or immediately following spaceflight. Others are unique to the Concordia analog. Based on some initial immune data and environmental conditions, Concordia winterover may be an appropriate analog for some flight-associated immune system changes and mission stress effects. An ongoing smaller control study at Neumayer III will address the influence of the hypoxic variable. Changes were observed in the peripheral blood leukocyte distribution consistent with immune mobilization, and similar to those observed during spaceflight. Alterations in cytokine production profiles were observed during winterover that are distinct from those observed during spaceflight, but potentially consistent with those observed during persistent hypobaric hypoxia. The reactivation of latent herpesviruses was observed during overwinter/isolation, that is consistently associated with dysregulation in immune function.

  2. Analogy, Explanation, and Proof

    Directory of Open Access Journals (Sweden)

    John eHummel

    2014-11-01

    Full Text Available People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the (inductive) inference that the milk is sour have in common with the (deductive) proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This small-seeming difference poses a challenge to the task of marshaling our understanding of analogical reasoning in the service of understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.

  3. Weak Scale From the Maximum Entropy Principle

    CERN Document Server

    Hamada, Yuta; Kawana, Kiyoharu

    2015-01-01

    The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim\frac{T_{BBN}^{2}}{M_{pl}y_{e}^{5}}$, ...

  4. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
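    As a purely order-of-magnitude sanity check of the closing formula, and using commonly quoted values that are assumptions here rather than the paper's inputs (T_BBN ≈ 1 MeV, a reduced Planck mass of about 2.4 × 10^18 GeV, y_e ≈ 2.9 × 10^-6), the combination lands within roughly an order of magnitude of the electroweak scale:

```python
# Order-of-magnitude plug-in of v_h ~ T_BBN^2/(M_pl*y_e^5). All input values are
# assumed for illustration; this is not the paper's derivation or conventions.

T_BBN = 1e-3        # GeV (~1 MeV)
M_PL = 2.4e18       # GeV (reduced Planck mass)
Y_E = 2.9e-6        # electron Yukawa coupling

v_h = T_BBN**2 / (M_PL * Y_E**5)
print(f"v_h ~ {v_h:.0f} GeV")   # ~2000 GeV with these inputs, i.e. within an
                                # order of magnitude of the observed ~246 GeV
```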

  5. A Transiting Jupiter Analog

    CERN Document Server

    Kipping, David M; Henze, Chris; Teachey, Alex; Isaacson, Howard T; Petigura, Erik A; Marcy, Geoffrey W; Buchhave, Lars A; Chen, Jingjing; Bryson, Steve T; Sandford, Emily

    2016-01-01

    Decadal-long radial velocity surveys have recently started to discover analogs to the most influential planet of our solar system, Jupiter. Detecting and characterizing these worlds is expected to shape our understanding of our uniqueness in the cosmos. Despite the great successes of recent transit surveys, Jupiter analogs represent a terra incognita, owing to the strong intrinsic bias of this method against long orbital periods. We here report on the first validated transiting Jupiter analog, Kepler-167e (KOI-490.02), discovered using Kepler archival photometry orbiting the K4-dwarf KIC-3239945. With a radius of $(0.91\pm0.02)$ $R_{\mathrm{Jup}}$, a low orbital eccentricity ($0.06_{-0.04}^{+0.10}$) and an equilibrium temperature of $(131\pm3)$ K, Kepler-167e bears many of the basic hallmarks of Jupiter. Kepler-167e is accompanied by three Super-Earths on compact orbits, which we also validate, leaving a large cavity of transiting worlds around the habitable zone. With two transits and continuous photometric ...

  6. Inductive, Analogical, and Communicative Generalization

    Directory of Open Access Journals (Sweden)

    Adri Smaling

    2003-03-01

    Three forms of inductive generalization - statistical generalization, variation-based generalization and theory-carried generalization - are insufficient for case-to-case generalization, which is a form of analogical generalization. The quality of case-to-case generalization needs to be reinforced by setting up explicit analogical argumentation. Six criteria for evaluating analogical argumentation are discussed. Good analogical reasoning is an indispensable support to forms of communicative generalization - receptive and responsive (participative) generalization - as well as exemplary generalization.

  7. Maximum information photoelectron metrology

    CERN Document Server

    Hockett, P; Wollenhaupt, M; Baumert, T

    2015-01-01

    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  8. TOWARDS A MATHEMATICAL THEORY OF ANALOGY

    OpenAIRE

    Haraguchi, Makoto

    1985-01-01

    This paper presents a mathematical theory of analogy, which should serve as a basis for developing analogical reasoning by computer. An analogy is a partial identity between two sets of facts. In order to compare several analogies, we introduce an ordering of analogies, and we define two types of optimal analogies: maximal analogies and greatest ones. We show a condition under which the greatest analogy exists, and also present a top-down procedure to find the maximal analogies.

  9. Fixing Dataset Search

    Science.gov (United States)

    Lynnes, Chris

    2014-01-01

    Three current search engines are queried for ozone data at the GES DISC. The results range from sub-optimal to counter-intuitive. We propose a method to fix dataset search by implementing a robust relevancy ranking scheme. The relevancy ranking scheme is based on several heuristics culled from more than 20 years of helping users select datasets.

  10. ESD analog circuits and design

    CERN Document Server

    Voldman, Steven H

    2014-01-01

    A comprehensive and in-depth review of analog circuit layout, schematic architecture, device, power network and ESD design This book will provide a balanced overview of analog circuit design layout, analog circuit schematic development, architecture of chips, and ESD design.  It will start at an introductory level and will bring the reader right up to the state-of-the-art. Two critical design aspects for analog and power integrated circuits are combined. The first design aspect covers analog circuit design techniques to achieve the desired circuit performance. The second and main aspect pres

  11. Common Fixed Points of Weakly Contractive and Strongly Expansive Mappings in Topological Spaces

    Directory of Open Access Journals (Sweden)

    Hussain N

    2010-01-01

    Using the notion of weakly -contractive mappings, we prove several new common fixed point theorems for commuting as well as noncommuting mappings on a topological space X. By analogy, we obtain a common fixed point theorem for mappings which are strongly -expansive on X.

  12. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
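    A minimal sketch of the retrieval principle described above, under assumptions not taken from the paper (uniform binary patterns, erasure-style partial queries, brute-force search over the stored patterns): the maximum-likelihood answer to a partial query is the stored pattern that disagrees least with the observed bits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store M random binary patterns of length n (uniform binary source).
M, n = 64, 32
memory = rng.integers(0, 2, size=(M, n))

def ml_retrieve(query, known):
    """Maximum-likelihood retrieval from a partial query.

    query : length-n binary vector (values at unobserved positions are ignored)
    known : boolean mask of observed positions
    With a uniform source, the ML estimate is the stored pattern with the
    fewest disagreements on the observed positions (any consistent pattern
    if the corruption is pure erasure).
    """
    disagreements = (memory[:, known] != query[known]).sum(axis=1)
    return memory[np.argmin(disagreements)]

# Erase roughly half of the bits of a stored pattern and retrieve it.
target = memory[7]
known = rng.random(n) > 0.5        # observed positions
query = target.copy()
query[~known] = 0                  # erased content is irrelevant

print(np.array_equal(ml_retrieve(query, known), target))  # usually True
```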

  13. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

    A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thereby detect and diagnose a single fault in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show a fault-detection efficiency of 100% and a maximum error of 0.7% in the estimated parameter values. The technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.
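    The abstract's idea of treating fault diagnosis as an inverse problem can be sketched as a least-squares fit of circuit parameters to a measured response. The generic band-pass model, nominal values, and the simulated "measurement" below are illustrative assumptions, not the authors' circuit equations or data.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic second-order band-pass response, parameterized by (f0, Q, K).
# (Illustrative stand-in for the circuit-level model used in the paper.)
def bandpass_mag(params, f):
    f0, Q, K = params
    s = 1j * f / f0
    return np.abs(K * (s / Q) / (s**2 + s / Q + 1))

freqs = np.logspace(2, 5, 200)                      # 100 Hz .. 100 kHz
nominal = np.array([5e3, 2.0, 1.0])                 # design (fault-free) values
faulty = np.array([6.1e3, 2.0, 1.0])                # soft fault: f0 has drifted
measured = bandpass_mag(faulty, freqs)              # pretend measurement

# Inverse problem: estimate the actual parameters from the measured response,
# starting the search from the nominal design values.
fit = least_squares(lambda p: bandpass_mag(p, freqs) - measured,
                    x0=nominal, bounds=(1e-6, np.inf))

rel_err = np.abs(fit.x - faulty) / faulty
print("estimated:", fit.x, " max relative error:", rel_err.max())
```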

  14. Energy-Efficient Large-Scale Antenna Systems with Hybrid Digital-Analog Beamforming Structure

    Institute of Scientific and Technical Information of China (English)

    Shuangfeng Han; ChihLin I; Zhikun Xu; Qi Sun; Haibin Li

    2015-01-01

    A large-scale antenna system (LSAS) with digital beamforming is expected to significantly increase energy efficiency (EE) and spectral efficiency (SE) in a wireless communication system. However, there are many challenging issues related to calibration, energy consumption, and cost in implementing a digital beamforming structure in an LSAS. In a practical LSAS deployment, hybrid digital-analog beamforming structures with active antennas can be used. In this paper, we investigate the optimal antenna configuration in an N × M beamforming structure, where N is the number of transceivers and M is the number of active antennas per transceiver; analog beamforming is introduced within individual transceivers and digital beamforming across all N transceivers. We analyze the green point, which is the point of maximum EE on the EE-SE curve, and show that the log-scale EE scales linearly with SE along a slope of -lg2/N. We investigate the effect of M on EE for a given SE value in the case of fixed NM and in the case of independent N and M. In both cases, there is a unique optimal M that results in optimal EE. In the case of independent N and M, there is no optimal (N, M) combination for optimizing EE. The results of numerical simulations are provided, and these results support our analysis.

  15. Albert Einstein, Analogizer Extraordinaire

    CERN Document Server

    CERN. Geneva

    2007-01-01

    Where does deep insight in physics come from? It is tempting to think that it comes from the purest and most precise of reasoning, following ironclad laws of thought that compel the clear mind completely rigidly. And yet the truth is quite otherwise. One finds, when one looks closely at any major discovery, that the greatest of physicists are, in some sense, the most crazily daring and irrational of all physicists. Albert Einstein exemplifies this thesis in spades. In this talk I will describe the key role, throughout Albert Einstein's fabulously creative life, played by wild guesses made by analogy lacking any basis whatsoever in pure reasoning. In particular, in this year of 2007, the centenary of 1907, I will describe how over the course of two years (1905 through 1907) of pondering, Einstein slowly came, via analogy, to understand the full, radical consequences of the equation that he had first discovered and published in 1905, arguably the most famous equation of all time: E = mc2.

  16. The maximum number of minimal codewords in long codes

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.;

    2013-01-01

    Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981...

  17. Biomedical sensor design using analog compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves a good Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with Percentage Residual Difference (PRD) = 26% and Root Mean Squared Error (RMSE) = 3%.
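    A toy, purely digital illustration of the compressed-sensing pipeline the abstract describes: random linear measurements taken at a fraction of the Nyquist-rate sample count, followed by sparse recovery at the receiver. The dimensions, the synthetic sparse signal, and the greedy orthogonal-matching-pursuit recovery below are assumptions for illustration; the paper's analog front end and RIP-based reconstruction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 256, 77, 8              # signal length, measurements (~30%), sparsity
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)              # k-sparse toy "bio-signal"

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery from y = Phi @ x."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```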

  18. An Extension of Chebyshev’s Maximum Principle to Several Variables

    Institute of Scientific and Technical Information of China (English)

    Meng Zhao-liang; Luo Zhong-xuan

    2013-01-01

    In this article, we generalize Chebyshev’s maximum principle to several variables. Some analogous maximum formulae for the special integration functional are given. A sufficient condition of the existence of Chebyshev’s maximum principle is also obtained.

  19. Au Fixed Point Development at NRC

    Science.gov (United States)

    Dedyulin, S. N.; Gotoh, M.; Todd, A. D. W.

    2017-04-01

    Two Au fixed points filled using metal of different nominal purities in carbon crucibles have been developed at the National Research Council Canada (NRC). The primary motivation behind this project was to provide the means for direct thermocouple calibrations at the Au freezing point (1064.18 °C). Using a Au fixed point filled with metal of the maximum available purity [99.9997 % pure according to glow discharge mass spectroscopy (GDMS)], multiple freezing plateaus were measured in a commercial high-temperature furnace. Four Pt/Pd thermocouples constructed and calibrated in-house were used to measure the freezing plateaus. From the calibration at the Sn, Zn, Al and Ag fixed points, the linear deviation function from the NIST-IMGC reference function (IEC 62460:2008 Standard) was determined and extrapolated to the freezing temperature of Au. For all the Pt/Pd thermocouples used in this study, the measured EMF values agree with the extrapolated values within the expanded uncertainty, thus substantiating the use of the 99.9997 % pure Au fixed-point cell for thermocouple calibrations at NRC. Using the Au fixed point filled with metal of lower purity (99.99 % pure according to GDMS), the effect of impurities on the Au freezing temperature measured with a Pt/Pd thermocouple was further investigated.
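    The calibration procedure described (a linear deviation from a reference function fitted at the Sn, Zn, Al and Ag fixed points, then extrapolated to the Au freezing temperature) can be sketched numerically as below. The reference function and the "measured" EMF values are invented placeholders, not NRC data or the IEC 62460 reference function; only the ITS-90 fixed-point temperatures are real.

```python
import numpy as np

# ITS-90 defining fixed-point temperatures, degrees Celsius.
fixed_points = {"Sn": 231.928, "Zn": 419.527, "Al": 660.323, "Ag": 961.78}
t_Au = 1064.18

# Placeholder reference function E_ref(t) for a Pt/Pd thermocouple and
# placeholder measured EMFs -- both invented for illustration only.
def e_ref(t):                                   # stand-in reference function
    return 5.0e-3 * t + 6.0e-6 * t**2

t = np.array(list(fixed_points.values()))
e_meas = e_ref(t) + (0.8 + 0.002 * t)           # pretend calibration measurements

# Linear deviation function dE(t) = a + b*t fitted at the four fixed points,
# then extrapolated to the Au freezing temperature.
b, a = np.polyfit(t, e_meas - e_ref(t), 1)
e_Au_extrapolated = e_ref(t_Au) + a + b * t_Au
print(f"extrapolated EMF at the Au point: {e_Au_extrapolated:.3f} (arbitrary units)")
```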

  20. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
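    The Mean Energy Model mentioned above has a compact numerical illustration: over a finite alphabet, the entropy-maximizing distribution consistent with a prescribed mean "energy" has the Gibbs/exponential-family form, and the single Lagrange multiplier can be found by root finding. The energy values and the target mean below are arbitrary choices for the sketch.

```python
import numpy as np
from scipy.optimize import brentq

energy = np.array([0.0, 1.0, 2.0, 5.0])   # arbitrary "energy" function values
target_mean = 1.7                         # prescribed moment constraint

# The MaxEnt solution has the Gibbs form p_i proportional to exp(-beta * E_i);
# choose beta so that the mean energy matches the constraint.
def mean_energy(beta):
    w = np.exp(-beta * energy)
    p = w / w.sum()
    return p @ energy

beta = brentq(lambda b: mean_energy(b) - target_mean, -50, 50)
p = np.exp(-beta * energy)
p /= p.sum()

print("beta:", round(beta, 4), "distribution:", np.round(p, 4))
print("entropy (nats):", round(-(p * np.log(p)).sum(), 4))
```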

  1. Fixed Sagittal Plane Imbalance

    OpenAIRE

    Savage, Jason W.; Patel, Alpesh A.

    2014-01-01

    Study Design Literature review. Objective To discuss the evaluation and management of fixed sagittal plane imbalance. Methods A comprehensive literature review was performed on the preoperative evaluation of patients with sagittal plane malalignment, as well as the surgical strategies to address sagittal plane deformity. Results Sagittal plane imbalance is often caused by de novo scoliosis or iatrogenic flat back deformity. Understanding the etiology and magnitude of sagittal malalignment is ...

  2. Fixed textile shutters

    Directory of Open Access Journals (Sweden)

    K.A. Chernova

    2010-06-01

    One of the main socio-economic problems in Russia is the high cost and poor condition of housing. Goals such as cost reduction, shorter installation time and a longer service life of structures are being pursued by creating new building-erection technologies and developing methods of quick construction using different types of fixed formwork. One of them is textstone. Textstone is an artificial construction stone whose outer surface carries a reinforcing fine-mesh shell with multifunctional properties, formed by the interwoven threads of a strong fixed-formwork textile material (basalt, linen, silica and other glass yarns) bonded by a binding material. The innovative technology for producing and installing a new generation of textstone buildings has been registered under the brand TextStone. The fundamental difference between textstone and reinforced concrete and all other known building materials is that the whole outer surface of the solidified light binder is protected by a strong fixed formwork made from inexpensive textile materials. The manufactured textile shell can also serve as an internal finishing material, reducing or eliminating the cost of finishing work. The use of fixed textile construction shutters in building construction offers clear technical, economic, operational, sanitary and environmental benefits: short construction time (from 3 to 10 days), compact packaging and light weight of the fabric shells, high fire and frost resistance, easy installation of engineering services in the hollow communicating shells, and a minimal amount of finishing, roofing, and heat and noise insulation work. Textstone is a durable, solid, monolithic construction that remains serviceable under earthquakes, hurricane winds, solar heat and frost. The material complies with all sanitary and environmental requirements. Due to such physical, mechanical, operational, sanitary and ecological characteristics

  3. The maximum rotation of a galactic disc

    CERN Document Server

    Bottema, R

    1997-01-01

    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; that functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...

  4. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of the training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
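    A generic sketch of the Maximum Correntropy Criterion idea on a linear predictor (not the authors' algorithm or data): a Gaussian kernel scores agreement between predictions and possibly corrupted labels, an L2 term regularizes the weights, and a half-quadratic reweighting loop, one common way to optimize such objectives, downweights outlying samples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data with a few grossly mislabeled (outlying) targets.
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)
y[:15] += 8.0 * rng.standard_normal(15)          # outliers / noisy labels

def mcc_fit(X, y, sigma=1.0, lam=1e-2, iters=30):
    """Regularized MCC linear predictor via half-quadratic reweighting."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w
        a = np.exp(-r**2 / (2 * sigma**2))       # per-sample correntropy weights
        XtA = X.T * a                            # equivalent to X.T @ diag(a)
        w = np.linalg.solve(XtA @ X + lam * np.eye(X.shape[1]), XtA @ y)
    return w

w_mcc = mcc_fit(X, y)
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary least squares baseline
print("MCC parameter error:", np.linalg.norm(w_mcc - w_true))
print("LS  parameter error:", np.linalg.norm(w_ls - w_true))
```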

  5. Vorticity in analog gravity

    Science.gov (United States)

    Cropp, Bethan; Liberati, Stefano; Turcati, Rodrigo

    2016-06-01

    In the analog gravity framework, the acoustic disturbances in a moving fluid can be described by an equation of motion identical to that of a relativistic massless scalar field propagating in curved space-time. This description is possible only when the fluid under consideration is barotropic, inviscid, and irrotational. In this case, the propagation of the perturbations is governed by an acoustic metric that depends algebraically on the local speed of sound, the density, and the background flow velocity, the latter assumed to be vorticity-free. In this work we provide a straightforward extension in order to go beyond the irrotational constraint. Using a charged—relativistic and nonrelativistic—Bose-Einstein condensate as a physical system, we show that in the low-momentum limit and performing the eikonal approximation we can derive a d'Alembertian equation of motion for the charged phonons where the emergent acoustic metric depends on the flow velocity in the presence of vorticity.

  6. Feedback in analog circuits

    CERN Document Server

    Ochoa, Agustin

    2016-01-01

    This book describes a consistent and direct methodology for the analysis and design of analog circuits, with particular application to circuits containing feedback. The analysis and design of circuits containing feedback is generally presented either by following a series of examples in which each circuit is simplified through the use of insight or experience (someone else's), or by a complete nodal-matrix analysis that generates lots of algebra. Neither of these approaches leads easily to insight into the design process. The author develops a systematic approach to circuit analysis, the Driving Point Impedance and Signal Flow Graphs (DPI/SFG) method, that does not require a priori insight into the circuit being considered and results in a factored analysis supporting the design function. This approach enables designers to account fully for loading and the bi-directional nature of elements both in the feedback path and in the amplifier itself, properties many times assumed negligible and ignored. Feedback circuits a...

  7. Beginning analog electronics through projects

    CERN Document Server

    Singmin, Andrew

    2001-01-01

    Analog electronics is the simplest way to start a fun, informative, learning program. Beginning Analog Electronics Through Projects, Second Edition was written with the needs of beginning hobbyists and students in mind. This revision of Andrew Singmin's popular Beginning Electronics Through Projects provides practical exercises, building techniques, and ideas for useful electronics projects. Additionally, it features new material on analog and digital electronics, and new projects for troubleshooting test equipment.Published in the tradition of Beginning Electronics Through Projects an

  8. Mathematical problem solving by analogy.

    Science.gov (United States)

    Novick, L R; Holyoak, K J

    1991-05-01

    We report the results of 2 experiments and a verbal protocol study examining the component processes of solving mathematical word problems by analogy. College students first studied a problem and its solution, which provided a potential source for analogical transfer. Then they attempted to solve several analogous problems. For some problems, subjects received one of a variety of hints designed to reduce or eliminate the difficulty of some of the major processes hypothesized to be involved in analogical transfer. Our studies yielded 4 major findings. First, the process of mapping the features of the source and target problems and the process of adapting the source solution procedure for use in solving the target problem were clearly distinguished: (a) Successful mapping was found to be insufficient for successful transfer and (b) adaptation was found to be a major source of transfer difficulty. Second, we obtained direct evidence that schema induction is a natural consequence of analogical transfer. The schema was found to co-exist with the problems from which it was induced, and both the schema and the individual problems facilitated later transfer. Third, for our multiple-solution problems, the relation between analogical transfer and solution accuracy was mediated by the degree of time pressure exerted for the test problems. Finally, mathematical expertise was a significant predictor of analogical transfer, but general analogical reasoning ability was not. The implications of the results for models of analogical transfer and for instruction were considered.

  9. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector that is used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.

  10. Generalized Maximum Entropy

    Science.gov (United States)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
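    The proposal can be illustrated by simple Monte Carlo propagation: draw the uncertain constraint value from an assumed Gaussian, solve the classic MaxEnt problem for each draw, and inspect the spread of the resulting point probabilities. The alphabet, energies, and Gaussian parameters below are arbitrary assumptions, and the sampling loop stands in for the paper's analytical treatment.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
energy = np.array([0.0, 1.0, 2.0, 3.0])

def maxent(mean_value):
    """Classic MaxEnt distribution under a fixed mean-energy constraint."""
    f = lambda b: np.exp(-b * energy) @ energy / np.exp(-b * energy).sum() - mean_value
    beta = brentq(f, -60, 60)
    p = np.exp(-beta * energy)
    return p / p.sum()

# Uncertain constraint: empirical mean 1.4 with std 0.15 (assumed Gaussian),
# clipped to the feasible range of mean energies.
draws = np.clip(rng.normal(1.4, 0.15, size=2000), 0.05, 2.95)
samples = np.array([maxent(m) for m in draws])

print("classic MaxEnt point probabilities:", np.round(maxent(1.4), 3))
print("mean of propagated probabilities:  ", np.round(samples.mean(axis=0), 3))
print("std  of propagated probabilities:  ", np.round(samples.std(axis=0), 3))
```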

  11. [Analogies and analogy research in technical biology and bionics].

    Science.gov (United States)

    Nachtigall, Werner

    2010-01-01

    The procedural approaches of Technical Biology and Bionics are characterized, and analogy research is identified as their common basis. The actual creative aspect in bionical research lies in recognizing and exploiting technically oriented analogies underlying a specific biological prototype to indicate a specific technical application.

  12. A Novel cooked extruded lentils analog: physical and chemical properties.

    Science.gov (United States)

    Abu-Ghoush, Mahmoud; Alavi, Sajid; Al-Shathri, Abdulaziz

    2015-07-01

    Our aim is to develop an extruded lentil analog. Lentil analogs with six formulations were produced using pilot-scale single-screw (SS) and twin-screw (TS) extruders. Texture analysis of lentil analogs prepared for consumption revealed that the products formulated with 60:40 and 70:30 soy:wheat ratios exhibited significantly higher hardness and adhesiveness and lower springiness compared with all other treatments. Differential Scanning Calorimetry (DSC) results indicated that the starches in the dry blend were completely gelatinized by extrusion at 100 °C for all treatments. For the best treatment, the maximum viscosity peak for TS was reached 5.58 min into the run, at 89.9 °C. This lentil analog can provide a high-quality product that can be used as a substitute for regular lentils.

  13. Fixed-Term Homotopy

    Directory of Open Access Journals (Sweden)

    Hector Vazquez-Leal

    2013-01-01

    A new tool for the solution of nonlinear differential equations is presented. The Fixed-Term Homotopy (FTH) delivers a high-precision representation of the nonlinear differential equation using only a few linear algebraic terms. In addition to this tool, a procedure based on Laplace-Padé resummation is proposed to deal with the truncated power series resulting from the FTH method. In order to assess the benefits of this proposal, two nonlinear problems are solved and compared against other semianalytic methods. The obtained results show that FTH is a powerful tool capable of generating highly accurate solutions compared with other methods in the literature.

  14. Application of Op-amp Fixators in Analog Circuits

    Directory of Open Access Journals (Sweden)

    R. Rohith Krishnan

    2016-10-01

    Nullor elements have applications not only in analog behavioral modeling but also in analog circuit design and analysis. The fixator-norator pair, an emerging tool in analog design, is a combination of a nullor and sources. A method for the realization of the fixator-norator pair is discussed in this paper. Applying fixator-norator pairs to a circuit makes it possible to perform the AC and DC designs in a linear-like way. The fixator fixes a critical biasing spec of the design, whereas the pairing norator finds the value of the power-conducting components or DC sources that meets the design. A scaling amplifier design, an active load design and a CMOS differential amplifier design are provided as examples to demonstrate the procedure and the methodology.

  15. Fixed sagittal plane imbalance.

    Science.gov (United States)

    Savage, Jason W; Patel, Alpesh A

    2014-12-01

    Study Design Literature review. Objective To discuss the evaluation and management of fixed sagittal plane imbalance. Methods A comprehensive literature review was performed on the preoperative evaluation of patients with sagittal plane malalignment, as well as the surgical strategies to address sagittal plane deformity. Results Sagittal plane imbalance is often caused by de novo scoliosis or iatrogenic flat back deformity. Understanding the etiology and magnitude of sagittal malalignment is crucial in realignment planning. Objective parameters have been developed to guide surgeons in determining how much correction is needed to achieve favorable outcomes. Currently, the goals of surgery are to restore the sagittal vertical axis to within accepted limits. Conclusion Sagittal plane malalignment is an increasingly recognized cause of pain and disability. Treatment of sagittal plane imbalance varies according to the etiology, location, and severity of the deformity. Fixed sagittal malalignment often requires complex reconstructive procedures that include osteotomy correction. Reestablishing harmonious spinopelvic alignment is associated with significant improvement in health-related quality-of-life outcome measures and patient satisfaction.

  16. Fixed and Sunk Costs Revisited.

    Science.gov (United States)

    Wang, X. Henry; Yang, Bill Z.

    2001-01-01

    Attempts to clarify the concepts of, and the link between, fixed costs and sunk costs. Argues that the root of confusion is the inconsistency in defining the term fixed costs. Consistently defines fixed and sunk costs, and describes how instructors must teach under these definitions. (RLH)

  17. Conjecturing via Reconceived Classical Analogy

    Science.gov (United States)

    Lee, Kyeong-Hwa; Sriraman, Bharath

    2011-01-01

    Analogical reasoning is believed to be an efficient means of problem solving and construction of knowledge during the search for and the analysis of new mathematical objects. However, there is growing concern that despite everyday usage, learners are unable to transfer analogical reasoning to learning situations. This study aims at facilitating…

  18. Musik som analogi og metafor

    DEFF Research Database (Denmark)

    2014-01-01

    Contains the subchapters: 2.5.1 Music as analogy, 2.5.2 Music as metaphor, and 2.5.3 The psychological functions of music - a taxonomy and metaphorical listening to four Baroque movements.

  19. Natural analog studies: Licensing perspective

    Energy Technology Data Exchange (ETDEWEB)

    Bradbury, J.W. [Nuclear Regulatory Commission, Washington, DC (United States)

    1995-09-01

    This report describes the licensing perspective on the term "natural analog studies" as used in CFR Part 60. It describes the misunderstandings related to its definition which have become evident during discussions at U.S. Nuclear Regulatory Commission meetings and tries to clarify the appropriate applications of natural analog studies to aspects of repository site characterization.

  20. Fabricating abutment crowns for existing removable partial dentures using custom resin clasp analogs.

    Science.gov (United States)

    Livaditis, G J

    1998-11-01

    A universal approach for fabricating abutment crowns for existing removable partial dentures is described. A replica (analog) of the clasp assembly is generated and transferred to a traditional working cast, which includes the abutment die. The analog is incorporated into the working cast as a removable component to allow the formation of the crown contours. The article reviews in detail the procedures required to transfer accurately all the essential components and information from the mouth to the working cast while allowing the patient uninterrupted use of the removable partial denture. Prestabilizing the removable partial denture, creating the analog impression, avoiding errors due to soft tissue components, forming a precise analog base, selecting materials, generating a rigid resin analog, and prescribing a path of insertion and withdrawal to the analog are described. The method replicates all types of clasps and can generate all types of fixed prosthodontic retainers to function harmoniously with the existing partial denture.

  1. Fixed Access Network Sharing

    Science.gov (United States)

    Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio

    2015-12-01

    Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital in order to cover any significant geographic coverage. Hence many companies are forming partnerships and joint-ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bi-lateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.

  2. Safe switching from a pdFIX (Immunine®) to a rFIX (Bax326).

    Science.gov (United States)

    Solano Trujillo, M H; Stasyshyn, O; Rusen, L; Serban, M; Lamas, J L; Perina, F G; Urasinski, T; Oh, M; Knowlton, W B; Valenta-Singer, B; Pavlova, B G; Abbuehl, B

    2014-09-01

    The ability to switch between coagulation factors safely is of common interest to haemophilia patients and treating physicians. This is the first formal prospective comparative evaluation of safety, efficacy and incremental recovery of a plasma-derived FIX (pdFIX) and a recombinant FIX (rFIX) in the same haemophilia B patients following a switch from the pdFIX Immunine® to the recently developed rFIX product Bax326. Patients (aged <65 years) who completed a pretreatment study, which prospectively documented exposure to Immunine® and monitored FIX inhibitors while the patients received prophylactic treatment, were transitioned into pivotal (patients aged 12-65 years) and paediatric (patients aged <12 years) clinical studies investigating prophylaxis and treatment of bleeding episodes with Bax326. None of the 44 patients developed inhibitory or specific binding anti-FIX antibodies during the course of the studies. A total of 38 unrelated adverse events (AEs) occurred in 20/44 (45.5%) subjects during the Immunine® study. Following the switch to Bax326, 51 AEs were reported in 25/44 (56.8%) subjects. The incidence of AEs related to Bax326 treatment (two episodes of dysgeusia in one patient) was low (2.3%); there were no serious adverse reactions. The comparison between Immunine® and Bax326 demonstrated analogous haemostatic characteristics and annualized bleeding rates. Overall, there is direct evidence indicating a safe and clinically effective transition from a pdFIX (Immunine®) to a newly developed rFIX (Bax326) for prophylaxis and treatment of bleeding in previously treated patients of all age cohorts with severe or moderately severe haemophilia B.

  3. Distributed Episodic and Analogical Reasoning (DEAR)

    Science.gov (United States)

    2010-04-01

    Means-ends analysis (Carbonell, 1983); modeling of analogy making: Structure Mapping Theory (SMT) (Gentner, 1984); agent-based approach to analogy making ... Mapping Engine (SME) (Forbus, 1990); learning by analogy with larger domains: Prodigy/Analogy (Veloso and Carbonell, 1991); Analogical Retrieval Engine MAC/FAC

  4. Analog-to-digital conversion

    CERN Document Server

    Pelgrom, Marcel J M

    2010-01-01

    The design of an analog-to-digital converter or digital-to-analog converter is one of the most fascinating tasks in micro-electronics. In a converter the analog world with all its intricacies meets the realm of the formal digital abstraction. Both disciplines must be understood for an optimum conversion solution. In a converter also system challenges meet technology opportunities. Modern systems rely on analog-to-digital converters as an essential part of the complex chain to access the physical world. And processors need the ultimate performance of digital-to-analog converters to present the results of their complex algorithms. The same progress in CMOS technology that enables these VLSI digital systems creates new challenges for analog-to-digital converters: lower signal swings, less power and variability issues. Last but not least, the analog-to-digital converter must follow the cost reduction trend. These changing boundary conditions require micro-electronics engineers to consider their design choices for...

  5. Molecular modeling of fentanyl analogs

    Directory of Open Access Journals (Sweden)

    LJILJANA DOSEN-MICOVIC

    2004-11-01

    Fentanyl is a highly potent and clinically widely used narcotic analgesic. A large number of its analogs have been synthesized, some of which (sufentanil and alfentanil) are also in clinical use. Theoretical studies in recent years have afforded a better understanding of the structure-activity relationships of this class of opiates and allowed insight into the molecular mechanism of the interactions of fentanyl analogs with their receptors. An overview of the current computational techniques for modeling fentanyl analogs, their receptors and ligand-receptor interactions is presented in this paper.

  6. Sulfonimidamide analogs of oncolytic sulfonylureas.

    Science.gov (United States)

    Toth, J E; Grindey, G B; Ehlhardt, W J; Ray, J E; Boder, G B; Bewley, J R; Klingerman, K K; Gates, S B; Rinzel, S M; Schultz, R M; Weir, L C; Worzalla, J F

    1997-03-14

    A series of sulfonimidamide analogs of the oncolytic diarylsulfonylureas was synthesized and evaluated for (1) in vitro cytotoxicity against CEM cells, (2) in vivo antitumor activity against subaxillary implanted 6C3HED lymphosarcoma, and (3) metabolic breakdown to the o-sulfate of p-chloroaniline. The separated enantiomers of one sulfonimidamide analog displayed very different activities in the in vivo screening model. In general, several analogs demonstrated excellent growth inhibitory activity in the 6C3HED model when dosed orally or intraperitoneally. A correlative structure-activity relationship to the oncolytic sulfonylureas was not apparent.

  7. An Optimized Analogy-Based Project Effort Estimation

    Directory of Open Access Journals (Sweden)

    Mohammad Azzeh

    2014-05-01

    Despite the predictive performance of Analogy-Based Estimation (ABE) in generating better effort estimates, there is no consensus on (1) how to predetermine the appropriate number of analogies and (2) which adjustment technique produces better estimates. Moreover, no prior work has attempted to optimize both the number of analogies and the feature distance weights for each test project. Rather than using a fixed number, it may be better to optimize this value for each project individually, and then to adjust the retrieved analogies by optimizing and approximating the complex relationships between features and reflecting that approximation in the final estimate. The Artificial Bees Algorithm is utilized to find, for each test project, the appropriate number of closest projects and the feature distance weights that are used to adjust those analogies' efforts. The proposed technique has been applied and validated on 8 publicly available datasets from the PROMISE repository. The results obtained show that (1) the predictive performance of ABE is noticeably improved and (2) the number of analogies varies remarkably from one test project to another. While there are many techniques to adjust ABE, using an optimization algorithm provides two solutions in one technique and appears useful for datasets with complex structure.
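    A compact sketch of the plain analogy-based estimation loop that the paper optimizes: for a new project, retrieve the k most similar historical projects under a weighted Euclidean distance over normalized features and average their efforts. Here the per-project search over k uses a simple leave-one-out grid as a stand-in for the artificial bees algorithm, the feature weights are left uniform, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic historical projects: 4 features and an effort value per project.
n, d = 60, 4
F = rng.random((n, d))
effort = 100 * F[:, 0] + 40 * F[:, 1] ** 2 + 10 * rng.random(n)
F = (F - F.min(0)) / (F.max(0) - F.min(0))         # min-max normalization

def abe_estimate(hist_F, hist_effort, query, k, w):
    """Analogy-based estimate: mean effort of the k nearest analogues
    under a weighted Euclidean distance."""
    dist = np.sqrt(((hist_F - query) ** 2 * w).sum(axis=1))
    return hist_effort[np.argsort(dist)[:k]].mean()

def pick_k(hist_F, hist_effort, w, candidates=range(1, 8)):
    """Tune k via leave-one-out error on the history (a simple stand-in
    for the per-project artificial-bees search in the paper)."""
    m = len(hist_effort)
    def loo_error(k):
        errs = [abs(abe_estimate(np.delete(hist_F, j, 0), np.delete(hist_effort, j),
                                 hist_F[j], k, w) - hist_effort[j]) for j in range(m)]
        return np.mean(errs)
    return min(candidates, key=loo_error)

# Treat the last project as the new one to be estimated.
hist_F, hist_e, query, true_e = F[:-1], effort[:-1], F[-1], effort[-1]
w = np.ones(d)                                      # uniform feature weights baseline
k = pick_k(hist_F, hist_e, w)
est = abe_estimate(hist_F, hist_e, query, k, w)
print(f"chosen k={k}, estimate={est:.1f}, actual={true_e:.1f}")
```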

  8. Analog CMOS contrastive Hebbian networks

    Science.gov (United States)

    Schneider, Christian; Card, Howard

    1992-09-01

    CMOS VLSI circuits implementing an analog neural network with on-chip contrastive Hebbian learning and capacitive synaptic weight storage have been designed and fabricated. Weights are refreshed by periodic repetition of the training data. To evaluate circuit performance in a medium-sized system, these circuits were used to build a 132 synapse neural network. An adaptive neural system, such as the one described in this paper, can compensate for imperfections in the components from which it is constructed, and thus it is possible to build this type of system using simple, silicon area-efficient analog circuits. Because these analog VLSI circuits are far more compact than their digital counterparts, analog VLSI neural network implementations are potentially more efficient than digital ones.

  9. Solving a problem by analogy

    Science.gov (United States)

    Easton, Don

    1999-03-01

    This note is a description of a student solution to a problem. I found the solution exciting because it exemplifies the kind of solution by analogy that Feynman describes in The Feynman Lectures on Physics.

  10. Analog filters in nanometer CMOS

    CERN Document Server

    Uhrmann, Heimo; Zimmermann, Horst

    2014-01-01

    Starting from the basics of analog filters and the poor transistor characteristics in nanometer CMOS, 10 high-performance analog filters developed by the authors in 120 nm and 65 nm CMOS are described extensively. Among them are gm-C filters, current-mode filters, and active filters for system-on-chip realization for Bluetooth, WCDMA, UWB, DVB-H, and LTE applications. For the active filters several operational amplifier designs are described. The book, furthermore, contains a review of the newest state of research on low-voltage low-power analog filters. To cover the topic of the book comprehensively, linearization issues and measurement methods for the characterization of advanced analog filters are introduced in addition. Numerous elaborate illustrations promote an easy comprehension. This book will be of value to engineers and researchers in industry as well as scientists and Ph.D students at universities. The book is also recommendable to graduate students specializing on nanoelectronics, microelectronics ...

  11. Analog versus digital: extrapolating from electronics to neurobiology.

    Science.gov (United States)

    Sarpeshkar, R

    1998-10-01

    We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog computation nor digital computation but, rather, a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.

  12. Analog electronic neural network circuits

    Energy Technology Data Exchange (ETDEWEB)

    Graf, H.P.; Jackel, L.D. (AT and T Bell Labs., Holmdel, NJ (USA))

    1989-07-01

    The large interconnectivity and moderate precision required in neural network models present new opportunities for analog computing. This paper discusses analog circuits for a variety of problems such as pattern matching, optimization, and learning. Most of the circuits built so far are relatively small, exploratory designs. The most mature circuits are those for template matching. Chips performing this function are now being applied to pattern recognition problems.

  13. Analog approach to mixed analog-digital circuit simulation

    Science.gov (United States)

    Ogrodzki, Jan

    2013-10-01

    Logic simulation of digital circuits is a well explored research area. Most up-to-date CAD tools for digital circuit simulation use an event-driven, selective-trace algorithm and Hardware Description Languages (HDLs), e.g. the VHDL. These techniques also enable simulation of mixed circuits, where an analog part is connected to the digital one through D/A and A/D converters. Event-driven mixed simulation applies a unified method, dedicated to digital circuits, to both the digital and analog subsystems. In recent years HDL techniques have also been applied to mixed domains, as e.g. in the VHDL-AMS. This paper presents an approach dual to the event-driven one, in which the analog part, together with the digital one and the converters, is treated as an analog subsystem and simulated by means of circuit simulation techniques. In our approach the analog solver yields some numerical problems caused by the nonlinearities of the digital elements. Efficient methods for overcoming these difficulties have been proposed.

  14. Maximum life spiral bevel reduction design

    Science.gov (United States)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-07-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  15. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    Science.gov (United States)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  16. All-optical analog comparator

    Science.gov (United States)

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Zhao, Dongliang; Zhao, Yongpeng; Wang, Yuncai

    2016-08-01

    An analog comparator is one of the core units in all-optical analog-to-digital conversion (AO-ADC) systems, which digitizes different amplitude levels into two levels of logical ‘1’ or ‘0’ by comparing with a defined decision threshold. Although various outstanding photonic ADC approaches have been reported, almost all of them necessitate an electrical comparator to carry out this binarization. The use of an electrical comparator is in contradiction to the aim of developing all-optical devices. In this work, we propose a new concept of an all-optical analog comparator and numerically demonstrate an implementation based on a quarter-wavelength-shifted distributed feedback laser diode (QWS DFB-LD) with multiple quantum well (MQW) structures. Our results show that the all-optical comparator is very well suited for true AO-ADCs, enabling the whole digital conversion from an analog optical signal (continuous-time signal or discrete pulse signal) to a binary representation totally in the optical domain. In particular, this all-optical analog comparator possesses a low threshold power (several mW), high extinction ratio (up to 40 dB), fast operation rate (of the order of tens of Gb/s) and a step-like transfer function.

  17. Test Wiseness and Analogy Test Performance

    Science.gov (United States)

    Moore, James C.

    1971-01-01

    Subjects received self-instruction on how to approach analogy questions. Instruction was directed toward knowledge of the general format of analogy questions in standardized tests and the 15 types of relationships commonly asked for in analogy questions. An analogies post-test showed a significant effect for the group. (Author)

  18. Improved parameterized complexity of the maximum agreement subtree and maximum compatible tree problems.

    Science.gov (United States)

    Berry, Vincent; Nicolas, François

    2006-01-01

    Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.

  19. Analog electronics for radiation detection

    CERN Document Server

    2016-01-01

    Analog Electronics for Radiation Detection showcases the latest advances in readout electronics for particle, or radiation, detectors. Featuring chapters written by international experts in their respective fields, this authoritative text: Defines the main design parameters of front-end circuitry developed in microelectronics technologies Explains the basis for the use of complementary metal oxide semiconductor (CMOS) image sensors for the detection of charged particles and other non-consumer applications Delivers an in-depth review of analog-to-digital converters (ADCs), evaluating the pros and cons of ADCs integrated at the pixel, column, and per-chip levels Describes incremental sigma delta ADCs, time-to-digital converter (TDC) architectures, and digital pulse-processing techniques complementary to analog processing Examines the fundamental parameters and front-end types associated with silicon photomultipliers used for single visible-light photon detection Discusses pixel sensors ...

  20. Analogy between gambling and measurement-based work extraction

    Science.gov (United States)

    Vinkler, Dror A.; Permuter, Haim H.; Merhav, Neri

    2016-04-01

    In information theory, one area of interest is gambling, where mutual information characterizes the maximal gain in wealth growth rate due to knowledge of side information; the betting strategy that achieves this maximum is named the Kelly strategy. In the field of physics, it was recently shown that mutual information can characterize the maximal amount of work that can be extracted from a single heat bath using measurement-based control protocols, i.e. using ‘information engines’. However, to the best of our knowledge, no relation between gambling and information engines has been presented before. In this paper, we briefly review the two concepts and then demonstrate an analogy between gambling, where bits are converted into wealth, and information engines, where bits representing measurements are converted into energy. From this analogy follows an extension of gambling to the continuous-valued case, which is shown to be useful for investments in currency exchange rates or in the stock market using options. Moreover, the analogy enables us to use well-known methods and results from one field to solve problems in the other. We present three such cases: maximum work extraction when the probability distributions governing the system and measurements are unknown, work extraction when some energy is lost in each cycle, e.g. due to friction, and an analysis of systems with memory. In all three cases, the analogy enables us to use known results in order to obtain new ones.
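    The gambling half of the analogy is easy to verify numerically: with proportional (Kelly) betting on a race, the optimal exponential growth rate with side information exceeds the rate without it by exactly the mutual information I(X;Y), independent of the odds. The joint distribution and odds below are arbitrary example values.

```python
import numpy as np

# Joint distribution p(x, y): x = race outcome, y = side information.
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])
odds = np.array([2.5, 4.0, 3.0])           # payout per unit bet on each outcome

p_x = p_xy.sum(axis=1)                     # marginal over outcomes
p_y = p_xy.sum(axis=0)
p_x_given_y = p_xy / p_y                   # column j holds p(x | y=j)

# Kelly strategy: bet in proportion to the (conditional) outcome probabilities.
rate_no_side = np.sum(p_x * np.log2(p_x * odds))
rate_side = np.sum(p_xy * np.log2(p_x_given_y * odds[:, None]))

mutual_info = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))

print(f"growth-rate gain from side info: {rate_side - rate_no_side:.6f} bits")
print(f"mutual information I(X;Y):      {mutual_info:.6f} bits")   # identical
```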

  1. All-optical analog comparator

    OpenAIRE

    Pu Li; Xiaogang Yi; Xianglian Liu; Dongliang Zhao; Yongpeng Zhao; Yuncai Wang

    2016-01-01

    An analog comparator is one of the core units in all-optical analog-to-digital conversion (AO-ADC) systems, which digitizes different amplitude levels into two levels of logical ‘1’ or ‘0’ by comparing with a defined decision threshold. Although various outstanding photonic ADC approaches have been reported, almost all of them necessitate an electrical comparator to carry out this binarization. The use of an electrical comparator is in contradiction to the aim of developing all-optical device...

  2. Varieties of noise: analogical reasoning in synthetic biology.

    Science.gov (United States)

    Knuuttila, Tarja; Loettgers, Andrea

    2014-12-01

    The picture of synthetic biology as a kind of engineering science has largely created the public understanding of this novel field, covering both its promises and risks. In this paper, we will argue that the actual situation is more nuanced and complex. Synthetic biology is a highly interdisciplinary field of research located at the interface of physics, chemistry, biology, and computational science. All of these fields provide concepts, metaphors, mathematical tools, and models, which are typically utilized by synthetic biologists by drawing analogies between the different fields of inquiry. We will study analogical reasoning in synthetic biology through the emergence of the functional meaning of noise, which marks an important shift in how engineering concepts are employed in this field. The notion of noise serves also to highlight the differences between the two branches of synthetic biology: the basic science-oriented branch and the engineering-oriented branch, which differ from each other in the way they draw analogies to various other fields of study. Moreover, we show that fixing the mapping between a source domain and the target domain seems not to be the goal of analogical reasoning in actual scientific practice.

  3. Singularity of Farhi-Gutmann Analog Quantum Search

    Institute of Scientific and Technical Information of China (English)

    LUO Shun-Long; ZHANG Zheng-Min

    2004-01-01

    We show that the Farhi-Gutmann analog quantum search is a singular algorithm in the following sense: when the original driving Hamiltonian is perturbed slightly such that it is made of projections to the starting state and to the target state with different energies, the maximum fidelity (transition probability) between the searching state and the target state is strictly less than 1 over the entire evolution period, and the first time to achieve this maximum fidelity is of order √N/√(1+cN), whose behavior depends crucially on whether c = 0 or not (here N is the total number of items, and the original Farhi-Gutmann case corresponds to c = 0). Moreover, when c ≠ 0 and N tends to infinity, the maximum fidelity tends to zero, and the first time to achieve the maximum fidelity tends to a positive constant! The condition for guaranteeing the algorithm's efficiency is determined explicitly.

  4. Mathematical Analogy and Metaphorical Insight

    Science.gov (United States)

    Zwicky, Jan

    2010-01-01

    How are we to understand the power of certain literary metaphors? The author argues that the apprehension of good metaphors is importantly similar to the apprehension of fruitful mathematical analogies: both involve a structural realignment of vision. The author then explores consequences of this claim, drawing conceptually significant parallels…

  5. Geometrical Analogies in Mathematics Lessons

    Science.gov (United States)

    Eid, Wolfram

    2007-01-01

    A typical form of thinking to approach problem solutions humanly is thinking in analogous structures. Therefore school, especially mathematical lessons should help to form and to develop corresponding heuristic abilities of the pupils. In the contribution, a summary of possibilities of mathematics lessons regarding this shall particularly be…

  6. Schema Training in Analogical Reasoning.

    Science.gov (United States)

    Robins, Shani; Mayer, Richard E.

    1993-01-01

    In 3 experiments, 93, 97, and 86 college students, respectively, learned how to solve 20 verbal analogy problems and took transfer and memory tests. Results are inconsistent with active responding theory and further indicate that schema induction is maximized when the schemas are made salient and the cognitive system is not overloaded. (SLD)

  7. 49205 ANALOGE OG DIGITALE FILTRE

    DEFF Research Database (Denmark)

    Gaunholt, Hans

    1997-01-01

    These lecture notes treat the fundamental theory and the most commonly used design methods for passive, active and digital filters, with special emphasis on microelectronic realizations. The lecture notes cover 75% of the material taught in the course 49205 Analog and Digital Filters...

  8. Analog Input Data Acquisition Software

    Science.gov (United States)

    Arens, Ellen

    2009-01-01

    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  9. International Alligator Rivers Analog Project

    Energy Technology Data Exchange (ETDEWEB)

    Bichard, G.F.

    1988-01-01

    The Australian Nuclear Science and Technology Organization (ANSTO), the Japan Atomic Energy Research Institute, the Swedish Nuclear Power Inspectorate, the U.K. Department of the Environment, the US Nuclear Regulatory Commission (NRC), and the Power Reactor and Nuclear Fuel Development Corporation of Japan are participating under the aegis of the Nuclear Energy Agency in the International Alligator Rivers Analog Project. The project has a duration of 3 yr, starting in 1988. The project has grown out of a research program on uranium ore bodies as analogs of high-level waste (HLW) repositories undertaken by ANSTO supported by the NRC. A primary objective of the project is to develop an approach to radionuclide transport model validation that may be used by the participants to support assessments of the safety of radioactive waste repositories. The approach involves integrating mathematical and physical modeling with hydrological and geochemical field and laboratory investigations of the analog site. The Koongarra uranium ore body has been chosen as the analog site because it has a secondary ore body that has formed over the past million years as a result of leaching by groundwater flowing through fractures in the primary ore body.

  10. National Radiological Fixed Lab Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Radiological Fixed Laboratory Data Asset includes data produced in support of various clients such as other EPA offices, EPA Regional programs, DOE,...

  11. Elevated Fixed Platform Test Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Elevated Fixed Platform (EFP) is a helicopter recovery test facility located at Lakehurst, NJ. It consists of a 60 by 85 foot steel and concrete deck built atop...

  12. Fixed Exchange Rates and Trade

    OpenAIRE

    Michael W. Klein; Jay C. Shambaugh

    2004-01-01

    A classic argument for a fixed exchange rate is its promotion of trade. Empirical support for this, however, is mixed. While one branch of research consistently shows a small negative effect of exchange rate volatility on trade, another, more recent, branch presents evidence of a large positive impact of currency unions on trade. This paper helps resolve this disconnect. Our results, which use a new data-based classification of fixed exchange rate regimes, show a large, significant effect of ...

  13. Hamiltonian system for orthotropic plate bending based on analogy theory

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the analogy between plane elasticity and plate bending, as well as variational principles of mixed energy, the Hamiltonian system is extended to orthotropic plate bending problems in this paper. Thus many effective methods of mathematical physics, such as separation of variables and eigenfunction expansion, can be employed in orthotropic plate bending problems just as they are used in plane elasticity. Analytical solutions for the rectangular plate are presented directly, which expands the range of available analytical solutions. There is an essential distinction between this method and the traditional semi-inverse method. Numerical results for an orthotropic plate with two lateral sides fixed are included to demonstrate the effectiveness and accuracy of this method.

  14. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  15. Maximum margin Bayesian network classifiers.

    Science.gov (United States)

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
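
    A hedged toy illustration (not the authors' conjugate-gradient algorithm): the multiclass log-margin that such discriminative training enlarges, evaluated here for a naive Bayes model with Bernoulli features; all parameter values below are invented.

    ```python
    import numpy as np

    def log_joint(theta_prior, theta_feat, x, c):
        """log P(c, x) for a naive Bayes model with Bernoulli features."""
        lp = np.log(theta_prior[c])
        lp += np.sum(x * np.log(theta_feat[c]) + (1 - x) * np.log(1 - theta_feat[c]))
        return lp

    def margin(theta_prior, theta_feat, x, y):
        """log P(y, x) minus the best competing class score; positive = correct."""
        scores = np.array([log_joint(theta_prior, theta_feat, x, c)
                           for c in range(len(theta_prior))])
        rival = np.max(np.delete(scores, y))
        return scores[y] - rival

    theta_prior = np.array([0.6, 0.4])
    theta_feat = np.array([[0.8, 0.2, 0.7],   # P(feature = 1 | class 0)
                           [0.3, 0.9, 0.4]])  # P(feature = 1 | class 1)
    x = np.array([1, 0, 1])
    print(margin(theta_prior, theta_feat, x, y=0))  # positive margin for class 0
    ```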

  16. Analog circuit design art, science and personalities

    CERN Document Server

    Williams, Jim

    1991-01-01

    This book is far more than just another tutorial or reference guide - it's a tour through the world of analog design, combining theory and applications with the philosophies behind the design process. Readers will learn how leading analog circuit designers approach problems and how they think about solutions to those problems. They'll also learn about the `analog way' - a broad, flexible method of thinking about analog design tasks.A comprehensive and useful guide to analog theory and applications. Covers visualizing the operation of analog circuits. Looks at how to rap

  17. A Comparison Simulation of Fixed-fixed Type MEMS Switches

    Science.gov (United States)

    Rezazadeh, G.; Sadeghian, H.; Malekpour, E.

    2006-04-01

    In the present work, the pull-in voltage of fixed-fixed end type MEMS switches with varying electrostatic area has been calculated using a distributed model and applying a fully nonlinear finite difference discretization method. The governing nonlinear differential equation has been derived using the variational principle for a multi-domain electromechanically coupled system. The numerical results for the beam with varying electrostatic area have been compared with the results of the coupled-domain finite element method, and very good agreement has been achieved.

  18. Holism and nonseparability by analogy

    Science.gov (United States)

    Arageorgis, Aristidis

    2013-08-01

    This paper explores the issues of holism and nonseparability in relativistic quantum field theory (QFT) by focusing on an analog of the typical model featuring in many discussions of holism and nonseparability in nonrelativistic quantum mechanics. It is argued that the quantum field theoretic model does exhibit holism in a metaphysical sense and that there are plausible grounds to view QFT holistic in an epistemological sense. However, the complexities arising from the fact that quantum fields have infinite degrees of freedom prohibit the exploitation of the elaborated analogy toward demonstrating that the QFT model exhibits the kind of state nonseparability familiar from ordinary quantum mechanics. Still, it is argued that the QFT model does satisfy a rather weak epistemological criterion for state nonseparability.

  19. Synaptic dynamics in analog VLSI.

    Science.gov (United States)

    Bartolozzi, Chiara; Indiveri, Giacomo

    2007-10-01

    Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role for learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The circuit proposed is based on a computational model that fits the real postsynaptic currents with exponentials. We present experimental data showing how the circuit exhibits realistic dynamics and show how it can be connected to additional modules for implementing a wide range of synaptic properties.
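
    A brief sketch of the kind of computational model referred to above, with assumed parameter values: a postsynaptic current fitted by a decaying exponential that summates over input spikes.

    ```python
    import numpy as np

    def epsc(t, spike_times, i_peak=1.0, tau=0.010):
        """Sum of exponentially decaying currents triggered at each spike time (seconds)."""
        i = np.zeros_like(t)
        for ts in spike_times:
            i += np.where(t >= ts, i_peak * np.exp(-(t - ts) / tau), 0.0)
        return i

    t = np.linspace(0.0, 0.1, 1000)
    current = epsc(t, spike_times=[0.010, 0.030, 0.035])
    print(current.max())  # successive spikes summate before the current decays
    ```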

  20. Mechanical Analogies of Fractional Elements

    Institute of Scientific and Technical Information of China (English)

    HU Kai-Xin; ZHU Ke-Qin

    2009-01-01

    A Fractional element model describes a special kind of viscoelastic material.Its stress is proportional to the fractional-order derivative of strain. Physically the mechanical analogies of fractional elements can be represented by spring-dashpot fractal networks. We introduce a constitutive operator in the constitutive equations of viscoelastic materials.To derive constitutive operators for spring-dashpot fractal networks, we use Heaviside operational calculus, which provides explicit answers not otherwise obtainable simply.Then the series-parallel formulas for the constitutive operator are derived. Using these formulas, a constitutive equation of fractional element with 1/2-order derivative is obtained.Finally we find the way to derive the constitutive equations with other fractional-order derivatives and their mechanical analogies.
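
    For orientation, a hedged restatement of the constitutive law in question; the symbols E, τ and α are assumed notation, not taken from the paper.

    ```latex
    % Fractional element: stress proportional to a fractional-order derivative of strain,
    \[
      \sigma(t) \;=\; E\,\tau^{\alpha}\,\frac{\mathrm{d}^{\alpha}\varepsilon(t)}{\mathrm{d}t^{\alpha}},
      \qquad 0 \le \alpha \le 1,
    \]
    % recovering a spring for alpha = 0 (sigma = E*eps) and a dashpot for alpha = 1
    % (sigma = E*tau*d(eps)/dt); the case derived in the paper corresponds to alpha = 1/2.
    ```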

  1. Analog-to-digital conversion

    CERN Document Server

    Pelgrom, Marcel

    2017-01-01

    This textbook is appropriate for use in graduate-level curricula in analog-to-digital conversion, as well as for practicing engineers in need of a state-of-the-art reference on data converters. It discusses various analog-to-digital conversion principles, including sampling, quantization, reference generation, nyquist architectures and sigma-delta modulation. This book presents an overview of the state of the art in this field and focuses on issues of optimizing accuracy and speed, while reducing the power level. This new, third edition emphasizes novel calibration concepts, the specific requirements of new systems, the consequences of 22-nm technology and the need for a more statistical approach to accuracy. Pedagogical enhancements to this edition include additional, new exercises, solved examples to introduce all key, new concepts and warnings, remarks and hints, from a practitioner’s perspective, wherever appropriate. Considerable background information and practical tips, from designing a PCB, to lay-o...

  2. Diclofenac Induced Fixed Drug Eruption

    Directory of Open Access Journals (Sweden)

    Dr. Umeshchandra C Honnaddi

    2016-02-01

    Background: Diclofenac is the most commonly used non-steroidal anti-inflammatory drug (NSAID) for treating various inflammatory and painful conditions. It is generally well tolerated; gastric upset is the most common adverse effect. However, very few cases of fixed drug eruptions have been reported. Here we present a case of diclofenac-induced fixed drug eruption. A 62-year-old male patient developed a fixed drug eruption with plaques on the left thigh two days after receiving diclofenac for osteoarthritic pain. Other etiologies, including insect bite and infections, were ruled out. One week after stopping the drug, the lesions had subsided. Diclofenac was strongly suspected as the causal drug. CD8+ effector T-cells have been shown to play an important role. However, it seems to be a reversible and drug-related event. Although it is not life-threatening, fixed drug eruption can have a significant effect on the quality of life of patients. Conclusion: Diclofenac is one of the NSAIDs most commonly prescribed by physicians. It is usually well tolerated; gastric upset is the most common adverse effect noted with this drug. This case is being reported to highlight that a drug as safe as diclofenac may also be associated with fixed drug eruptions.

  3. Low Power CMOS Analog Multiplier

    Directory of Open Access Journals (Sweden)

    Shipra Sachan

    2015-12-01

    In this paper, a low-power, low-voltage CMOS analog multiplier circuit is proposed. It is based on the flipped voltage follower and consists of four voltage adders and a multiplier core. The circuit was analyzed and designed in a 0.18 µm CMOS process model; simulation results show that, under a single 0.9 V supply voltage, it consumes only 31.8 µW of quiescent power and achieves a 110 MHz bandwidth.

  4. Abolishing the maximum tension principle

    Directory of Open Access Journals (Sweden)

    Mariusz P. Da̧browski

    2015-09-01

    We find a series of example theories for which the relativistic limit of maximum tension, F_max = c^4/(4G), represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
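
    A quick numeric check of the maximum-tension value quoted above (SI constants, illustrative only):

    ```python
    # Evaluate the conjectured maximum force F_max = c^4 / (4G) in SI units.
    c = 299_792_458.0   # speed of light, m/s
    G = 6.674e-11       # Newton's constant, m^3 kg^-1 s^-2
    F_max = c**4 / (4 * G)
    print(f"{F_max:.3e} N")  # ~3.0e43 N
    ```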

  5. Maximum efficiency of low-dissipation heat engines at arbitrary power

    Science.gov (United States)

    Holubec, Viktor; Ryabov, Artem

    2016-07-01

    We investigate maximum efficiency at a given power for low-dissipation heat engines. Close to maximum power, the maximum gain in efficiency scales as the square root of the relative loss in power, and this scaling is universal for a broad class of systems. For low-dissipation engines, we calculate the maximum gain in efficiency for an arbitrary fixed power. We show that engines working close to maximum power can operate at considerably larger efficiency than the efficiency at maximum power. Furthermore, we introduce universal bounds on maximum efficiency at a given power for low-dissipation heat engines. These bounds represent a direct generalization of the bounds on efficiency at maximum power obtained by Esposito et al (2010 Phys. Rev. Lett. 105 150603). We derive the bounds analytically in the regime close to maximum power and for small power values. For the intermediate regime we present strong numerical evidence for the validity of the bounds.
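
    For context, the efficiency-at-maximum-power bounds of the cited Esposito et al (2010) result that these bounds generalize to arbitrary fixed power, quoted here in standard notation (η_C is the Carnot efficiency); the form of the generalized bounds themselves is not reproduced here.

    ```latex
    \[
      \frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
    \]
    % attained in the two opposite limits of asymmetric dissipation at the hot
    % and cold contacts of the low-dissipation engine.
    ```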

  6. A global analog of Cheshire charge

    CERN Document Server

    McGraw, P

    1994-01-01

    It is shown that a model with a spontaneously broken global symmetry can support defects analogous to Alice strings, and a process analogous to Cheshire charge exchange can take place. A possible realization in superfluid He-3 is pointed out.

  7. The influence of fixed orthodontic appliances on halitosis.

    Science.gov (United States)

    Zurfluh, Monika A; van Waes, Hubertus J M; Filippi, Andreas

    2013-01-01

    Halitosis is a widely spread condition with numerous causes. The aim of this study was to investigate the influence of fixed orthodontic appliances on the occurrence of halitosis. 55 patients in an orthodontic practice were monitored at three points in time after application of the orthodontic appliance (T1: immediately after application, T2: 4 weeks after application, T3: 3 months after application). Monitoring included patient self-evaluation, plaque index, tongue coating index and organoleptic measurement. The subjective parameters taste, dry mouth and breath odor did not show statistical differences. However, with the presence of fixed orthodontic appliances, confidence when performing dental hygiene decreased statistically significantly (p = 0.003). Additionally, the tongue coating index showed a statistically significant difference between T1 and T2 (p = 0.012) as well as T1 and T3 (p < 0.001). Analogous results were found for organoleptic measurement (T1 and T2 [p = 0.002]; T1 and T3 [p < 0.001]) and plaque index (T1 and T2/T3 [p < 0.001]). Fixed orthodontic appliances lead to a statistically significant increase of the plaque and tongue coating indices. A statistically significant increase was also observed with organoleptic measurement scores. The suspected positive correlation between halitosis and fixed orthodontic appliances was confirmed. Halitosis can be an important indicator of oral health during orthodontic treatment and can serve as a motivating factor for adequate patient oral health care maintenance.

  8. Investigating visual analogies for visual insight problems

    OpenAIRE

    Corina Sas; Eric Luchian; Linden Ball

    2010-01-01

    Much research has focused on the impact of analogies in insight problem solving, but less work has investigated how the visual analogies for insight are actually constructed. Thus, it appears that in the search for their facilitative impact on the incubation effect, the understanding of what makes good visual analogies has somehow been lost. This paper presents preliminary work of constructing a set of 6 visual analogies and evaluating their impact on solving the visual problem of eight coins...

  9. Perfect and Imperfect Gauge Fixing

    CERN Document Server

    Shirzad, A

    2006-01-01

    Gauge fixing may be done in different ways. We show that using the chain structure to describe a constrained system, enables us to use either a perfect gauge, in which all gauged degrees of freedom are determined; or an imperfect gauge, in which some first class constraints remain as subsidiary conditions to be imposed on the solutions of the equations of motion. We also show that the number of constants of motion depends on the level in a constraint chain in which the gauge fixing condition is imposed. The relativistic point particle, electromagnetism and the Polyakov string are discussed as examples and perfect or imperfect gauges are distinguished.

  10. Fixed-point signal processing

    CERN Document Server

    Padgett, Wayne T

    2009-01-01

    This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught, and the limited precision implementation skills that are commonly required in fixed-point processors and field programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory
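
    A minimal illustration of the limited-precision arithmetic such texts address, assuming the common Q15 convention (16-bit words, 15 fractional bits); this is not an example from the book itself.

    ```python
    def to_q15(x):
        """Quantize a float in [-1, 1) to a Q15 integer."""
        return max(-32768, min(32767, int(round(x * 32768))))

    def q15_mul(a, b):
        """Multiply two Q15 numbers; the 30-bit product is shifted back to Q15."""
        return (a * b) >> 15

    a, b = to_q15(0.5), to_q15(-0.25)
    print(q15_mul(a, b) / 32768.0)  # ~ -0.125, up to quantization error
    ```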

  11. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

    Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthologonal designs; and multiple regression analysi

  12. Hegel, Analogy, and Extraterrestrial Life

    Science.gov (United States)

    Ross, Joseph T.

    Georg Wilhelm Friedrich Hegel rejected the possibility of life outside of the Earth, according to several scholars of extraterrestrial life. Their position is that the solar system and specifically the planet Earth is the unique place in the cosmos where life, intelligence, and rationality can be. The present study offers a very different interpretation of Hegel's statements about the place of life on Earth by suggesting that, although Hegel did not believe that there were other solar systems where rationality is present, he did in fact suggest that planets in general, not the Earth exclusively, have life and possibly also intelligent inhabitants. Analogical syllogisms are superficial, according to Hegel, insofar as they try to conclude that there is life on the Moon even though there is no evidence of water or air on that body. Similar analogical arguments for life on the Sun made by Johann Elert Bode and William Herschel were considered by Hegel to be equally superficial. Analogical arguments were also used by astronomers and philosophers to suggest that life could be found on other planets in our solar system. Hegel offers no critique of analogical arguments for life on other planets, and in fact Hegel believed that life would be found on other planets. Planets, after all, have meteorological processes and therefore are "living" according to his philosophical account, unlike the Moon, Sun, and comets. Whereas William Herschel was already finding great similarities between the Sun and the stars and had extended these similarities to the property of having planets or being themselves inhabitable worlds, Hegel rejected this analogy. The Sun and stars have some properties in common, but for Hegel one cannot conclude from these similarities to the necessity that stars have planets. Hegel's arguments against the presence of life in the solar system were not directed against other planets, but rather against the Sun and Moon, both of which he said have a different

  13. Analogical Reasoning: A Review of the Literature.

    Science.gov (United States)

    Dawis, Rene V.; Siojo, Luis T.

    The mathematical and philosophical origins of "analogy" are described and their influence on the thinking of intelligence theorists is traced. Theories of intelligence and cognition bearing on analogical reasoning are examined, specifically those of Spearman, Thorndike, Guilford and Piaget. The analogy test item is shown to be a paradigm…

  14. Reasoning by Analogy in Constructing Mathematical Ideas.

    Science.gov (United States)

    English, Lyn D.

    A powerful way of understanding something new is by analogy with something already known. An analogy is defined as a mapping from one structure, which is already known (the base or source), to another structure that is to be inferred or discovered (the target). The research community has given considerable attention to analogical reasoning in the…

  15. Maximum Genus of Strong Embeddings

    Institute of Scientific and Technical Information of China (English)

    Er-ling Wei; Yan-pei Liu; Han Ren

    2003-01-01

    The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

  16. D(Maximum)=P(Argmaximum)

    CERN Document Server

    Remizov, Ivan D

    2009-01-01

    In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
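
    A hedged restatement of the identity behind the title, in assumed notation: for the maximum functional M(f) = max_{x ∈ K} f(x) on C(K), with K a metric compact set,

    ```latex
    \[
      \partial M(f) \;=\; \Bigl\{\, \mu \in P(K) \;:\; \operatorname{supp}\mu \subseteq \operatorname{Argmax}_{K} f \,\Bigr\},
    \]
    % i.e. the subdifferential at f is exactly the set of probability measures
    % concentrated on the maximizers of f.
    ```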

  17. Landau Gauge Fixing on GPUs

    CERN Document Server

    Cardoso, Nuno; Bicudo, Pedro; Oliveira, Orlando

    2012-01-01

    In this paper we present and explore the performance of Landau gauge fixing in GPUs using CUDA. We consider the steepest descent algorithm with Fourier acceleration, and compare the GPU performance with a parallel CPU implementation. Using $32^4$ lattice volumes, we find that the computational power of a single Tesla C2070 GPU is equivalent to approximately 256 CPU cores.

  18. Flat coalgebraic fixed point logics

    CERN Document Server

    Schröder, Lutz

    2010-01-01

    Fixed point logics have a wide range of applications in computer science, in particular in artificial intelligence and concurrency. The most expressive logics of this type are the mu-calculus and its relatives. However, popular fixed point logics tend to trade expressivity for simplicity and readability, and in fact often live within the single variable fragment of the mu-calculus. The family of such flat fixed point logics includes, e.g., CTL, the *-nesting-free fragment of PDL, and the logic of common knowledge. Here, we extend this notion to the generic semantic framework of coalgebraic logic, thus covering a wide range of logics beyond the standard mu-calculus including, e.g., flat fragments of the graded mu-calculus and the alternating-time mu-calculus (such as ATL), as well as probabilistic and monotone fixed point logics. Our main results are completeness of the Kozen-Park axiomatization and a timed-out tableaux method that matches EXPTIME upper bounds inherited from the coalgebraic mu-calculus but avo...

  19. Fixed drug eruption to tartrazine.

    Science.gov (United States)

    Orchard, D C; Varigos, G A

    1997-11-01

    An 11-year-old girl with a recurrent fixed drug eruption to tartrazine on the dorsum of the left hand is presented. Oral provocation tests to both the suspected food, an artificially coloured cheese crisp, and to tartrazine were positive. This case highlights the need to consider artificial flavours, colours and preservatives as potential culprits in classic drug eruptions.

  20. The Testability of Maximum Magnitude

    Science.gov (United States)

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.

    2012-12-01

    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.

  1. Alternative Multiview Maximum Entropy Discrimination.

    Science.gov (United States)

    Chao, Guoqing; Sun, Shiliang

    2016-07-01

    Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views, and then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.

  2. Analog circuit design art, science, and personalities

    CERN Document Server

    Williams, Jim

    1991-01-01

    Analog Circuit Design: Art, Science, and Personalities discusses the many approaches and styles in the practice of analog circuit design. The book is written in an informal yet informative manner, making it easily understandable to those new in the field. The selection covers the definition, history, current practice, and future direction of analog design; the practice proper; and the styles in analog circuit design. The book also includes the problems usually encountered in analog circuit design; approach to feedback loop design; and other different techniques and applications. The text is

  3. Analog and mixed-signal electronics

    CERN Document Server

    Stephan, Karl

    2015-01-01

    A practical guide to analog and mixed-signal electronics, with an emphasis on design problems and applications This book provides an in-depth coverage of essential analog and mixed-signal topics such as power amplifiers, active filters, noise and dynamic range, analog-to-digital and digital-to-analog conversion techniques, phase-locked loops, and switching power supplies. Readers will learn the basics of linear systems, types of nonlinearities and their effects, op-amp circuits, the high-gain analog filter-amplifier, and signal generation. The author uses system design examples to motivate

  4. Practical analog electronics for technicians

    CERN Document Server

    Kimber, W A

    2013-01-01

    'Practical Analog Electronics for Technicians' not only provides an accessible introduction to electronics, but also supplies all the problems and practical activities needed to gain hands-on knowledge and experience. This emphasis on practice is surprisingly unusual in electronics texts, and has already gained Will Kimber popularity through the companion volume, 'Practical Digital Electronics for Technicians'. Written to cover the Advanced GNVQ optional unit in electronics, this book is also ideal for BTEC National, A-level electronics and City & Guilds courses. Together with 'Practical Digit

  5. A Multi-Gigahertz Analog Transient Recorder Integrated Circuit

    CERN Document Server

    Kleinfelder, Stuart A

    2015-01-01

    A monolithic multi-channel analog transient recorder, implemented using switched capacitor sample-and-hold circuits and a high-speed analogically-adjustable delay-line-based write clock, has been designed, fabricated and tested. The 2.1 by 6.9 mm layout, in 1.2 micron CMOS, includes over 31,000 transistors and 2048 double polysilicon capacitors. The circuit contains four parallel channels, each with a 512 deep switched-capacitor sample-and-hold system. A 512 deep edge sensitive tapped active delay line uses look-ahead and 16 way interleaving to develop the 512 sample and hold clocks, each as little as 3.2 ns wide and 200 ps apart. Measurements of the device have demonstrated 5 GHz maximum sample rate, at least 350 MHz bandwidth, an extrapolated rms aperture uncertainty per sample of 0.7 ps, and a signal to rms noise ratio of 2000:1.

  6. Stimulation of chemiluminescence by synthetic muramyl dipeptide and analogs.

    Science.gov (United States)

    Masihi, K N; Azuma, I; Brehmer, W; Lange, W

    1983-04-01

    The effect on respiratory burst of murine splenic cells after in vitro exposure to synthetic muramyl dipeptide (MDP) and 6-O-acyl and quinonyl derivatives was studied at an early phase of interaction by luminol-dependent chemiluminescence (CL) in response to stimulation by zymosan. The MDP molecule enhanced CL, but the degree of CL response varied with the kinds of fatty acids introduced in the chemical structure of synthetic glycopeptide analogs. A 6-O-acyl derivative possessing an alpha-branched fatty acid chain, B30-MDP, stimulated maximum levels of CL activity. High CL responses were obtained with L8-MDP having a short chain of linear fatty acids and with QS-10-MDP-66 containing a ubiquinone compound. CL was also stimulated by MDP and its analogs in the spleen cells of nude mice lacking mature T lymphocytes, but the extent of stimulation was decreased compared with that of normal spleen cells.

  7. Transfer Between Analogies: How Solving One Analogy Problem Helps to Solve Another

    OpenAIRE

    Keane, Mark T.

    1995-01-01

    This paper deals with transfer between analogies; with what people acquire from one analogy problem-solving episode that can be re-applied to a subsequent analogy problem-solving episode. This issue must be resolved if we are to understand the nature of expertise and the appropriate use of analogy in education. There are two main explanations of what subjects acquire from an analogy problem-solving episode. The schema-induction hypothesis maintains that subjects acquire an abs...

  8. Using Analogical Problem Solving with Different Scaffolding Supports to Learn about Friction

    CERN Document Server

    Lin, Shih-Yin

    2016-01-01

    Prior research suggests that many students believe that the magnitude of the static frictional force is always equal to its maximum value. Here, we examine introductory students' ability to learn from analogical reasoning (with different scaffolding supports provided) between two problems that are similar in terms of the physics principle involved, but where one problem involves static friction, which often triggers the misleading notion. To help students work through the analogy deeply and contemplate whether the static frictional force was at its maximum value, students in different recitation classrooms received different scaffolding support. We discuss students' performance in different groups.

  9. Maximum-power quantum-mechanical Carnot engine.

    Science.gov (United States)

    Abe, Sumiyoshi

    2011-04-01

    In their work [J. Phys. A 33, 4427 (2000)], Bender, Brody, and Meister have shown by employing a two-state model of a particle confined in the one-dimensional infinite potential well that it is possible to construct a quantum-mechanical analog of the Carnot engine through changes of both the width of the well and the quantum state in a specific manner. Here, a discussion is developed about realizing the maximum power of such an engine, where the width of the well moves at low but finite speed. The efficiency of the engine at the maximum power output is found to be universal independently of any of the parameters contained in the model.

  10. Analog-to-digital conversion

    CERN Document Server

    Pelgrom, Marcel J. M

    2013-01-01

    This textbook is appropriate for use in graduate-level curricula in analog to digital conversion, as well as for practicing engineers in need of a state-of-the-art reference on data converters.  It discusses various analog-to-digital conversion principles, including sampling, quantization, reference generation, nyquist architectures and sigma-delta modulation.  This book presents an overview of the state-of-the-art in this field and focuses on issues of optimizing accuracy and speed, while reducing the power level. This new, second edition emphasizes novel calibration concepts, the specific requirements of new systems, the consequences of 45-nm technology and the need for a more statistical approach to accuracy.  Pedagogical enhancements to this edition include more than twice the exercises available in the first edition, solved examples to introduce all key, new concepts and warnings, remarks and hints, from a practitioner’s perspective, wherever appropriate.  Considerable background information and pr...

  11. Cacti with maximum Kirchhoff index

    OpenAIRE

    Wang, Wen-Rui; Pan, Xiang-Feng

    2015-01-01

    The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...

  12. Temporal variation of the total nitrogen concentration in aereal organs of nitrogen fixing and non fixing riparian species

    OpenAIRE

    Llinares, F.

    1992-01-01

    Changes in nitrogen concentration were determined periodically over a year in samples of Alnus glutinosa, Elaeagnus angustifolia, Populus x canadiensis and Ailanthus altissima leaves, petioles and branches. The maximum nitrogen percentage was found in the diazotrophic species (Alnus and Elaeagnus), and nitrogen retranslocation from branches was higher (2.5 times) in the non-fixing species.

  13. Statistical optimization for passive scalar transport: maximum entropy production vs. maximum Kolmogorov–Sinay entropy

    Directory of Open Access Journals (Sweden)

    M. Mihelich

    2014-11-01

    We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit unique maxima, denoted f_maxEP and f_maxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that f_maxEP and f_maxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that f_maxEP hardly depends on N, whereas f_maxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, f_maxEP(N) tends towards a non-zero value, while f_maxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that f_maxEP and f_maxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that f_maxEP and f_maxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.

  14. On Fixing number of Functigraphs

    OpenAIRE

    Fazil, Muhammad; Javaid, Imran; Murtaza, Muhammad

    2016-01-01

    The fixing number of a graph $G$ is the order of the smallest subset $S$ of its vertex set $V(G)$ such that stabilizer of $S$ in $G$, $\\Gamma_{S}(G)$ is trivial. Let $G_{1}$ and $G_{2}$ be disjoint copies of a graph $G$, and let $g:V(G_{1})\\rightarrow V(G_{2})$ be a function. A functigraph $F_{G}$ consists of the vertex set $V(G_{1})\\cup V(G_{2})$ and the edge set $E(G_{1})\\cup E(G_{2})\\cup \\{uv:v=g(u)\\}$. In this paper, we study the behavior of the fixing number in passing from $G$ to $F_{G}...

  15. Oral rehabilitation with fixed prosthesis

    OpenAIRE

    Watanabe Velásquez, Romel; Dpto. Académico Estomatología Rehabilitadora. Facultad Odontología. UNMSM. Lima, Perú.; Salcedo Moncada, Doris; Dpto. Académico Estomatología Rehabilitadora. Facultad Odontología. UNMSM. Lima, Perú.; Ochoa Tataje, Julio; Dpto. Académico Estomatología Rehabilitadora. Facultad Odontología. UNMSM. Lima, Perú; Horna Palomino, Hernán; Dpto. Académico Estomatología Rehabilitadora. Facultad Odontología. UNMSM. Lima, Perú.; Herrera Cisneros, Marco; Dpto. Académico Estomatología Rehabilitadora. Facultad Odontología. UNMSM. Lima, Perú.; Paz Fernández, Juan José; Dpto. Académico Estomatología Rehabilitadora. Facultad Odontología. UNMSM. Lima, Perú.

    2014-01-01

    This treatment was carried out in a 58-year-old male patient with a history of a car crash two years earlier; he lost several teeth and fractured others, and pulp necrosis and alteration of the occlusal plane followed. Because of the pathologies present, a multidisciplinary treatment plan was instituted, comprising periodontal, endodontic, surgical and orthodontic procedures and the installation of posts, crowns and fixed bridges. Treatment was carried out over five months.

  16. A fixed-point farrago

    CERN Document Server

    Shapiro, Joel H

    2016-01-01

    This text provides an introduction to some of the best-known fixed-point theorems, with an emphasis on their interactions with topics in analysis. The level of exposition increases gradually throughout the book, building from a basic requirement of undergraduate proficiency to graduate-level sophistication. Appendices provide an introduction to (or refresher on) some of the prerequisite material and exercises are integrated into the text, contributing to the volume’s ability to be used as a self-contained text. Readers will find the presentation especially useful for independent study or as a supplement to a graduate course in fixed-point theory. The material is split into four parts: the first introduces the Banach Contraction-Mapping Principle and the Brouwer Fixed-Point Theorem, along with a selection of interesting applications; the second focuses on Brouwer’s theorem and its application to John Nash’s work; the third applies Brouwer’s theorem to spaces of infinite dimension; and the fourth rests ...

  17. Predicting species' maximum dispersal distances from simple plant traits.

    Science.gov (United States)

    Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis

    2014-02-01

    Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits; dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.

  18. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
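
    A hedged sketch (names and data invented, not the authors' exact objective) of the quantity being maximized: the empirical mutual information between discrete classification responses and true labels, estimated from counts.

    ```python
    import numpy as np

    def mutual_information(responses, labels):
        """I(response; label) in nats, estimated from the empirical joint distribution."""
        joint = np.zeros((responses.max() + 1, labels.max() + 1))
        for r, y in zip(responses, labels):
            joint[r, y] += 1
        joint /= joint.sum()
        pr = joint.sum(axis=1, keepdims=True)   # marginal over responses
        py = joint.sum(axis=0, keepdims=True)   # marginal over labels
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log(joint[nz] / (pr @ py)[nz])))

    responses = np.array([0, 0, 1, 1, 1, 0])
    labels    = np.array([0, 0, 1, 1, 0, 0])
    print(mutual_information(responses, labels))  # larger when responses track labels
    ```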

  19. Analog computing by Brewster effect.

    Science.gov (United States)

    Youssefi, Amir; Zangeneh-Nejad, Farzad; Abdollahramezani, Sajjad; Khavasi, Amin

    2016-08-01

    Optical computing has emerged as a promising candidate for real-time and parallel continuous data processing. Motivated by recent progress in metamaterial-based analog computing [Science 343, 160 (2014), doi:10.1126/science.1242818], we theoretically investigate the realization of two-dimensional complex mathematical operations using rotated configurations, recently reported in [Opt. Lett. 39, 1278 (2014), doi:10.1364/OL.39.001278]. Breaking the reflection symmetry, such configurations can realize both even and odd Green's functions associated with spatial operators. Based on this theory and by using the Brewster effect, we demonstrate the realization of a first-order differentiator. Such an efficient wave-based computation method not only circumvents the major potential drawbacks of metamaterials, but also offers the most compact possible device compared to conventional bulky lens-based optical signal and data processors.

  20. The strong maximum principle revisited

    Science.gov (United States)

    Pucci, Patrizia; Serrin, James

    In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.

  1. Priming analogical reasoning with false memories.

    Science.gov (United States)

    Howe, Mark L; Garner, Sarah R; Threadgold, Emma; Ball, Linden J

    2015-08-01

    Like true memories, false memories are capable of priming answers to insight-based problems. Recent research has attempted to extend this paradigm to more advanced problem-solving tasks, including those involving verbal analogical reasoning. However, these experiments are constrained inasmuch as problem solutions could be generated via spreading activation mechanisms (much like false memories themselves) rather than using complex reasoning processes. In three experiments we examined false memory priming of complex analogical reasoning tasks in the absence of simple semantic associations. In Experiment 1, we demonstrated the robustness of false memory priming in analogical reasoning when backward associative strength among the problem terms was eliminated. In Experiments 2a and 2b, we extended these findings by demonstrating priming on newly created homonym analogies that can only be solved by inhibiting semantic associations within the analogy. Overall, the findings of the present experiments provide evidence that the efficacy of false memory priming extends to complex analogical reasoning problems.

  2. Indigenous Fixed Nitrogen on Mars: Implications for Habitability

    Science.gov (United States)

    Stern, J. C.; Sutter, B.; Navarro-Gonzalez, R.; McKay, C. P.; Freissinet, C.; Archer, D., Jr.; Eigenbrode, J. L.; Mahaffy, P. R.; Conrad, P. G.

    2015-12-01

    Nitrate has been detected in Mars surface sediments and aeolian deposits by the Sample Analysis at Mars (SAM) instrument on the Mars Science Laboratory Curiosity rover (Stern et al., 2015). This detection is significant because fixed nitrogen is necessary for life, a requirement that drove the evolution of N-fixing metabolism in life on Earth. The question remains as to the extent to which a primitive N cycle ever developed on Mars, and whether N is currently being deposited on the martian surface at a non-negligible rate. It is also necessary to consider processes that could recycle oxidized N back into the atmosphere, and how these processes may have changed the soil inventory of N over time. The abundance of fixed nitrogen detected as NO from thermal decomposition of nitrate is consistent with both delivery of nitrate via impact generated thermal shock early in martian history and dry deposition from photochemistry of thermospheric NO, occurring in the present. Processes that could recycle N back into the atmosphere may include nitrate reduction by Fe(II) in aqueous environments on early Mars, impact decomposition, and/or UV photolysis. In order to better understand the history of nitrogen fixation on Mars, we look to cycling of N in Mars analog environments on Earth such as the Atacama Desert and the Dry Valleys of Antarctica. In particular, we examine the ratio of nitrate to perchlorate (NO3-/ClO4-) in these areas compared to those calculated from data acquired on Mars.

  3. Design and Analysis of Reconfigurable Analog System

    Science.gov (United States)

    2011-02-01


  4. The equivalence of minimum entropy production and maximum thermal efficiency in endoreversible heat engines.

    Science.gov (United States)

    Haseli, Y

    2016-05-01

    The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
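
    For orientation, the textbook efficiencies of the Curzon-Ahlborn engine named above can be compared numerically with the Carnot limit. The sketch below uses standard formulas and illustrative reservoir temperatures, not values taken from the paper.

        import math

        def carnot_efficiency(t_hot, t_cold):
            # Reversible (maximum-efficiency) limit.
            return 1.0 - t_cold / t_hot

        def curzon_ahlborn_efficiency(t_hot, t_cold):
            # Efficiency of an endoreversible engine operated at maximum power output.
            return 1.0 - math.sqrt(t_cold / t_hot)

        t_hot, t_cold = 600.0, 300.0   # reservoir temperatures in kelvin (illustrative)
        print(f"Carnot:         {carnot_efficiency(t_hot, t_cold):.3f}")
        print(f"Curzon-Ahlborn: {curzon_ahlborn_efficiency(t_hot, t_cold):.3f}")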

  5. Maximum-likelihood analysis of the COBE angular correlation function

    Science.gov (United States)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q(sub rms-PS) and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q(sub rms-PS) = (15.7 +/- 2.6) exp (0.46(1 - n)) microK for fixed n. Our result is slightly smaller than and has a smaller statistical uncertainty than the 1992 estimate of Smoot et al.
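
    As a quick numerical illustration of the quoted fit (a sketch based only on the relation reported above, not on the original DMR analysis), the implied quadrupole amplitude can be evaluated for a few spectral indices:

        import math

        def q_rms_ps(n, q0=15.7, slope=0.46):
            """Quadrupole amplitude in microkelvin from the quoted fit
            Q = 15.7 * exp(0.46 * (1 - n)) at fixed spectral index n."""
            return q0 * math.exp(slope * (1.0 - n))

        for n in (0.8, 1.0, 1.2):
            print(f"n = {n:.1f}  ->  Q_rms-PS ~ {q_rms_ps(n):.1f} microK")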

  6. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.

  7. Robust hyperchaotic synchronization via analog transmission line

    Science.gov (United States)

    Sadoudi, S.; Tanougast, C.

    2016-02-01

    In this paper, a novel experimental chaotic synchronization technique via analog transmission is discussed. We demonstrate through Field-Programmable Gate Array (FPGA) implementation design the robust synchronization of two embedded hyperchaotic Lorenz generators interconnected with an analog transmission line. The basic idea of this work consists in combining a numerical generation of chaos and transmitting it with an analog signal. The numerical chaos allows to overcome the callback parameter mismatch problem and the analog transmission offers robust data security. As application, this technique can be applied to all families of chaotic systems including time-delayed chaotic systems.

  8. Fermilab accelerator control system: Analog monitoring facilities

    Energy Technology Data Exchange (ETDEWEB)

    Seino, K.; Anderson, L.; Smedinghoff, J.

    1987-10-01

    Thousands of analog signals are monitored in different areas of the Fermilab accelerator complex. For general purposes, analog signals are sent over coaxial or twinaxial cables with varying lengths, collected at fan-in boxes and digitized with 12 bit multiplexed ADCs. For higher resolution requirements, analog signals are digitized at sources and are serially sent to the control system. This paper surveys ADC subsystems that are used with the accelerator control systems and discusses practical problems and solutions, and it describes how analog data are presented on the console system.

  9. GOLD and the fixed ratio

    Directory of Open Access Journals (Sweden)

    Vestbo J

    2012-09-01

    Full Text Available Jørgen Vestbo, University of Manchester, Manchester, UK. I read with interest the paper entitled "Diagnosis of airway obstruction in the elderly: contribution of the SARA study" by Sorino et al in a recent issue of this journal.1 Being involved in the Global Initiative for Obstructive Lung Diseases (GOLD), it is nice to see the interest sparked by the GOLD strategy document. However, in the paper by Sorino et al, there are a few misunderstandings around GOLD and the fixed ratio (forced expiratory volume in 1 second/forced vital capacity < 0.70) that need clarification. View original paper by Sorino and colleagues.

  10. $\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs

    CERN Document Server

    van de Geer, Sara

    2012-01-01

    We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.

  11. Maximum Matchings via Glauber Dynamics

    CERN Document Server

    Jindal, Anant; Pal, Manjish

    2011-01-01

    In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
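
    The abstract's O(m log^2 n) algorithm is not reproduced here, but the flavour of a Glauber-dynamics chain on matchings can be sketched as a toy Metropolis sampler with a fugacity parameter; it samples matchings from a Gibbs-like distribution rather than computing a maximum matching, and the graph representation and parameters below are illustrative assumptions.

        import random

        def sample_matching(edges, n_steps=100000, fugacity=4.0, seed=0):
            """Toy Glauber/Metropolis chain on matchings of a graph given as a
            list of edges (u, v). A larger fugacity biases the chain toward
            larger matchings."""
            rng = random.Random(seed)
            matching = set()   # current matching, stored as a set of edges
            matched = set()    # vertices currently covered by the matching
            for _ in range(n_steps):
                u, v = e = edges[rng.randrange(len(edges))]
                if e in matching:
                    # Propose removing e; accept with probability min(1, 1/fugacity).
                    if rng.random() < min(1.0, 1.0 / fugacity):
                        matching.remove(e)
                        matched.difference_update((u, v))
                elif u not in matched and v not in matched:
                    # Propose adding e; accept with probability min(1, fugacity).
                    if rng.random() < min(1.0, fugacity):
                        matching.add(e)
                        matched.update((u, v))
            return matching

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        print(len(sample_matching(edges)))   # size of the sampled matching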

  12. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    F W Giacobbe

    2003-03-01

    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
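
    As a one-line consistency check of the quoted figure (using a nominal solar mass, which is an assumption here rather than a value from the paper), 2.69 × 10^30 kg does indeed correspond to about 1.35 solar masses:

        M_core = 2.69e30       # maximum iron core mass from the abstract, kg
        M_sun = 1.989e30       # nominal solar mass, kg (assumed)
        print(M_core / M_sun)  # ~1.35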

  13. Maximum entropy production in daisyworld

    Science.gov (United States)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  14. Utilization of nitrogen fixing trees

    Energy Technology Data Exchange (ETDEWEB)

    Brewbaker, J.L.; Beldt, R. van den; MacDicken, K.; Budowski, G.; Kass, D.C.L.; Russo, R.O.; Escalante, G.; Herrera, R.; Aranguren, J.; Arkcoll, D.B.; Doebereinger, J. (cord.)

    1983-01-01

    Six papers from the symposium are noted. Brewbaker, J.L., Beldt, R. van den, MacDicken, K. Fuelwood uses and properties of nitrogen-fixing trees, pp 193-204, (Refs. 15). Includes a list of 35 nitrogen-fixing trees of high fuelwood value. Budowski, G.; Kass, D.C.L.; Russo, R.O. Leguminous trees for shade, pp 205-222, (Refs. 68). Escalante, G., Herrera, R., Aranguren, J.; Nitrogen fixation in shade trees (Erythrina poeppigiana) in cocoa plantations in northern Venezuela, pp 223-230, (Refs. 13). Arkcoll, D.B.; Some leguminous trees providing useful fruits in the North of Brazil, pp 235-239, (Refs. 13). This paper deals with Parkia platycephala, Pentaclethra macroloba, Swartzia sp., Cassia leiandra, Hymenaea courbaril, Dipteryx odorata, Inga edulis, I. macrophylla, and I. cinnamonea. Baggio, A.J.; Possibilities of the use of Gliricidia sepium in agroforestry systems in Brazil, pp 241-243; (Refs. 15). Seiffert, N.F.; Biological nitrogen and protein production of Leucaena cultivars grown to supplement the nutrition of ruminants, pp 245-249, (Refs. 14). Leucaena leucocephala cv. Peru, L. campina grande (L. leucocephala), and L. cunningham (L. leucocephalae) were promising for use as browse by beef cattle in central Brazil.

  15. Fixed Target Collisions at STAR

    Science.gov (United States)

    Meehan, Kathryn C.

    2016-12-01

    The RHIC Beam Energy Scan (BES) program was proposed to look for the turn-off of signatures of the quark gluon plasma (QGP), search for a possible QCD critical point, and study the nature of the phase transition between hadronic and partonic matter. Previous results have been used to claim that the onset of deconfinement occurs at a center-of-mass energy of 7 GeV. Data from lower energies are needed to test if this onset occurs. The goal of the STAR Fixed-Target Program is to extend the collision energy range in BES II to energies that are likely below the onset of deconfinement. Currently, STAR has inserted a gold target into the beam pipe and conducted test runs at center-of-mass energies of 3.9 and 4.5 GeV. Tests have been done with both Au and Al beams. First physics results from a Coulomb potential analysis of Au + Au fixed-target collisions are presented and are found to be consistent with results from previous experiments. Furthermore, the Coulomb potential, which is sensitive to the Z of the projectile and degree of baryonic stopping, will be compared to published results from the AGS.

  16. Homotopies and the Universal Fixed Point Property

    DEFF Research Database (Denmark)

    Szymik, Markus

    2015-01-01

    A topological space has the fixed point property if every continuous self-map of that space has at least one fixed point. We demonstrate that there are serious restraints imposed by the requirement that there be a choice of fixed points that is continuous whenever the self-map varies continuously...

  17. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
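
    The background-only versus background-plus-source comparison described above can be illustrated with a generic Poisson likelihood-ratio sketch. This is not the Sherpa/MLE code itself; the counts, flat background, toy PSF-like source shape, and the omission of exposure handling are all simplifying assumptions.

        import math

        def poisson_loglike(counts, model):
            # Poisson log-likelihood up to a model-independent constant.
            return sum(c * math.log(m) - m for c, m in zip(counts, model))

        counts = [3, 8, 15, 9, 4]                # observed counts in a candidate region
        bkg = [4.0] * len(counts)                # fitted flat background model
        src = [0.5, 3.0, 9.0, 3.0, 0.5]          # Gaussian-like source model (toy PSF)

        ll_bkg = poisson_loglike(counts, bkg)
        ll_src = poisson_loglike(counts, [b + s for b, s in zip(bkg, src)])
        print("Delta log-likelihood (source vs background only):", ll_src - ll_bkg)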

  18. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  19. Three optimized and validated (using accuracy profiles) LC methods for the determination of pentamidine and new analogs in rat plasma.

    Science.gov (United States)

    Hambÿe, S; Stanicki, D; Colet, J-M; Aliouat, E M; Vanden Eynde, J J; Blankert, B

    2011-01-15

    Three novel LC-UV methods for the determination of pentamidine (PTMD) and two of its new analogs in rat plasma are described. The chromatographic conditions (wavelength, acetonitrile percentage in the mobile phase, internal standard) were optimized to have an efficient selectivity. A pre-step of extraction was simultaneously developed for each compound. For PTMD, a solid phase extraction (SPE) with Oasis(®) HLB cartridges was selected, while for the analogs we used protein precipitation with acetonitrile. SPE for PTMD gave excellent results in terms of extraction yield (99.7 ± 2.8) whereas the recoveries for the analogs were not so high but were reproducible as well (64.6 ± 2.6 and 36.8 ± 1.6 for analog 1 and 2, respectively). By means of a recent strategy based on accuracy profiles (β-expectation tolerance interval), the methods were successfully validated. β was fixed at 95% and the acceptability limits at ± 15% as recommended by the FDA. The method was successfully validated for PTMD (29.6-586.54 ng/mL), analog 1 (74.23-742.3 ng/mL) and analog 2 (178.12-890.6 ng/mL). The first concentration level tested was considered as the LLOQ (lower limit of quantification) for PTMD and analog 1 whereas for analog 2, the LLOQ was not the first level tested and was raised to 178.12 ng/mL.
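
    The accuracy-profile decision rule can be mimicked with a generic β-expectation tolerance interval on the relative errors at one concentration level. The sketch below assumes normally distributed results (in which case the β-expectation interval coincides with a prediction interval) and uses invented example data; it is not the authors' validation software.

        import statistics
        from scipy import stats

        def beta_expectation_interval(relative_errors, beta=0.95):
            """Mean relative error +/- t * s * sqrt(1 + 1/n), the usual
            beta-expectation tolerance (prediction) interval."""
            n = len(relative_errors)
            mean = statistics.mean(relative_errors)
            s = statistics.stdev(relative_errors)
            t = stats.t.ppf((1.0 + beta) / 2.0, n - 1)
            half_width = t * s * (1.0 + 1.0 / n) ** 0.5
            return mean - half_width, mean + half_width

        errors = [2.1, -3.4, 1.0, 4.2, -1.8, 0.5]   # relative errors (%) at one level (illustrative)
        low, high = beta_expectation_interval(errors)
        print(f"Accepted if [{low:.1f}, {high:.1f}] lies within +/-15%")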

  20. A physical analogy to fuzzy clustering

    DEFF Research Database (Denmark)

    Jantzen, Jan

    2004-01-01

    This tutorial paper provides an interpretation of the membership assignment in the fuzzy clustering algorithm fuzzy c-means. The membership of a data point to several clusters is shown to be analogous to the gravitational forces between bodies of mass. This provides an alternative way to explain the algorithm to students. The analogy suggests a possible extension of the fuzzy membership assignment equation.
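
    The membership assignment referred to above is the standard fuzzy c-means rule; a minimal sketch of it (a generic implementation, not code from the tutorial) is:

        def fcm_membership(distances, m=2.0):
            """Fuzzy c-means membership of one data point to each cluster,
            given its distances to the cluster centres (all assumed non-zero):
            u_i = 1 / sum_j (d_i / d_j)^(2 / (m - 1))."""
            p = 2.0 / (m - 1.0)
            return [1.0 / sum((d_i / d_j) ** p for d_j in distances)
                    for d_i in distances]

        # Closer centres receive larger memberships; the values sum to 1.
        print(fcm_membership([1.0, 2.0, 4.0]))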

  1. PEMETAAN ANALOGI PADA KONSEP ABSTRAK FISIKA

    Directory of Open Access Journals (Sweden)

    Nyoto Suseno

    2014-11-01

    Full Text Available Research in many settings has found that the majority of students share common difficulties with abstract physics concepts. Observations also show that lecturers have problems in implementing the teaching of abstract concepts in physics learning. The objective of this research is to find out how to overcome this problem. The research sites were physics education programs and senior high schools. The data were collected by questionnaire, observation and interview. The lecturers' way of dealing with this issue is to use analogy to make abstract concepts concrete. This approach is sound, because analogies are dynamic tools that facilitate understanding, rather than representations of correct and static explanations. Using analogies not only promotes profound understanding of abstract concepts, but also helps students overcome their misconceptions. However, the analogies used in teaching are not yet planned with care; they are used spontaneously, with the result that they are less than optimal. By planning and selecting the right analogy, the role of analogy can achieve optimal results. Therefore, it is important to map analogies of abstract concepts in physics learning.

  2. An Analog Computer for Electronic Engineering Education

    Science.gov (United States)

    Fitch, A. L.; Iu, H. H. C.; Lu, D. D. C.

    2011-01-01

    This paper describes a compact analog computer and proposes its use in electronic engineering teaching laboratories to develop student understanding of applications in analog electronics, electronic components, engineering mathematics, control engineering, safe laboratory and workshop practices, circuit construction, testing, and maintenance. The…

  3. A Mechanical Analogy for the Photoelectric Effect

    Science.gov (United States)

    Kovacevic, Milan S.; Djordjevich, Alexandar

    2006-01-01

    Analogy is a potent tool in the teacher's repertoire. It has been particularly well recognized in the teaching of science. However, careful planning is required for its effective application to prevent documented drawbacks when analogies are stretched too far. Befitting the occasion of the World Year of Physics commemorating Albert Einstein's 1905…

  4. Analogies in high school Brazilian chemistry textbooks

    Directory of Open Access Journals (Sweden)

    Rosária Justi

    2000-05-01

    Full Text Available This paper presents and discusses an analysis of the analogies presented in Brazilian high-school chemistry textbooks. The main aim of the analysis is to discuss whether such analogies can be considered good teaching models. From the results, some aspects concerning the teachers' role are discussed. Finally, some new research questions are emphasised.

  5. Antibacterial and Antibiofilm Activities of Makaluvamine Analogs

    Directory of Open Access Journals (Sweden)

    Bhavitavya Nijampatnam

    2014-09-01

    Full Text Available Streptococcus mutans is a key etiological agent in the formation of dental caries. The major virulence factor is its ability to form biofilms. Inhibition of S. mutans biofilms offers therapeutic prospects for the treatment and the prevention of dental caries. In this study, 14 analogs of makaluvamine, a marine alkaloid, were evaluated for their antibacterial activity against S. mutans and for their ability to inhibit S. mutans biofilm formation. All analogs contained the tricyclic pyrroloiminoquinone core of makaluvamines. The structural variations of the analogs are on the amino substituents at the 7-position of the ring and the inclusion of a tosyl group on the pyrrole ring N of the makaluvamine core. The makaluvamine analogs displayed biofilm inhibition with IC50 values ranging from 0.4 μM to 88 μM. Further, the observed bactericidal activity of the majority of the analogs was found to be consistent with the anti-biofilm activity, leading to the conclusion that the anti-biofilm activity of these analogs stems from their ability to kill S. mutans. However, three of the most potent N-tosyl analogs showed biofilm IC50 values at least an order of magnitude lower than that of bactericidal activity, indicating that the biofilm activity of these analogs is more selective and perhaps independent of bactericidal activity.

  6. Compressed Sensing of Analog Signals

    CERN Document Server

    Eldar, Yonina C

    2008-01-01

    A traditional assumption underlying most data converters is that the signal should be sampled at a rate which exceeds twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals possess a sparse structure so that a large part of the bandwidth is not exploited. In this paper, we consider a framework for utilizing this sparsity in order to sample such analog signals at a low rate. More specifically, we consider continuous-time signals that lie in a shift-invariant (SI) space generated by m kernels, so that any signal in the space can be expressed as an infinite linear combination of the shifted kernels. If the period of the underlying SI space is equal to T, then such signals can be perfectly reconstructed from samples at a rate of m/T. Here we treat the case in which only k out of the m generators are active, meaning that the signal actually lies in a lower dimensional space spanned by k generators. However,...

  7. Novel Analog For Muscle Deconditioning

    Science.gov (United States)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd. Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob

    2011-01-01

    Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and postflight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength has promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.

  8. An optical analog signal transmitter

    Energy Technology Data Exchange (ETDEWEB)

    Fudzita, K.; Itida, T.; Tanaka, Kh.

    1984-01-11

    An optical laser analog signal transmitter employing an amplitude-modulated subcarrier is patented; this transmitter performs stable and high quality transmission of information signals over great distances. A feature of the proposed transmitter is a special transmitter operational mode in which the light emission reflected off the connection point to the fiber optic conduit is sent back to the laser diode in a transient period. As a result, the critical mode of the generated emission is not influenced by the reflected signal. The transmitter consists of a laser diode with biasing near the cutoff point, an amplitude modulator with a subcarrier frequency oscillator, a section of flexible fiber-optic cable of length L, which connects the laser diode to the primary optical fiber conduit, and the connector itself. The subcarrier frequency may vary over wide ranges to establish the necessary correlation between the length of the light conduit section L and the return propagation time of the reflected light signal from the connection point. The difference between the lasing time of the light signal and the return time to the laser diode of the signal reflected off the connector is determined by the relation τ = 2nL/c − mτc, where L is the length of the connecting section; n is the refractive index of the optical fiber; c is the velocity of light; τc is the period of the high-frequency subcarrier signal; and m is an integer.
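
    The timing relation above can be evaluated directly. The sketch below uses illustrative values for n, m and the subcarrier period (they are assumptions, not figures from the patent) to show how the delay margin τ varies with the length of the connecting section:

        C = 3.0e8          # speed of light in vacuum, m/s

        def tau(L, n=1.5, m=1, tau_c=1.0e-8):
            """Delay margin tau = 2*n*L/c - m*tau_c for a connecting fibre of
            length L (m), refractive index n and subcarrier period tau_c (s)."""
            return 2.0 * n * L / C - m * tau_c

        for L in (0.5, 1.0, 2.0):   # metres (illustrative)
            print(f"L = {L:>4} m  ->  tau = {tau(L):.2e} s")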

  9. Demonstrative and non-demonstrative reasoning by analogy

    OpenAIRE

    Ippoliti, Emiliano

    2008-01-01

    The paper analizes a set of issues related to analogy and analogical reasoning, namely: 1) The problem of analogy and its duplicity; 2) The role of analogy in demonstrative reasoning; 3) The role of analogy in non-demonstrative reasoning; 4) The limits of analogy; 5) The convergence, particularly in multiple analogical reasoning, of these two apparently distinct aspects and its methodological and philosophical consequences. The paper, using example from number theory, argues for an heuristc c...

  10. Computational approaches to analogical reasoning current trends

    CERN Document Server

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  11. Enhancement of naked FIX minigene expression by chloroquine in mice

    Institute of Scientific and Technical Information of China (English)

    Hong-yan CHEN; Huan-zhang ZHU; Bin LU; Xuan XU; Ji-hua YAO; Qi SHEN; Jing-lun XUE

    2004-01-01

    AIM: To study the effect of chloroquine on the expression of human clotting factor IX (hFIX) in mice. METHODS: Hydrodynamics-based naked DNA plasmid administration was performed by tail vein injection of 10 μg of pCMVhFIX and chloroquine (0, 100, 200, and 500 μmol/L) in 2.2 mL of Ringer's solution within 6-7 s; the level and stability of hFIX expression, liver damage and toxicity were then examined. RESULTS: The maximum hFIX expression level was 4.4 ± 1.8 mg/L at 8 h after injection, and 9.7 ± 1.6 mg/L at 24 h only in the 200 μmol/L chloroquine-treated animals, which is 3-4 fold higher than that of the control (P<0.01). No significant difference was observed among the treated groups 3 d later. Transaminase levels and liver histological study showed that the liver damage was not related to chloroquine (P>0.05). CONCLUSION: Chloroquine can enhance and sustain exogenous gene expression in vivo without side effects under our experimental conditions.

  12. A Digital Coreless Maximum Power Point Tracking Circuit for Thermoelectric Generators

    Science.gov (United States)

    Kim, Shiho; Cho, Sungkyu; Kim, Namjae; Baatar, Nyambayar; Kwon, Jangwoo

    2011-05-01

    This paper describes a maximum power point tracking (MPPT) circuit for thermoelectric generators (TEG) without a digital controller unit. The proposed method uses an analog tracking circuit that samples the half point of the open-circuit voltage without a digital signal processor (DSP) or microcontroller unit for calculating the peak power point using iterative methods. The simulation results revealed that the MPPT circuit, which employs a boost-cascaded-with-buck converter, handled rapid variation of temperature and abrupt changes of load current; this method enables stable operation with high power transfer efficiency. The proposed MPPT technique is a useful analog MPPT solution for thermoelectric generators.
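
    The half-open-circuit-voltage rule exploited by the tracker follows from maximum power transfer for a source with internal resistance. The brief numeric sketch below is generic (the open-circuit voltage and internal resistance are assumed values, not the paper's circuit):

        def load_power(v, v_oc=2.0, r_int=1.0):
            # Power delivered by a TEG modelled as v_oc in series with r_int,
            # when its terminal voltage is held at v.
            return v * (v_oc - v) / r_int

        candidates = [i * 0.01 for i in range(201)]   # sweep 0 .. 2 V
        best = max(candidates, key=load_power)
        print(f"Maximum power at ~{best:.2f} V (= V_oc / 2)")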

  13. Simple fixed functional space maintainer.

    Science.gov (United States)

    Goenka, Puneet; Sarawgi, Aditi; Marwah, Nikhil; Gumber, Parvind; Dutta, Samir

    2014-01-01

    Premature loss of a primary tooth is one of the most common etiologies for malocclusion. Space maintainers are employed to prevent this complication. In the anterior region, esthetics is an important concern along with function and space management. A fiber-reinforced composite (FRC) retained space maintainer serves all these purposes efficiently and effectively. In addition, the technique is simple and the appliance is very comfortable inside the oral cavity. Here is a case of premature loss of an anterior primary tooth which was replaced by an FRC-retained esthetic functional space maintainer. The appliance was found to be functioning satisfactorily inside the oral cavity until the last visit (1 year). How to cite this article: Goenka P, Sarawgi A, Marwah N, Gumber P, Dutta S. Simple Fixed Functional Space Maintainer. Int J Clin Pediatr Dent 2014;7(3):225-228.

  14. Fixed Point and Aperiodic Tilings

    CERN Document Server

    Durand, Bruno; Shen, Alexander

    2008-01-01

    An aperiodic tile set was first constructed by R.Berger while proving the undecidability of the domino problem. It turned out that aperiodic tile sets appear in many topics ranging from logic (the Entscheidungsproblem) to physics (quasicrystals) We present a new construction of an aperiodic tile set that is based on Kleene's fixed-point construction instead of geometric arguments. This construction is similar to J. von Neumann self-reproducing automata; similar ideas were also used by P. Gacs in the context of error-correcting computations. The flexibility of this construction allows us to construct a ``robust'' aperiodic tile set that does not have periodic (or close to periodic) tilings even if we allow some (sparse enough) tiling errors. This property was not known for any of the existing aperiodic tile sets.

  15. Complex issues in accounting for fixed assets

    Directory of Open Access Journals (Sweden)

    Mykhaylo Luchko

    2013-11-01

    Full Text Available This paper considers complex issues in the accounting of business fixed assets. The main emphasis is on the change in the value of fixed assets over time. The author studied problems of the regulatory revaluation of fixed assets in accordance with the applicable accounting standards (regulations). Particular attention is paid to the revaluation and impairment of business fixed assets and their recording in the accounts. The author analyzed the complex issues in establishing the fair value of fixed assets. Attention is focused on the harmonization of accounting information with the tax calculations and reporting provided for by the Ukrainian Tax Code. In this context, the paper refers to tax differences (temporary and permanent) and establishes the tax differences arising from depreciation of fixed assets, impairment amounts and revaluation of fixed assets, together with the corresponding accounting entries.

  16. An Analog Earth Climate Model

    Science.gov (United States)

    Varekamp, J. C.

    2010-12-01

    The earth climate is broadly governed by the radiative power of the sun as well as the heat retention and convective cooling of the atmosphere. I have constructed an analog earth model for an undergraduate climate class that simulates mean climate using these three parameters. The ‘earth’ is a hollow, black, bronze sphere (4 cm diameter) mounted on a thin insulated rod, and illuminated by two opposite optic fibers, with light focused on the sphere by a set of lenses. The sphere is encased in a large double-walled aluminum cylinder (34 cm diameter by 26 cm high) with separate water cooling jackets at the top, bottom, and sides. The cylinder can be filled with a gas of choice at a variety of pressures or can be run in vacuum. The exterior is cladded with insulation, and the temperature of the sphere, atmosphere and walls is monitored with thermocouples. The temperature and waterflow of the three cooling jackets can be monitored to establish the energy output of the whole system; the energy input is the energy yield of the two optic fibers. A small IR transmissive lens at the top provides the opportunity to hook up the fiber of a hyper spectrometer to monitor the emission spectrum of the black ‘earth’ sphere. A pressure gauge and gas inlet-outlet system for flushing of the cell completes it. The heat yield of the cooling water at the top is the sum of the radiative and convective components, whereas the bottom jacket only carries off the radiative heat of the sphere. Undergraduate E&ES students at Wesleyan University have run experiments with dry air, pure CO2, N2 and Ar at 1 atmosphere, and a low vacuum run was accomplished to calibrate the energy input. For each experiment, the lights are flipped on, the temperature acquisition routine is activated, and the sphere starts to warm up until an equilibrium temperature has been reached. The lights are then flipped off and the cooling sequence towards ambient is registered. The energy input is constant for a given
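
    For the vacuum (radiation-only) runs mentioned above, the equilibrium temperature of the sphere can be estimated from a simple Stefan-Boltzmann balance. The input power and emissivity below are assumed values, and radiation received back from the cooled walls is neglected; this is a rough sketch, not the calibration of the actual apparatus.

        import math

        SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4

        def equilibrium_temperature(p_in, radius=0.02, emissivity=0.95):
            """Radiative equilibrium of the 4 cm sphere: p_in = emissivity * sigma * A * T^4.
            Neglects radiation returned by the water-cooled walls."""
            area = 4.0 * math.pi * radius ** 2
            return (p_in / (emissivity * SIGMA * area)) ** 0.25

        for p_in in (0.5, 1.0, 2.0):     # optical input power in watts (illustrative)
            print(f"P_in = {p_in} W  ->  T_eq ~ {equilibrium_temperature(p_in):.0f} K")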

  17. Analog regulation of metabolic demand

    Directory of Open Access Journals (Sweden)

    Muskhelishvili Georgi

    2011-03-01

    Full Text Available Abstract Background The 3D structure of the chromosome of the model organism Escherichia coli is one key component of its gene regulatory machinery. This type of regulation mediated by topological transitions of the chromosomal DNA can be thought of as an analog control, complementing the digital control, i.e. the network of regulation mediated by dedicated transcription factors. It is known that alterations in the superhelical density of chromosomal DNA lead to a rich pattern of differential expressed genes. Using a network approach, we analyze these expression changes for wild type E. coli and mutants lacking nucleoid associated proteins (NAPs from a metabolic and transcriptional regulatory network perspective. Results We find a significantly higher correspondence between gene expression and metabolism for the wild type expression changes compared to mutants in NAPs, indicating that supercoiling induces meaningful metabolic adjustments. As soon as the underlying regulatory machinery is impeded (as for the NAP mutants, this coherence between expression changes and the metabolic network is substantially reduced. This effect is even more pronounced, when we compute a wild type metabolic flux distribution using flux balance analysis and restrict our analysis to active reactions. Furthermore, we are able to show that the regulatory control exhibited by DNA supercoiling is not mediated by the transcriptional regulatory network (TRN, as the consistency of the expression changes with the TRN logic of activation and suppression is strongly reduced in the wild type in comparison to the mutants. Conclusions So far, the rich patterns of gene expression changes induced by alterations of the superhelical density of chromosomal DNA have been difficult to interpret. Here we characterize the effective networks formed by supercoiling-induced gene expression changes mapped onto reconstructions of E. coli's metabolic and transcriptional regulatory network. Our

  18. Namibian Analogs To Titan Dunes

    Science.gov (United States)

    Wall, Stephen D.; Lopes, R.; Kirk, R.; Stofan, E.; Farr, T.; Van der Ploeg, P.; Lorenz, R.; Radebaugh, J.

    2009-09-01

    Titan's equatorial dunes, observed in Cassini SAR, have been described as longitudinal, similar to longitudinal dunes in the Namib sand sea in southern Africa. Their "Y” junctions and the way they divert around topography are used as evidence of equatorial wind flow direction. In two instances of such diversion they exhibit overlying or crosshatched patterns in two distinct directions that have been interpreted as a transition to transverse dunes. Here we describe field observations of the Namibian dunes and these comparisons, we present images of the dunes from terrestrial SAR missions, and we discuss implications to both the Titan dunes and the wind regime that created them. Selected portions of the Namibian dunes resemble Titan's dunes in peak-to-peak distance and length. They are morphologically similar to Titan, and specific superficial analogs are common, but they also differ. For example, when Titan dunes encounter topography they either terminate abruptly, "climb” the upslope, or divert around; only the latter behavior is seen in remote sensing images of Namibia. Namib linear dunes do transition to transverse as they divert, but at considerably smaller wavelength, while at Titan the wavelengths are of the same scale. Crosshatching of similar-wavelength dunes does occur in Namibia, but not near obstacles. Many additional aeolian features that are seen at Namibia such as star dunes, serpentine ridges and scours have not been detected on Titan, although they might be below the Cassini SAR's 300-m resolution. These similarities and differences allow us to explore mechanisms of Titan dune formation, in some cases giving us clues as to what larger scale evidence to look for in SAR images. Viewed at similar resolution, they provide interesting comparisons with the Titan dunes, both in likeness and differences. A part of this work was carried out at JPL under contract with NASA.

  19. Analogies: Explanatory Tools in Web-Based Science Instruction

    Science.gov (United States)

    Glynn, Shawn M.; Taasoobshirazi, Gita; Fowler, Shawn

    2007-01-01

    This article helps designers of Web-based science instruction construct analogies that are as effective as those used in classrooms by exemplary science teachers. First, the authors explain what analogies are, how analogies foster learning, and what form analogies should take. Second, they discuss science teachers' use of analogies. Third, they…

  20. Advances in Analog Circuit Design 2015

    CERN Document Server

    Baschirotto, Andrea; Harpe, Pieter

    2016-01-01

    This book is based on the 18 tutorials presented during the 24th workshop on Advances in Analog Circuit Design. Expert designers present readers with information about a variety of topics at the frontier of analog circuit design, including low-power and energy-efficient analog electronics, with specific contributions focusing on the design of efficient sensor interfaces and low-power RF systems. This book serves as a valuable reference to the state-of-the-art, for anyone involved in analog circuit research and development. · Provides a state-of-the-art reference in analog circuit design, written by experts from industry and academia; · Presents material in a tutorial-based format; · Includes coverage of high-performance analog-to-digital and digital-to-analog converters, integrated circuit design in scaled technologies, and time-domain signal processing.

  1. A computational model of analogical reasoning

    Institute of Scientific and Technical Information of China (English)

    李波; 赵沁平

    1997-01-01

    A computational model of analogical reasoning is presented, which divides the analogical reasoning process into four subprocesses, i.e. reminding, elaboration, matching and transfer. For each subprocess, its role and the principles it follows are given. The model is discussed in detail, including salient feature-based reminding, relevance-directed elaboration, an improved matching model and a transfer model. The advantages of this model are summarized based on the results of BHARS, an analogical reasoning system implemented with this model.

  2. Analogies in science education: contributions and challenges

    Directory of Open Access Journals (Sweden)

    Maria da Conceição Duarte

    2005-03-01

    Full Text Available An analogy is a comparison between domains of knowledge that have similarities at the levels of characteristics and relationships. Several authors highlight the importance of this tool in the teaching and learning of difficult scientific concepts. Nevertheless, some problems associated with the use of analogies have been found. This paper aims at contributing to a better understanding of the use of analogies in science education, by means of a review of the state of the art regarding this matter. It takes into account their contribution to science education as well as the challenges for further research.

  3. Analog to Digital Conversion in Physical Measurements

    CERN Document Server

    Kapitaniak, T; Feudel, U; Grebogi, C

    1999-01-01

    There exist measuring devices where an analog input is converted into a digital output. Such converters can have a nonlinear internal dynamics. We show how measurements with such converting devices can be understood using concepts from symbolic dynamics. Our approach is based on a nonlinear one-to-one mapping between the analog input and the digital output of the device. We analyze the Bernoulli shift and the tent map which are realized in specific analog/digital converters. Furthermore, we discuss the sources of errors that are inevitable in physical realizations of such systems and suggest methods for error reduction.
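
    The tent map mentioned above, together with its binary symbolic itinerary, can be written down in a few lines. This is a generic illustration of the symbolic-dynamics viewpoint, not the converter circuits analysed in the paper:

        def tent(x):
            # Tent map on [0, 1]: the analog state update of the converter model.
            return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

        def symbolic_itinerary(x0, n_bits=8):
            """Digital output: one symbol per iteration, 0 if the state lies in
            [0, 0.5), 1 otherwise."""
            bits, x = [], x0
            for _ in range(n_bits):
                bits.append(0 if x < 0.5 else 1)
                x = tent(x)
            return bits

        print(symbolic_itinerary(0.3))   # e.g. [0, 1, 1, 0, ...]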

  4. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
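
    The Toeplitz/Levinson step described above amounts to solving the normal equations of linear prediction. The sketch below is a generic prediction-error-filter estimate using SciPy's Toeplitz solver on a toy signal; it is not the authors' receiver-function code, and the filter order and test signal are arbitrary choices.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def prediction_error_filter(x, order=8):
            """Error-predicting (prediction-error) filter of a signal x, obtained
            from its autocorrelation via the Toeplitz normal equations R a = r."""
            x = np.asarray(x, dtype=float)
            full = np.correlate(x, x, mode="full")
            r = full[len(x) - 1:]                      # autocorrelation lags 0..N-1
            a = solve_toeplitz(r[:order], r[1:order + 1])
            return np.concatenate(([1.0], -a))         # filter [1, -a1, ..., -ap]

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.standard_normal(512))        # toy correlated signal
        print(prediction_error_filter(x, order=4))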

  5. Fixed telephony evolution at CERN

    CERN Document Server

    CERN. Geneva

    2015-01-01

    The heart of CERN’s telephony infrastructure consists of the Alcatel IP-PBX that links CERN’s fixed line phones, Lync softphones and CERN’s GSM subscribers to low-cost local and international telephony services. The PABX infrastructure also supports the emergency “red telephones” in the LHC tunnel and provides vital services for the Fire and Rescue Service and the CERN Control Centre. Although still reliable, the Alcatel hardware is increasingly costly to maintain and looks increasingly outmoded in a market where open-source solutions are now dominant. After presenting an overview of the Alcatel PABX and the services it provides, including innovative solutions such as the Closed User Group for our mobile telephony services, we present a possible architecture for a software-based system designed to meet tomorrow’s communication needs and describe how the introduction of open-source call routers based on the SIP protocol and Session Border Controllers (SBC) could foster the introduction...

  6. Fixed drug eruptions with modafinil

    Directory of Open Access Journals (Sweden)

    Loknath Ghoshal

    2015-01-01

    Full Text Available Modafinil is a psychostimulant drug, which has been approved by the US Food and Drug Administration for the treatment of narcolepsy-associated excessive daytime sleepiness, sleep disorder related to shift work, and obstructive sleep apnea syndrome. However, at present it is being used as a lifestyle medicine; in India, it has been misused as an "over the counter" drug. Modafinil is known to have several cutaneous side effects. Fixed drug eruption (FDE) is a distinctive drug-induced reaction pattern characterized by recurrence of the eruption at the same site of the skin or mucous membrane with repeated systemic administration. Only two case reports describing modafinil-induced FDE exist in the literature to date. Here, we report two similar cases. The increasing use of this class of drug among medical personnel may pose a threat to its proper use and encourage subsequent abuse. There might be a considerable population using these drugs unaware of the possible adverse effects. Authorities should be more alert regarding the sale and distribution of such medicines.

  7. Fixed-combination treatments for intraocular hypertension in Chinese patients - focus on bimatoprost-timolol.

    Science.gov (United States)

    Fang, Yuan; Ling, Zhihong; Sun, Xinghuai

    2015-01-01

    Glaucoma is a common eye disease that can lead to irreversible vision loss if left untreated. The early diagnosis and treatment of primary open-angle glaucoma is challenging, and visual impairment in Chinese glaucoma patients is a serious concern. Most of these patients need more than one topical antiglaucoma agent to control their intraocular pressures (IOPs). In the People's Republic of China, the daily cost of different glaucoma medication varies greatly, and the treatment habits differ throughout the country. Prostaglandin analogs (PGAs) are recommended as first-line monotherapy, because of their efficacy and low risk of systemic side effects. Fixed-combination drops, particularly PGA-based fixed combinations, have recently been developed and used in patients with progression or who have failed to achieve their target IOPs. Here, we reviewed the current literature on the use of bimatoprost-timolol fixed combination (BTFC) in the People's Republic of China. BTFC has achieved good efficacy and tolerability in Chinese clinical trials. In addition, BTFC is more cost effective compared with other fixed combinations available in the People's Republic of China. Fixed-combination drops may offer benefits, such as keeping the ocular surface healthy, convenience of administration, and improvement in long-term adherence and quality of life. Therefore, BTFC has great potential for the treatment of Chinese glaucoma patients. However, the long-term efficacy of BTFC, comparisons of BTFC with other fixed-combination drugs, and treatment adherence and persistence with treatment in Chinese patients are unknown and will require further study.

  8. Use of analogy in learning scientific concepts.

    Science.gov (United States)

    Donnelly, C M; McDaniel, M A

    1993-07-01

    Four experiments compared learning of scientific concepts as expressed in either traditional literal form or through an analogy. Comprehension of basic-level details and inferential implications was measured through multiple-choice testing. In Experiment 1, literal or analogical renditions were presented in textual form only. In Experiment 2, text was accompanied by a dynamic video. In Experiment 3, the video and text literal rendition was compared with a text-only analogical rendition. In Experiment 4, subjects read only about a familiar domain. Subjects consistently answered basic-level questions most accurately when concepts were expressed literally, but answered inferential questions most accurately when concepts were expressed analogically. Analysis of individual differences (Experiment 2) indicated that this interaction strongly characterized the conceptual learning of science novices. The results are discussed within the framework of schema induction.

  9. Identifying Solar Analogs in the Kepler Field

    Science.gov (United States)

    Buzasi, Derek L.; Lezcano, Andrew; Preston, Heather L.

    2014-06-01

    Since human beings live on a planet orbiting a G2 V star, to us perhaps the most intrinsically interesting category of stars around which planets have been discovered is solar analogs. While Kepler has observed more than 26000 targets which have effective temperatures within 100 K of the Sun, many of these are not true solar analogs due to activity, surface gravity, metallicity, or other considerations. Here we combine ground-based measurements of effective temperature and metallicity with data on rotational periods and surface gravities derived from 16 quarters of Kepler observations to produce a near-complete sample of solar analogs in the Kepler field. We then compare the statistical distribution of stellar physical parameters, including activity level, for subsets of solar analogs consisting of KOIs and those with no detected exoplanets. Finally, we produce a list of potential solar twins in the Kepler field.

  10. Synthesis and Biological Activity of Philanthotoxin Analogs

    Institute of Scientific and Technical Information of China (English)

    Yong An ZHANG; Ke Zhong LIU; Deng Yuan WANG; Yu Zhu WANG; Liang Jian QU; Chang Jin ZHU

    2006-01-01

    The synthesis of four analogs of philanthotoxin is described. The preliminary bioassay showed that these compounds all had good insecticidal activities, and the compound 6a had the best killing effect.

  11. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
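
    The differentiation step described above can be reproduced numerically with a single-diode panel model. The parameters below (light current, saturation current, thermal voltage) are illustrative assumptions, not data from the article, and a simple voltage scan stands in for the analytic derivative:

        import math

        def panel_current(v, i_l=5.0, i_0=1e-9, v_t=1.5):
            # Single-diode model: light-generated current minus diode current.
            return i_l - i_0 * (math.exp(v / v_t) - 1.0)

        def maximum_power(v_max=40.0, steps=4000):
            """Scan the voltage range and return (V_mp, I_mp, P_max)."""
            best = max((v_max * k / steps for k in range(steps + 1)),
                       key=lambda v: v * panel_current(v))
            return best, panel_current(best), best * panel_current(best)

        v_mp, i_mp, p_max = maximum_power()
        print(f"V_mp = {v_mp:.2f} V, I_mp = {i_mp:.2f} A, P_max = {p_max:.1f} W")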

  12. Fixed Export Costs and Export Behavior

    OpenAIRE

    Castro, Luis; Li, Ben; Keith E. Maskus; Xie, Yiqing

    2014-01-01

    This paper provides a direct test of how fixed export costs and productivity jointly determine firm-level export behavior. Using Chilean data, we construct indices of fixed export costs for each industry-region-year triplet and match them to domestic firms. Our empirical results show that firms facing higher fixed export costs are less likely to export, while those with higher productivity export more. These outcomes are the foundation of the widely-used sorting mechanism in the trade models ...

  13. Fixed-point-like theorems on subspaces

    Directory of Open Access Journals (Sweden)

    Bernard Cornet

    2004-08-01

    Full Text Available We prove a fixed-point-like theorem for multivalued mappings defined on the finite Cartesian product of Grassmannian manifolds and convex sets. Our result generalizes two different kinds of theorems: the fixed-point-like theorem by Hirsch et al. (1990) or Husseini et al. (1990), and the fixed-point theorem by Gale and Mas-Colell (1975) (which generalizes Kakutani's theorem (1941)).

  14. Cross Service Fixed-Wing Cost Estimation

    Science.gov (United States)

    2016-05-17

    The end product of this project will be a method for any service to estimate fixed-wing costs for sorties, for use in mission cost estimation. (TRAC-M-TR-16-021, MAJ Jarrod S. Shingleton, TRADOC Analysis Center, Monterey, California.)

  15. Analogy between dislocation creep and relativistic cosmology

    OpenAIRE

    Montemayor-Aldrete, J.A.; Muñoz-Andrade, J.D.; Mendoza-Allende, A.; Montemayor-Varela, A.

    2005-01-01

    A formal, physical analogy between plastic deformation, mainly dislocation creep, and Relativistic Cosmology is presented. The physical analogy between eight expressions for dislocation creep and Relativistic Cosmology has been obtained. By comparing the mathematical expressions and by using a physical analysis, two new equations have been obtained for dislocation creep. Also, four new expressions have been obtained for Relativistic Cosmology. From these four new equations, one may determine...

  16. The analogy between stereo depth and brightness.

    Science.gov (United States)

    Brookes, A; Stevens, K A

    1989-01-01

    Apparent depth in stereograms exhibits various simultaneous-contrast and induction effects analogous to those reported in the luminance domain. This behavior suggests that stereo depth, like brightness, is reconstructed, i.e., recovered from higher-order spatial derivatives or differences of the original signal. The extent to which depth is analogous to brightness is examined. There are similarities in terms of contrast effects but dissimilarities in terms of the lateral inhibition effects traditionally attributed to underlying spatial-differentiation operators.

  17. AMiBA Wideband Analog Correlator

    CERN Document Server

    Li, Chao-Te; Wilson, Warwick; Lin, Kai-Yang; Chen, Ming-Tang; Ho, P T P; Chen, Chung-Cheng; Han, Chih-Chiang; Oshiro, Peter; Martin-Cocher, Pierre; Chang, Chia-Hao; Chang, Shu-Hao; Altamirano, Pablo; Jiang, Homin; Chiueh, Tzi-Dar; Lien, Chun-Hsien; Wang, Huei; Wei, Ray-Ming; Yang, Chia-Hsiang; Peterson, Jeffrey B; Chang, Su-Wei; Huang, Yau-De; Hwang, Yuh-Jing; Kesteven, Michael; Koch, Patrick; Liu, Guo-Chin; Nishioka, Hiroaki; Umetsu, Keiichi; Wei, Tashun; Wu, Jiun-Huei Proty

    2010-01-01

    A wideband analog correlator has been constructed for the Yuan-Tseh Lee Array for Microwave Background Anisotropy. Lag correlators using analog multipliers provide large bandwidth and moderate frequency resolution. Broadband IF distribution, backend signal processing and control are described. Operating conditions for optimum sensitivity and linearity are discussed. From observations, a large effective bandwidth of around 10 GHz has been shown to provide sufficient sensitivity for detecting cosmic microwave background variations.

  18. Protein Structure Prediction with Visuospatial Analogy

    Science.gov (United States)

    Davies, Jim; Glasgow, Janice; Kuo, Tony

    We show that visuospatial representations and reasoning techniques can be used as a similarity metric for analogical protein structure prediction. Our system retrieves pairs of α-helices based on contact map similarity, then transfers and adapts the structure information to an unknown helix pair, showing that similar protein contact maps predict similar 3D protein structure. The success of this method provides support for the notion that changing representations can enable similarity metrics in analogy.
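    As a rough illustration of the contact-map similarity step described above (the 8 Å cutoff and the Jaccard-style score are assumptions, not details from the paper), a contact map can be computed from C-alpha coordinates and two maps compared as follows.

```python
import numpy as np

def contact_map(ca_coords, cutoff=8.0):
    """Binary contact map from an (n, 3) array of C-alpha coordinates."""
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    return (d < cutoff).astype(int)

def map_similarity(cm_a, cm_b):
    """Jaccard overlap of contacts between two equally sized maps."""
    inter = np.logical_and(cm_a, cm_b).sum()
    union = np.logical_or(cm_a, cm_b).sum()
    return inter / union if union else 1.0

# toy usage with random coordinates standing in for two helix pairs
rng = np.random.default_rng(0)
a, b = rng.normal(size=(20, 3)) * 5, rng.normal(size=(20, 3)) * 5
print(map_similarity(contact_map(a), contact_map(b)))
```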

  19. Analog baseband circuits for sensor systems

    OpenAIRE

    2008-01-01

    This thesis is composed of six publications and an overview of the research topic, which also summarizes the work. The research presented in this thesis focuses on research into analog baseband circuits for sensor systems. The research is divided into three different topics: the integration of analog baseband circuits into a radio receiver for sensor applications; the integration of an ΔΣ modulator A/D converter into a GSM/WCDMA radio receiver for mobile phones, and the integration of algorit...

  20. Automated Integrated Analog Filter Design Issues

    OpenAIRE

    2015-01-01

    An analysis of modern automated integrated analog circuits design methods and their use in integrated filter design is done. Current modern analog circuits automated tools are based on optimization algorithms and/or new circuit generation methods. Most automated integrated filter design methods are only suited to gmC and switched current filter topologies. Here, an algorithm for an active RC integrated filter design is proposed, that can be used in automated filter designs. The algorithm is t...

  1. Synthetic heparin-binding growth factor analogs

    Science.gov (United States)

    Pena, Louis A.; Zamora, Paul; Lin, Xinhua; Glass, John D.

    2007-01-23

    The invention provides synthetic heparin-binding growth factor analogs having at least one peptide chain that binds a heparin-binding growth factor receptor, covalently bound to a hydrophobic linker, which is in turn covalently bound to a non-signaling peptide that includes a heparin-binding domain. The synthetic heparin-binding growth factor analogs are useful as soluble biologics or as surface coatings for medical devices.

  2. Learning Domain Theories via Analogical Transfer

    Science.gov (United States)

    2007-01-01

    Shive & Weber 1982). In the linear kinematics section of the textbook used for this study (Giancoli 1991), there are eight worked out examples... is analogous to the dynamics of linear motion” (p. 197, Giancoli 1991). This is common practice in textbooks, and analogies between domains form the... and Worked Solution All problems and worked solutions used in this work were taken from the same physics textbook (Giancoli 1991). Problems are

  3. Analogical Learning and Automated Rule Constructions

    Institute of Scientific and Technical Information of China (English)

    周哈阳

    1991-01-01

    This paper describes some experiments of analogical learning and automated rule construction. The present investigation focuses on knowledge acquisition, learning by analogy, and knowledge retention. The developed system initially learns from scratch, gradually acquires knowledge from its environment through trial-and-error interaction, incrementally augments its knowledge base, and analogically solves new tasks in a more efficient and direct manner.

  4. cuLGT: Lattice Gauge Fixing on GPUs

    CERN Document Server

    Vogt, Hannes

    2014-01-01

    We adopt CUDA-capable Graphic Processing Units (GPUs) for Landau, Coulomb and maximally Abelian gauge fixing in 3+1 dimensional SU(3) and SU(2) lattice gauge field theories. A combination of simulated annealing and overrelaxation is used to aim for the global maximum of the gauge functional. We use a fine grained degree of parallelism to achieve the maximum performance: instead of the common 1 thread per site strategy we use 4 or 8 threads per lattice site. Here, we report on an improved version of our publicly available code (www.cuLGT.com and github.com/culgt) which again increases performance and is much easier to include in existing code. On the GeForce GTX 580 we achieve up to 470 GFlops (utilizing 80% of the theoretical peak bandwidth) for the Landau overrelaxation code.

  5. Classification system adopted for fixed cutter bits

    Energy Technology Data Exchange (ETDEWEB)

    Winters, W.J.; Doiron, H.H.

    1988-01-01

    The drilling industry has begun adopting the 1987 International Association of Drilling Contractors' (IADC) method for classifying fixed cutter drill bits. By studying the classification codes on bit records and properly applying the new IADC fixed cutter dull grading system to recently run bits, the end-user should be able to improve the selection and usage of fixed cutter bits. Several users are developing databases for fixed cutter bits in an effort to relate field performance to some of the more prominent bit design characteristics.

  13. Fixed point theorems in CAT(0) spaces and R-trees

    Directory of Open Access Journals (Sweden)

    Kirk WA

    2004-01-01

    Full Text Available We show that if U is a bounded open set in a complete CAT(0) space X, and if f : cl(U) → X is nonexpansive, then f always has a fixed point if there exists p ∈ U such that x ∉ [p, f(x)) for all x ∈ ∂U. It is also shown that if K is a geodesically bounded closed convex subset of a complete R-tree X with int(K) ≠ ∅, and if f : K → X is a continuous mapping for which x ∉ [p, f(x)) for some p ∈ int(K) and all x ∈ ∂K, then f has a fixed point. It is also noted that a geodesically bounded complete R-tree has the fixed point property for continuous mappings. These latter results are used to obtain variants of the classical fixed edge theorem in graph theory.

  7. Fixed Simplex Property for Retractable Complexes

    Directory of Open Access Journals (Sweden)

    Zapart Anna

    2010-01-01

    Full Text Available Abstract Retractable complexes are defined in this paper. It is proved that they have the fixed simplex property for simplicial maps. This implies the theorem of Wallace and the theorem of Rival and Nowakowski for finite trees: every simplicial map transforming vertices of a tree into itself has a fixed vertex or a fixed edge. This also implies the Hell and Nešetřil theorem: any endomorphism of a dismantlable graph fixes some clique. Properties of recursively contractible complexes are examined.

  8. Development of analogical problem-solving skill.

    Science.gov (United States)

    Holyoak, K J; Junn, E N; Billman, D O

    1984-12-01

    3 experiments were performed to assess children's ability to solve a problem by analogy to a superficially dissimilar situation. Preschoolers and fifth and sixth graders were asked to solve a problem that allowed multiple solutions. Some subjects were first read a story that included an analogous problem and its solution. When the mapping between the relations involved in the corresponding solutions was relatively simple, and the corresponding instruments were perceptually and functionally similar, even preschoolers were able to use the analogy to derive a solution to the transfer problem (Experiment 1). Furthermore, salient similarity of the instruments was neither sufficient (Experiment 2) nor necessary (Experiment 3) for success by preschool subjects. When the story analog mapped well onto the transfer problem, 4-year-olds were often able to generate a solution that required transformation of an object with little perceptual or semantic similarity to the instrument used in the base analog (Experiment 3). The older children used analogies in a manner qualitatively similar to that observed in comparable studies with adults (Experiment 1), whereas the younger children exhibited different limitations.

  9. Analog modelling of obduction processes

    Science.gov (United States)

    Agard, P.; Zuo, X.; Funiciello, F.; Bellahsen, N.; Faccenna, C.; Savva, D.

    2012-04-01

    Obduction corresponds to one of plate tectonics oddities, whereby dense, oceanic rocks (ophiolites) are presumably 'thrust' on top of light, continental ones, as for the short-lived, almost synchronous Peri-Arabic obduction (which took place along thousands of km from Turkey to Oman in c. 5-10 Ma). Analog modelling experiments were performed to study the mechanisms of obduction initiation and test various triggering hypotheses (i.e., plate acceleration, slab hitting the 660 km discontinuity, ridge subduction; Agard et al., 2007). The experimental setup comprises (1) an upper mantle, modelled as a low-viscosity transparent Newtonian glucose syrup filling a rigid Plexiglas tank and (2) high-viscosity silicone plates (Rhodrosil Gomme with PDMS iron fillers to reproduce densities of continental or oceanic plates), located at the centre of the tank above the syrup to simulate the subducting and the overriding plates - and avoid friction on the sides of the tank. Convergence is simulated by pushing on a piston at one end of the model with velocities comparable to those of plate tectonics (i.e., in the range 1-10 cm/yr). The reference set-up includes, from one end to the other (~60 cm): (i) the piston, (ii) a continental margin containing a transition zone to the adjacent oceanic plate, (iii) a weakness zone with variable resistance and dip (W), (iv) an oceanic plate - with or without a spreading ridge, (v) a subduction zone (S) dipping away from the piston and (vi) an upper, active continental margin, below which the oceanic plate is being subducted at the start of the experiment (as is known to have been the case in Oman). Several configurations were tested and over thirty different parametric tests were performed. Special emphasis was placed on comparing different types of weakness zone (W) and the extent of mechanical coupling across them, particularly when plates were accelerated. Displacements, together with along-strike and across-strike internal deformation in all

  10. Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs

    CERN Document Server

    Italiano, Giuseppe F

    2010-01-01

    In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent int...

  11. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  12. Ciprofloxacin induced fixed drug eruption

    Directory of Open Access Journals (Sweden)

    M. Ravishankar

    2014-12-01

    Full Text Available Fixed drug eruption (FDE) is a clinical entity occurring in the same site or sites each time the drug is administered. Acute lesions appear as sharply marginated erythematous plaques, which are usually found on lips, genitalia, abdomen, and legs. The eruptions usually occur within hours of administration of the offending agent and resolve spontaneously without scarring after a few weeks of onset. The most common drugs causing FDE are sulfonamides, tetracyclines, salicylates, barbiturates, doxycycline, fluconazole, clarithromycin, etc. Ciprofloxacin, a widely used fluoroquinolone antimicrobial, induces cutaneous adverse drug reactions (ADRs) in about 1-2% of treated patients. Urticaria, angioedema, maculopapular exanthems, and photosensitivity are the most frequently documented cutaneous adverse reactions. In this case report, the patient, soon after taking ciprofloxacin tablets, developed itching of the lips, palms and scrotal region. On continuing the treatment, the next day he developed fluid-filled lesions over the palm and knuckles, and hyperpigmentation. He gave a history of severe itching and rashes in the scrotal region, and of similar complaints in the previous month after taking ciprofloxacin. There was no history of intake of any other medication. On examination, bullous lesions and pustules in the finger webs, hyperpigmentation on the knuckles, and scrotal erosions were seen. In the present case, the patient presented with FDE immediately after oral administration of ciprofloxacin and was completely cured after stopping the drug and taking adequate treatment. According to the Naranjo ADR probability scale (score=8), this ADR is categorized as a “probable” reaction to the drug. [Int J Basic Clin Pharmacol 2014; 3(6): 1096-1097]

  13. Factors influencing bonding fixed restorations

    Directory of Open Access Journals (Sweden)

    Medić Vesna

    2008-01-01

    Full Text Available INTRODUCTION Crown displacement often occurs because the features of tooth preparations do not counteract the forces directed against restorations. OBJECTIVE The purpose of this study was to evaluate the effect of preparation designs on retention and resistance of fixed restorations. METHOD The study was performed on 64 differently sized stainless steel dies. The caps used for evaluating retention were made of stainless steel for each die. After cementing the caps on the experimental dies, the tensile forces necessary to separate the cemented caps from the dies were measured. Caps made of a silver-palladium alloy, with a slope of 60° to the longitudinal axis formed on the occlusal surface, were used for evaluating resistance. A sudden drop in load pressure recorded by the test machine indicated failure for that cap. RESULTS A significant difference was found between the tensile forces required to remove the caps from the dies with different length (p<0.05) and different taper (p<0.01). The greatest retentive strengths (2579.2 N and 2989.8 N) were noticed in experimental dies with the greatest length and smallest taper. No statistically significant (p>0.05) differences were found between tensile loads for caps cemented on dies with different diameter. Although there was an apparent slight increase in resistance values for caps on dies with smaller tapers, the increase in resistance for those preparation designs was not statistically significant. There was a significant difference among the resistance values for caps on dies with different length (p<0.01) and diameter (p<0.05). CONCLUSION In the light of the results obtained, it could be reasonably concluded that retention and resistance of the restoration are in inverse proportion to the convergence angle of the prepared teeth. But, at a constant convergence angle, retention and resistance increase with rising length and diameter.

  14. The inverse maximum dynamic flow problem

    Institute of Scientific and Technical Information of China (English)

    BAGHERIAN; Mehri

    2010-01-01

    We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.

  15. Approximate Equilibrium Problems and Fixed Points

    Directory of Open Access Journals (Sweden)

    H. Mazaheri

    2013-01-01

    Full Text Available We find a common element of the set of fixed points of a map and the set of solutions of an approximate equilibrium problem in a Hilbert space. Then, we show that one of the sequences weakly converges. Also we obtain some theorems about equilibrium problems and fixed points.

  16. Approximate fixed point of Reich operator

    Directory of Open Access Journals (Sweden)

    M. Saha

    2013-01-01

    Full Text Available In the present paper, we study the existence of an approximate fixed point for the Reich operator, together with the property that the ε-fixed points are concentrated in a set whose diameter tends to zero as ε → 0.

  17. On computing fixed points for generalized sandpiles

    OpenAIRE

    Formenti, Enrico; Masson, Benoît

    2004-01-01

    Presented at DMCS 2004 (Turku, FINLAND). Long version with proofs published in International Journal of Unconventional Computing, 2006. We prove fixed point results for sandpiles starting with arbitrary initial conditions. We give an effective algorithm for computing such fixed points, and we refine it in the particular case of SPM.

  18. Magnetic Fixed Points and Emergent Supersymmetry

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mojaza, Matin; Pica, Claudio;

    2013-01-01

    We establish in perturbation theory the existence of fixed points along the renormalization group flow for QCD with an adjoint Weyl fermion and scalar matter reminiscent of magnetic duals of QCD [1-3]. We classify the fixed points by analyzing their basin of attraction. We discover that among the...

  19. Fixed-Response Questions with a Difference.

    Science.gov (United States)

    Johnstone, Alex H.; Ambusaidi, Abdullah

    2002-01-01

    Offers three types of fixed-response questions that are designed to overcome drawbacks appearing in the conventional forms of fixed-response questions such as not allowing the examiner to investigate reasoning, background, or prevent guessing. (Contains 14 references.) (Author/YDS)

  20. Fixed Point Curve for Weakly Inward Contractions and Approximate Fixed Point Property

    Institute of Scientific and Technical Information of China (English)

    P. Riyas; K. T. Ravindran

    2013-01-01

    In this paper, we discuss the concept of fixed point curve for linear interpolations of weakly inward contractions and establish a necessary condition for a nonexpansive mapping to have the approximate fixed point property.

  1. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
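    The closing remark, that the maximum permissible voltage fixes the total conductor length of an SFCL design, amounts to a simple division. The sketch below uses the 0.52 V/cm figure quoted for the 12 mm AMSC tape at a 100 ms quench; the fault voltage and safety margin are assumed design inputs, not values from the paper.

```python
# Sizing the coated-conductor length of an SFCL from the maximum permissible voltage
max_permissible_v_per_cm = 0.52   # V/cm, 12 mm AMSC CC at 100 ms quench (from the abstract)
fault_voltage_kv = 10.5           # assumed voltage across the limiter during a fault
safety_factor = 1.5               # assumed design margin

min_length_m = fault_voltage_kv * 1e3 / (max_permissible_v_per_cm * 100.0) * safety_factor
print(f"Minimum CC length: {min_length_m:.0f} m")
```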

  2. Magnetic Fixed Points and Emergent Supersymmetry

    CERN Document Server

    Antipin, Oleg; Pica, Claudio; Sannino, Francesco

    2011-01-01

    We establish the existence of fixed points for certain gauge theories candidate to be magnetic duals of QCD with one adjoint Weyl fermion. In the perturbative regime of the magnetic theory the existence of a very large number of fixed points is unveiled. We classify them by analyzing their basin of attraction. The existence of several nonsupersymmetric fixed points for the magnetic gauge theory lends further support towards the existence of gauge-gauge duality beyond supersymmetry. We also discover that among these very many fixed points there are supersymmetric ones emerging from a generic nonsupersymmetric renormalization group flow. We therefore conclude that supersymmetry naturally emerges as a fixed point theory from a nonsupersymmetric Lagrangian without the need for fine-tuning of the bare couplings. Our results suggest that supersymmetry can be viewed as an emergent phenomenon in field theory. In particular there should be no need for fine-tuning the bare couplings when performing Lattice simulations ...

  3. Fixed points of occasionally weakly biased mappings

    Directory of Open Access Journals (Sweden)

    Y. Mahendra Singh, M. R. Singh

    2012-09-01

    Full Text Available Common fixed point results due to Pant et al. [Pant et al., Weak reciprocal continuity and fixed point theorems, Ann Univ Ferrara, 57(1, 181-190 (2011] are extended to a class of non commuting operators called occasionally weakly biased pair[ N. Hussain, M. A. Khamsi A. Latif, Commonfixed points for JH-operators and occasionally weakly biased pairs under relaxed conditions, Nonlinear Analysis, 74, 2133-2140 (2011]. We also provideillustrative examples to justify the improvements. Abstract. Common fixed point results due to Pant et al. [Pant et al., Weakreciprocal continuity and fixed point theorems, Ann Univ Ferrara, 57(1, 181-190 (2011] are extended to a class of non commuting operators called occasionally weakly biased pair[ N. Hussain, M. A. Khamsi A. Latif, Common fixed points for JH-operators and occasionally weakly biased pairs under relaxed conditions, Nonlinear Analysis, 74, 2133-2140 (2011]. We also provide illustrative examples to justify the improvements.

  4. Comparison of Nootropic and Neuroprotective Features of Aryl-Substituted Analogs of Gamma-Aminobutyric Acid.

    Science.gov (United States)

    Tyurenkov, I N; Borodkina, L E; Bagmetova, V V; Berestovitskaya, V M; Vasil'eva, O S

    2016-02-01

    GABA analogs containing phenyl (phenibut) or para-chlorophenyl (baclofen) substituents demonstrated nootropic activity in a dose of 20 mg/kg: they improved passive avoidance conditioning, decelerated its natural extinction, and exerted antiamnestic effect on the models of amnesia provoked by scopolamine or electroshock. Tolyl-containing GABA analog (tolibut, 20 mg/kg) exhibited antiamnestic activity only on the model of electroshock-induced amnesia. Baclofen and, to a lesser extent, tolibut alleviated seizures provoked by electroshock, i.e. both agents exerted anticonvulsant effect. All examined GABA aryl derivatives demonstrated neuroprotective properties on the maximum electroshock model: they shortened the duration of coma and shortened the period of spontaneous motor activity recovery. In addition, these agents decreased the severity of passive avoidance amnesia and behavioral deficit in the open field test in rats exposed to electroshock. The greatest neuroprotective properties were exhibited by phenyl-containing GABA analog phenibut.

  5. Competition for phosphorus between the nitrogen-fixing cyanobacteria Anabaena and Aphanizomenon

    NARCIS (Netherlands)

    DeNobel, WT; Snoep, JL; Mur, LR

    1997-01-01

    The influence of N2 fixation on the P-limited growth of two strains of Anabaena and Aphanizomenon was investigated using continuous cultures. Under N2-fixing conditions Anabaena had a higher maximum growth rate, a greater affinity for P, a higher yield on P and a higher N2 fixation activity than Aphanizomenon.

  6. 13 CFR 120.213 - What fixed interest rates may a Lender charge?

    Science.gov (United States)

    2010-01-01

    ... Lender charge? 120.213 Section 120.213 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION... have a reasonable fixed interest rate. SBA periodically publishes the maximum allowable rate in the... government determines the interest rate on direct loans. SBA publishes the rate periodically in the...

  7. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
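    To make the marginal maximum likelihood idea concrete, the sketch below integrates the latent trait out with Gauss-Hermite quadrature for a dichotomous Rasch model (a simple special case, written in Python rather than the paper's R functions; the simulated data and quadrature order are assumptions).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate dichotomous responses from a Rasch model (toy data)
n_persons, n_items = 500, 10
theta = rng.normal(size=n_persons)
beta_true = np.linspace(-1.5, 1.5, n_items)
Y = (rng.random((n_persons, n_items)) <
     1.0 / (1.0 + np.exp(-(theta[:, None] - beta_true[None, :])))).astype(float)

# Gauss-Hermite nodes/weights, transformed for a standard-normal latent trait
nodes, weights = np.polynomial.hermite.hermgauss(21)
nodes, weights = nodes * np.sqrt(2.0), weights / np.sqrt(np.pi)

def neg_marginal_loglik(beta):
    p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - beta[None, :])))   # (Q, J)
    logl = Y @ np.log(p).T + (1 - Y) @ np.log(1 - p).T            # (N, Q)
    marginal = np.exp(logl) @ weights                             # integrate theta out
    return -np.sum(np.log(marginal))

fit = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS")
print("estimated item difficulties:", np.round(fit.x, 2))
```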

  8. Penalized maximum likelihood estimation for generalized linear point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.

  9. Anti-Plasmodium activity of ceramide analogs

    Directory of Open Access Journals (Sweden)

    Gatt Shimon

    2004-12-01

    Full Text Available Abstract Background Sphingolipids are key molecules regulating many essential functions in eukaryotic cells and ceramide plays a central role in sphingolipid metabolism. A sphingolipid metabolism occurs in the intraerythrocytic stages of Plasmodium falciparum and is associated with essential biological processes. It constitutes an attractive and potential target for the development of new antimalarial drugs. Methods The anti-Plasmodium activity of a series of ceramide analogs containing different linkages (amide, methylene or thiourea linkages between the fatty acid part of ceramide and the sphingoid core was investigated in culture and compared to the sphingolipid analog PPMP (d,1-threo-1-phenyl-2-palmitoylamino-3-morpholino-1-propanol. This analog is known to inhibit the parasite sphingomyelin synthase activity and block parasite development by preventing the formation of the tubovesicular network that extends from the parasitophorous vacuole to the red cell membrane and delivers essential extracellular nutrients to the parasite. Results Analogs containing methylene linkage showed a considerably higher anti-Plasmodium activity (IC50 in the low nanomolar range than PPMP and their counterparts with a natural amide linkage (IC50 in the micromolar range. The methylene analogs blocked irreversibly P. falciparum development leading to parasite eradication in contrast to PPMP whose effect is cytostatic. A high sensitivity of action towards the parasite was observed when compared to their effect on the human MRC-5 cell growth. The toxicity towards parasites did not correlate with the inhibition by methylene analogs of the parasite sphingomyelin synthase activity and the tubovesicular network formation, indicating that this enzyme is not their primary target. Conclusions It has been shown that ceramide analogs were potent inhibitors of P. falciparum growth in culture. Interestingly, the nature of the linkage between the fatty acid part and the

  10. The future of vitamin D analogs

    Directory of Open Access Journals (Sweden)

    Carlien eLeyssens

    2014-04-01

    Full Text Available The active form of vitamin D3, 1,25-dihydroxyvitamin D3, is a major regulator of bone and calcium homeostasis. In addition, this hormone also inhibits the proliferation and stimulates the differentiation of normal as well as malignant cells. Supraphysiological doses of 1,25-dihydroxyvitamin D3 are required to reduce cancer cell proliferation. However, these doses will lead in vivo to calcemic side effects such as hypercalcemia and hypercalciuria. During the last 25 years, many structural analogs of 1,25-dihydroxyvitamin D3 have been synthesized by the introduction of chemical modifications in the A-ring, central CD-ring region or side chain of 1,25-dihydroxyvitamin D3 in the hope to find molecules with a clear dissociation between the beneficial antiproliferative effects and adverse calcemic side effects. One example of such an analog with a good dissociation ratio is calcipotriol (DaivonexR, which is clinically used to treat the hyperproliferative skin disease psoriasis. Other vitamin D analogs were clinically approved for the treatment of osteoporosis or secondary hyperparathyroidism. No vitamin D analog is currently used in the clinic for the treatment of cancer although several analogs have been shown to be potent drugs in animal models of cancer. Omics studies as well as in vitro cell biological experiments unraveled basic mechanisms involved in the antineoplastic effects of vitamin D and its analogs. 1,25-dihydroxyvitamin D3 and analogs act in a cell type- and tissue-specific manner. Moreover, a blockade in the transition of the G0/1 towards S phase of the cell cycle, induction of apoptosis, inhibition of migration and invasion of tumor cells together with effects on angiogenesis and inflammation have been implicated in the pleiotropic effects of 1,25-dihydroxyvitamin D3 and its analogs. In this review we will give an overview of the action of vitamin D analogs in tumor cells and look forward how these compounds could be introduced in the

  11. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
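    A minimal sketch of the weighted-ensemble idea (delay embedding plus a plain Gaussian similarity kernel; the dynamics-dependent kernel refinements described above are omitted, and the toy data and parameters are assumptions):

```python
import numpy as np

def delay_embed(x, q):
    """Takens delay-coordinate map: row t holds (x[t], ..., x[t+q-1])."""
    n = len(x)
    return np.stack([x[i:n - q + 1 + i] for i in range(q)], axis=1)

def kernel_analog_forecast(history, query, lead, q=8, epsilon=0.5):
    """Kernel-weighted ensemble of analogs instead of a single nearest analog."""
    X = delay_embed(history, q)
    X_lib = X[:len(X) - lead]              # analogs whose future at `lead` is known
    futures = history[q - 1 + lead:]       # corresponding future values
    w = np.exp(-np.sum((X_lib - query) ** 2, axis=1) / epsilon)
    w /= w.sum()
    return w @ futures                     # weighted ensemble forecast

# toy usage on a noisy oscillation
t = np.linspace(0.0, 60.0, 1200)
x = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
print(kernel_analog_forecast(x[:1000], x[992:1000], lead=25))
```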

  12. Analogy, higher order thinking, and education.

    Science.gov (United States)

    Richland, Lindsey Engle; Simms, Nina

    2015-01-01

    Analogical reasoning, the ability to understand phenomena as systems of structured relationships that can be aligned, compared, and mapped together, plays a fundamental role in the technology-rich, increasingly globalized educational climate of the 21st century. Flexible, conceptual thinking is prioritized in this view of education, and schools are emphasizing 'higher order thinking', rather than memorization of a canon of key topics. The lack of a cognitively grounded definition for higher order thinking, however, has led to a field of research and practice with little coherence across domains or connection to the large body of cognitive science research on thinking. We review literature on analogy and disciplinary higher order thinking to propose that relational reasoning can be productively considered the cognitive underpinning of higher order thinking. We highlight the utility of this framework for developing insights into practice through a review of mathematics, science, and history educational contexts. In these disciplines, analogy is essential to developing expert-like disciplinary knowledge in which concepts are understood to be systems of relationships that can be connected and flexibly manipulated. At the same time, analogies in education require explicit support to ensure that learners notice the relevance of relational thinking, have adequate processing resources available to mentally hold and manipulate relations, and are able to recognize both the similarities and differences when drawing analogies between systems of relationships.

  13. Neurotoxic Alkaloids: Saxitoxin and Its Analogs

    Directory of Open Access Journals (Sweden)

    Troco K. Mihali

    2010-07-01

    Full Text Available Saxitoxin (STX and its 57 analogs are a broad group of natural neurotoxic alkaloids, commonly known as the paralytic shellfish toxins (PSTs. PSTs are the causative agents of paralytic shellfish poisoning (PSP and are mostly associated with marine dinoflagellates (eukaryotes and freshwater cyanobacteria (prokaryotes, which form extensive blooms around the world. PST producing dinoflagellates belong to the genera Alexandrium, Gymnodinium and Pyrodinium whilst production has been identified in several cyanobacterial genera including Anabaena, Cylindrospermopsis, Aphanizomenon Planktothrix and Lyngbya. STX and its analogs can be structurally classified into several classes such as non-sulfated, mono-sulfated, di-sulfated, decarbamoylated and the recently discovered hydrophobic analogs—each with varying levels of toxicity. Biotransformation of the PSTs into other PST analogs has been identified within marine invertebrates, humans and bacteria. An improved understanding of PST transformation into less toxic analogs and degradation, both chemically or enzymatically, will be important for the development of methods for the detoxification of contaminated water supplies and of shellfish destined for consumption. Some PSTs also have demonstrated pharmaceutical potential as a long-term anesthetic in the treatment of anal fissures and for chronic tension-type headache. The recent elucidation of the saxitoxin biosynthetic gene cluster in cyanobacteria and the identification of new PST analogs will present opportunities to further explore the pharmaceutical potential of these intriguing alkaloids.

  14. Magnetic activity of seismic solar analogs

    CERN Document Server

    Salabert, D

    2016-01-01

    We present our latest results on the solar-stellar connection by studying 18 solar analogs that we identified among the Kepler seismic sample (Salabert et al., 2016a). We measured their magnetic activity properties using observations collected by the Kepler satellite and the ground-based, high-resolution Hermes spectrograph. The photospheric (Sph) and chromospheric (S) magnetic activity proxies of these seismic solar analogs are compared in relation to solar activity. We show that the activity of the Sun is actually comparable to the activity of the seismic solar analogs. Furthermore, we report on the discovery of temporal variability in the acoustic frequencies of the young (1 Gyr-old) solar analog KIC10644253 with a modulation of about 1.5 years, which agrees with the derived photospheric activity (Salabert et al., 2016b). It could actually be the signature of the short-period modulation, or quasi-biennial oscillation, of its magnetic activity as observed in the Sun and the 1-Gyr-old solar analog HD30495. In...

  15. Natural Analogs for the Unsaturated Zone

    Energy Technology Data Exchange (ETDEWEB)

    A. Simmons; A. Unger; M. Murrell

    2000-03-08

    The purpose of this Analysis/Model Report (AMR) is to document natural and anthropogenic (human-induced) analog sites and processes that are applicable to flow and transport processes expected to occur at the potential Yucca Mountain repository in order to build increased confidence in modeling processes of Unsaturated Zone (UZ) flow and transport. This AMR was prepared in accordance with ''AMR Development Plan for U0135, Natural Analogs for the UZ'' (CRWMS 1999a). Knowledge from analog sites and processes is used as corroborating information to test and build confidence in flow and transport models of Yucca Mountain, Nevada. This AMR supports the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR) and the Yucca Mountain Site Description. The objectives of this AMR are to test and build confidence in the representation of UZ processes in numerical models utilized in the UZ Flow and Transport Model. This is accomplished by: (1) applying data from Boxy Canyon, Idaho in simulations of UZ flow using the same methodologies incorporated in the Yucca Mountain UZ Flow and Transport Model to assess the fracture-matrix interaction conceptual model; (2) Providing a preliminary basis for analysis of radionuclide transport at Pena Blanca, Mexico as an analog of radionuclide transport at Yucca Mountain; and (3) Synthesizing existing information from natural analog studies to provide corroborating evidence for representation of ambient and thermally coupled UZ flow and transport processes in the UZ Model.

  16. Not All Analogies Are Created Equal: Associative and Categorical Analogy Processing following Brain Damage

    Science.gov (United States)

    Schmidt, Gwenda L.; Cardillo, Eileen R.; Kranjec, Alexander; Lehet, Matthew; Widick, Page; Chatterjee, Anjan

    2012-01-01

    Current research on analogy processing assumes that different conceptual relations are treated similarly. However, just as words and concepts are related in distinct ways, different kinds of analogies may employ distinct types of relationships. An important distinction in how words are related is the difference between associative (dog-bone) and…

  17. The maximum rotation of a galactic disc

    NARCIS (Netherlands)

    Bottema, R

    1997-01-01

    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously

  18. 20 CFR 229.48 - Family maximum.

    Science.gov (United States)

    2010-04-01

    ... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...

  19. Generalised maximum entropy and heterogeneous technologies

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.

    1999-01-01

    Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample

  20. Analogy-Enhanced Instruction: Effects on Reasoning Skills in Science

    Science.gov (United States)

    Remigio, Krisette B.; Yangco, Rosanelia T.; Espinosa, Allen A.

    2014-01-01

    The study examined the reasoning skills of first year high school students after learning general science concepts through analogies. Two intact heterogeneous sections were randomly assigned to Analogy-Enhanced Instruction (AEI) group and Non Analogy-Enhanced (NAEI) group. Various analogies were incorporated in the lessons of the AEI group for…

  1. Value and Limitations of Analogs in Teaching Mathematics.

    Science.gov (United States)

    Halford, Graeme S.; Boulton-Lewis, Gillian M.

    Analogical reasoning is frequently used in acquisition of mathematical concepts. Concrete representations used to teach mathematics are essentially analogs of mathematical concepts, and it is argued that analogies enter into mathematical concept acquisition in numerous other ways as well. According to Gentner's theory, analogies entail a…

  2. COMPUTER CONTROL OF AN ANALOG VOCAL TRACT

    Science.gov (United States)

    electrically controlled by a nasal coupling signal, and represent the action of the velum. The remaining sections are fixed as they do not vary significantly during the production of speech sounds. (Author)

  3. Duality of Maximum Entropy and Minimum Divergence

    Directory of Open Access Journals (Sweden)

    Shinto Eguchi

    2014-06-01

    Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
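    In the most familiar member of this class, the Kullback-Leibler divergence, the separation into cross and diagonal entropy reads as below; the last expression is only a schematic of the generalized form with a generator U, written here as an illustration rather than in the authors' notation.

```latex
D(p\,\|\,q)
  = \underbrace{-\sum_x p(x)\log q(x)}_{\text{cross entropy } C(p,q)}
  \;-\; \underbrace{\left(-\sum_x p(x)\log p(x)\right)}_{\text{diagonal entropy } H(p)=C(p,p)},
\qquad
D_U(p,q) = C_U(p,q) - C_U(p,p).
```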

  4. Analog Forecasting with Dynamics-Adapted Kernels

    CERN Document Server

    Zhao, Zhizhen

    2014-01-01

    Analog forecasting is a non-parametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from state-space reconstruction for dynamical systems and kernel methods developed in harmonic analysis and machine learning. The first improvement is to augment the dimension of the initial data using Takens' delay-coordinate maps to recover information in the initial data lost through partial observations. Then, instead of using Euclidean distances between the states, weighted ensembles of analogs are constructed according to similarity kernels in delay-coordinate space, featuring an explicit dependence on the dynamical vector field generating the data. The eigenvalues and eigenfunctions ...

  5. On Lovelock analogs of the Riemann tensor

    Energy Technology Data Exchange (ETDEWEB)

    Camanho, Xian O. [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik, Golm (Germany); Dadhich, Naresh [Jamia Millia Islamia, Centre for Theoretical Physics, New Delhi (India); Inter-University Centre for Astronomy and Astrophysics, Pune (India)

    2016-03-15

    It is possible to define an analog of the Riemann tensor for Nth order Lovelock gravity, its characterizing property being that the trace of its Bianchi derivative yields the corresponding analog of the Einstein tensor. Interestingly there exist two parallel but distinct such analogs and the main purpose of this note is to reconcile both formulations. In addition we will introduce a simple tensor identity and use it to show that any pure Lovelock vacuum in odd d = 2N + 1 dimensions is Lovelock flat, i.e. any vacuum solution of the theory has vanishing Lovelock-Riemann tensor. Further, in the presence of cosmological constant it is the Lovelock-Weyl tensor that vanishes. (orig.)

  6. Formal analogies in physics teacher education

    DEFF Research Database (Denmark)

    Avelar Sotomaior Karam, Ricardo; Ricardo, Elio

    2012-01-01

    Reasoning by similarities, especially the ones associated with formal aspects, is one of the most valuable sources for the development of physical theories. The essential role of formal analogies in science can be highlighted by the fact that several equations for different physical situations have the exact same appearance. Coulomb’s law’s similarity with Newton’s, Maxwell’s application of fluid theory to electromagnetism and Hamilton’s optical mechanical analogy are some among many other examples. These cases illustrate the power of mathematics in providing unifying structures for physics. Despite the relevance of the subject, formal analogies are rarely systematically approached in physics education. In order to discuss this issue with pre-service physics teachers, we planned a lecture and designed a questionnaire with the goal of encouraging them to think about some “coincidences” in well known...

  7. Analog Module Placement Design Using Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper presents a novel genetic algorithm for analog module placement based on a generalization of the two-dimensional bin packing problem. The genetic encoding and operators assure that all problem constraints are always satisfied. Thus the potential problems of adding penalty terms to the cost function are eliminated so that the search configuration space is drastically decreased. The dedicated cost function is based on the special requirements of analog integrated circuits. A fractional factorial experiment was conducted using an orthogonal array to study the algorithm parameters. A meta GA was applied to determine the optimal parameter values. The algorithm was tested with several local benchmark circuits. The experimental results show that the algorithm has better performance than the simulated annealing approach with satisfactory results comparable to manual placement. This study demonstrates the effectiveness of the genetic algorithm in the analog module placement problem. The algorithm has been successfully used in a layout synthesis tool.
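    The key point above, a genetic encoding in which every chromosome decodes to a constraint-satisfying placement so no penalty terms are needed, can be illustrated with a much simpler legal-by-construction scheme: a permutation of modules decoded by shelf packing. Everything below (module sizes, nets, the shelf decoder, the half-perimeter wirelength cost) is an assumed toy setup, not the paper's encoding or cost function.

```python
import random

random.seed(0)
MODULES = [(random.randint(2, 6), random.randint(2, 6)) for _ in range(12)]  # (width, height)
NETS = [random.sample(range(12), 3) for _ in range(8)]                        # nets of module indices
ROW_WIDTH = 20

def decode(perm):
    """Shelf-pack modules in permutation order; every chromosome yields a legal,
    overlap-free placement, so the cost needs no penalty terms."""
    pos, x, y, shelf_h = {}, 0, 0, 0
    for m in perm:
        w, h = MODULES[m]
        if x + w > ROW_WIDTH:                  # start a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        pos[m] = (x + w / 2.0, y + h / 2.0)    # module centre
        x, shelf_h = x + w, max(shelf_h, h)
    return pos

def wirelength(perm):
    """Total half-perimeter wirelength over all nets."""
    pos = decode(perm)
    total = 0.0
    for net in NETS:
        xs = [pos[m][0] for m in net]
        ys = [pos[m][1] for m in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def order_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [m for m in b if m not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(p):
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]

population = [random.sample(range(len(MODULES)), len(MODULES)) for _ in range(40)]
for _ in range(200):
    population.sort(key=wirelength)
    elite = population[:10]
    children = []
    while len(children) < 30:
        child = order_crossover(*random.sample(elite, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    population = elite + children

population.sort(key=wirelength)
print("best wirelength:", round(wirelength(population[0]), 1))
```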

  8. Electronic devices for analog signal processing

    CERN Document Server

    Rybin, Yu K

    2012-01-01

    Electronic Devices for Analog Signal Processing is intended for engineers and postgraduates and considers electronic devices applied to process analog signals in instrument making, automation, measurements, and other branches of technology. They perform various transformations of electrical signals: scaling, integration, logarithming, etc. The need for their deeper study arises, on the one hand, from the widening variety of input signal forms and the increasing accuracy and performance of such devices, and on the other hand, from the fact that new devices constantly emerge and are already widely used in practice, yet no information about them is written in books on electronics. The basic approach to presenting the material in Electronic Devices for Analog Signal Processing can be formulated as follows: study with the help of self-education. The book is divided into seven chapters; each chapter contains theoretical material, examples of practical problems, questions and tests. The most difficult questions are marked by a diamon...

  9. Stimulus properties of fixed-interval responses.

    Science.gov (United States)

    Buchman, I B; Zeiler, M D

    1975-11-01

    Responses in the first component of a chained schedule produced a change to the terminal component according to a fixed-interval schedule. The number of responses emitted in the fixed interval determined whether a variable-interval schedule of food presentation or extinction prevailed in the terminal component. In one condition, the variable-interval schedule was in effect only if the number of responses during the fixed interval was less than that specified; in another condition, the number of responses had to exceed that specified. The number of responses emitted in the fixed interval did not shift markedly in the direction required for food presentation. Instead, responding often tended to change in the opposite direction. Such an effect indicated that differential food presentation did not modify the reference behavior in accord with the requirement, but it was consistent with other data on fixed-interval schedule performance. Behavior in the terminal component, however, did reveal sensitivity to the relation between total responses emitted in the fixed interval and the availability of food. Response rate in the terminal component was a function of the proximity of the response number emitted in the fixed interval to that required for food presentation. Thus, response number served as a discriminative stimulus controlling subsequent performance.

  10. Total Stability Properties Based on Fixed Point Theory for a Class of Hybrid Dynamic Systems

    Directory of Open Access Journals (Sweden)

    M. De la Sen

    2009-01-01

    Full Text Available Robust stability results for nominally linear hybrid systems are obtained from total stability theorems for purely continuous-time and discrete-time systems by using the powerful tool of fixed point theory. The class of hybrid systems dealt consists, in general, of coupled continuous-time and digital systems subject to state perturbations whose nominal (i.e., unperturbed parts are linear and, in general, time-varying. The obtained sufficient conditions on robust stability under a wide class of harmless perturbations are dependent on the values of the parameters defining the over-bounding functions of those perturbations. The weakness of the coupling dynamics in terms of norm among the analog and digital substates of the whole dynamic system guarantees the total stability provided that the corresponding uncoupled nominal subsystems are both exponentially stable. Fixed point stability theory is used for the proofs of stability. A generalization of that result is given for the case that sampling is not uniform. The boundedness of the state-trajectory solution at sampling instants guarantees the global boundedness of the solutions for all time. The existence of a fixed point for the sampled state-trajectory solution at sampling instants guarantees the existence of a fixed point of an extended auxiliary discrete system and the existence of a global asymptotic attractor of the solutions which is either a fixed point or a limit n globally stable asymptotic oscillation.

  11. Fixed-point adiabatic quantum search

    Science.gov (United States)

    Dalzell, Alexander M.; Yoder, Theodore J.; Chuang, Isaac L.

    2017-01-01

    Fixed-point quantum search algorithms succeed at finding one of M target items among N total items even when the run time of the algorithm is longer than necessary. While the famous Grover's algorithm can search quadratically faster than a classical computer, it lacks the fixed-point property—the fraction of target items must be known precisely to know when to terminate the algorithm. Recently, Yoder, Low, and Chuang [Phys. Rev. Lett. 113, 210501 (2014), 10.1103/PhysRevLett.113.210501] gave an optimal gate-model search algorithm with the fixed-point property. Previously, it had been discovered by Roland and Cerf [Phys. Rev. A 65, 042308 (2002), 10.1103/PhysRevA.65.042308] that an adiabatic quantum algorithm, operating by continuously varying a Hamiltonian, can reproduce the quadratic speedup of gate-model Grover search. We ask, can an adiabatic algorithm also reproduce the fixed-point property? We show that the answer depends on what interpolation schedule is used, so as in the gate model, there are both fixed-point and non-fixed-point versions of adiabatic search, only some of which attain the quadratic quantum speedup. Guided by geometric intuition on the Bloch sphere, we rigorously justify our claims with an explicit upper bound on the error in the adiabatic approximation. We also show that the fixed-point adiabatic search algorithm can be simulated in the gate model with neither loss of the quadratic Grover speedup nor of the fixed-point property. Finally, we discuss natural uses of fixed-point algorithms such as preparation of a relatively prime state and oblivious amplitude amplification.

  12. Landauer Bound for Analog Computing Systems

    CERN Document Server

    Diamantini, M Cristina; Trugenberger, Carlo A

    2016-01-01

    By establishing a relation between information erasure and continuous phase transitions we generalise the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.

  13. Words, Concepts, and the Geometry of Analogy

    Directory of Open Access Journals (Sweden)

    Stephen McGregor

    2016-08-01

    Full Text Available This paper presents a geometric approach to the problem of modelling the relationship between words and concepts, focusing in particular on analogical phenomena in language and cognition. Grounded in recent theories regarding geometric conceptual spaces, we begin with an analysis of existing static distributional semantic models and move on to an exploration of a dynamic approach to using high dimensional spaces of word meaning to project subspaces where analogies can potentially be solved in an online, contextualised way. The crucial element of this analysis is the positioning of statistics in a geometric environment replete with opportunities for interpretation.

  14. Landauer bound for analog computing systems

    Science.gov (United States)

    Diamantini, M. Cristina; Gammaitoni, Luca; Trugenberger, Carlo A.

    2016-07-01

    By establishing a relation between information erasure and continuous phase transitions we generalize the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence, every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.
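
    As a compact restatement of the bound described verbally above (the symbols Γ for the configurational volume and γ for its minimal quantum are our notation, not the authors'), the erasure cost per analog degree of freedom can be written as

        \Delta S \;\ge\; k_B \ln\frac{\Gamma}{\gamma},
        \qquad
        Q \;\ge\; k_B T \ln\frac{\Gamma}{\gamma},

    which reduces to the familiar digital Landauer bound k_B T ln 2 when the variable can only be resolved into two states (Γ/γ = 2).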

  15. Fungitoxicity of chemical analogs with heartwood toxins.

    Science.gov (United States)

    Grohs, B M; Kunz, B

    1998-07-01

    Trans-stilbene and tropolone as chemical analogs with naturally occurring fungitoxic heartwood compounds were studied with respect to their fungitoxic potency. While stilbene showed no fungitoxic activity towards the fungi Aureobasidium pullulans var. melanogenum, Penicillium glabrum, and Trichoderma harzianum in the concentrations tested, the minimal inhibiting concentration of tropolone was 10(-3) M for Penicillium glabrum and Trichoderma harzianum, and 10(-5) M for Aureobasidium pullulans var. melanogenum. In all cases, the effect of tropolone was a fungistatic one. Using chemical analogs for assessing the chemical basis of the fungitoxicity of tropolone, this substance proved to be the only compound tested which possesses fungitoxic properties.

  16. Automated Integrated Analog Filter Design Issues

    Directory of Open Access Journals (Sweden)

    Karolis Kiela

    2015-07-01

    Full Text Available Modern automated design methods for integrated analog circuits and their use in integrated filter design are analysed. Current automated analog design tools are based on optimization algorithms and/or new circuit generation methods. Most automated integrated filter design methods are suited only to gm-C and switched-current filter topologies. Here, an algorithm for active RC integrated filter design is proposed that can be used in automated filter design flows. The algorithm is tested by designing an integrated active RC filter in a 65 nm CMOS technology.

  17. Synthetic heparin-binding factor analogs

    Science.gov (United States)

    Pena, Louis A.; Zamora, Paul O.; Lin, Xinhua; Glass, John D.

    2010-04-20

    The invention provides synthetic heparin-binding growth factor analogs having at least one peptide chain, and preferably two peptide chains branched from a dipeptide branch moiety composed of two trifunctional amino acid residues, which peptide chain or chains bind a heparin-binding growth factor receptor and are covalently bound to a non-signaling peptide that includes a heparin-binding domain, preferably by a linker, which may be a hydrophobic linker. The synthetic heparin-binding growth factor analogs are useful as pharmaceutical agents, soluble biologics or as surface coatings for medical devices.

  18. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

    Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks built from opamp neurons is presented. The analysis, which is based on an approximate model of the operational amplifier, reveals the causes of possible unstable states and makes it possible to determine the convergence properties of the network. The results of the analysis are discussed with a view to developing robust and fast analog networks. Special attention is paid to the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  19. Discrete analog computing with rotor-routers.

    Science.gov (United States)

    Propp, James

    2010-09-01

    Rotor-routing is a procedure for routing tokens through a network that can implement certain kinds of computation. These computations are inherently asynchronous (the order in which tokens are routed makes no difference) and distributed (information is spread throughout the system). It is also possible to efficiently check that a computation has been carried out correctly in less time than the computation itself required, provided one has a certificate that can itself be computed by the rotor-router network. Rotor-router networks can be viewed as both discrete analogs of continuous linear systems and deterministic analogs of stochastic processes.
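
    To make the routing rule concrete, here is a minimal Python sketch (the graph, the sink vertex and the advance-then-move convention are illustrative assumptions, not taken from the record) that routes a single token until it reaches a sink:

        # Minimal rotor-router walk. Each vertex keeps a cyclic list of neighbors
        # and a rotor index; routing a token advances the rotor and then sends the
        # token along the new direction (one common convention; others move first).

        def rotor_route(neighbors, rotors, start, targets, max_steps=10_000):
            """Route a single token from `start` until it reaches a vertex in `targets`."""
            v = start
            for _ in range(max_steps):
                if v in targets:
                    return v, rotors
                rotors[v] = (rotors[v] + 1) % len(neighbors[v])  # advance the rotor
                v = neighbors[v][rotors[v]]                      # follow the new rotor
            raise RuntimeError("token did not reach a target")

        # Example: a small graph with one sink vertex (3).
        neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: []}
        rotors = {0: 0, 1: 0, 2: 0, 3: 0}
        print(rotor_route(neighbors, rotors, start=0, targets={3}))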

  20. Analog circuit design for communication SOC

    CERN Document Server

    Tu, Steve Hung-Lung

    2012-01-01

    This e-book provides several state-of-the-art analog circuit design techniques. It presents both empirical and theoretical materials for system-on-a-chip (SOC) circuit design. Fundamental communication concepts are used to explain a variety of topics including data conversion (ADC, DAC, sigma-delta oversampling data converters), clock data recovery, phase-locked loops for system timing synthesis, supply voltage regulation, power amplifier design, and mixer design. This is an excellent reference book for both circuit designers and researchers who are interested in the field of design of analog communication...

  1. Implementing neural architectures using analog VLSI circuits

    Science.gov (United States)

    Maher, Mary Ann C.; Deweerth, Stephen P.; Mahowald, Misha A.; Mead, Carver A.

    1989-05-01

    Analog very large-scale integrated (VLSI) technology can be used not only to study and simulate biological systems, but also to emulate them in designing artificial sensory systems. A methodology for building these systems in CMOS VLSI technology has been developed using analog micropower circuit elements that can be hierarchically combined. Using this methodology, experimental VLSI chips of visual and motor subsystems have been designed and fabricated. These chips exhibit behavior similar to that of biological systems, and perform computations useful for artificial sensory systems.

  2. Analogies between a Meniscus and a Cantilever

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-Lin

    2009-01-01

    Systematic and quantitative analyses of exact analogies between a meniscus and an elastica are performed. It is shown that the two governing equations take the same form after coordinate translation and scale transformation. The morphologies of the liquid bridge and the cantilever are calculated in terms of elliptic integrals and can be reduced to the same shape after converting the boundary conditions. The present analyses deepen the understanding of this physical phenomenon and offer inspiration for designing analogy experiments. Moreover, the calculated results are helpful in engineering applications, such as the design and fabrication of MEMS and micro-manipulation in micro/nanotechnology.

  3. ENHANCING THE SYMBOLIC ANALYSIS OF ANALOG CIRCUITS

    Directory of Open Access Journals (Sweden)

    E. Tlelo-Cuautle

    2005-08-01

    Full Text Available A new symbolic method is introduced to enhance the calculation of symbolic expressions of analog circuits. First, the analog circuit is transformed into a nullor equivalent circuit. Second, a new method is introduced for the formulation of a compact system of equations (CSES). Third, a new method is introduced for the solution of the CSES, which avoids multiplications by zero to improve the evaluation of determinants. Finally, two examples are given to show the usefulness of the proposed methods in calculating fully symbolic transfer functions.

  4. An Optical Analog of a Black Hole

    CERN Document Server

    Royston, A; Royston, Andrew; Gass, Richard

    2002-01-01

    Using media with extremely low group velocities one can create an optical analog of a curved space-time. Leonhardt and Piwnicki have proposed that a vortex flow will act as an optical black hole. We show that although the Leonhardt - Piwnicki flow has an orbit of no return and an infinite red-shift surface, it is not a true black hole since it lacks a null hypersurface. However a radial flow will produce a true optical black hole that has a Hawking temperature and obeys the first law of black hole mechanics. By combining the Leonhardt - Piwnicki flow with a radial flow we obtain the analog of the Kerr black hole.

  5. Associative Pattern Recognition In Analog VLSI Circuits

    Science.gov (United States)

    Tawel, Raoul

    1995-01-01

    Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips built and tested to demonstrate concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal-oxide/semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. Chips exploit parallel nature of vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, cells classify input pattern in fraction of microsecond while consuming power of few microwatts.

  6. [FIXED COMBINATION ATORVASTATIN-EZETIMIBE (ATOZET®)].

    Science.gov (United States)

    Scheen, A J

    2016-01-01

    Cardiovascular prevention in subjects at high or very high risk requires a drastic reduction in LDL cholesterol according to the concept "the lower, the better". The combination of an inhibitor of cholesterol synthesis and a selective inhibitor of intestinal absorption results in a complementary and synergistic LDL-lowering activity. Besides a first fixed combination ezetimibe-simvastatin (Inegy®), a new fixed combination is presented, Atozet® that combines atorvastatin and ezetimibe. Because atorvastatin is more potent than simvastatin, this novel fixed combination should facilitate reaching therapeutic goals in terms of LDL cholesterol amongst patients with severe hypercholesterolaemia and/or at high or very high cardiovascular risk.

  7. Imaginary fixed points can be physical.

    Science.gov (United States)

    Zhong, Fan

    2012-08-01

    It has been proposed that a first-order phase transition driven to occur in the metastable region exhibits scaling and universality near an instability point controlled by an instability fixed point of a φ³ theory. However, this fixed point has an imaginary value and the renormalization-group flow of the φ³ coupling diverges at a finite scale. Here, combining a momentum-space RG analysis with a nucleation theory near the spinodal point, we show that, counterintuitively, imaginary rather than real values are physical, and thus the imaginary fixed point does control the scaling.

  8. Controlling cyanobacterial blooms in hypertrophic Lake Taihu, China: will nitrogen reductions cause replacement of non-N2 fixing by N2 fixing taxa?

    Directory of Open Access Journals (Sweden)

    Hans W Paerl

    Full Text Available Excessive anthropogenic nitrogen (N and phosphorus (P inputs have caused an alarming increase in harmful cyanobacterial blooms, threatening sustainability of lakes and reservoirs worldwide. Hypertrophic Lake Taihu, China's third largest freshwater lake, typifies this predicament, with toxic blooms of the non-N2 fixing cyanobacteria Microcystis spp. dominating from spring through fall. Previous studies indicate N and P reductions are needed to reduce bloom magnitude and duration. However, N reductions may encourage replacement of non-N2 fixing with N2 fixing cyanobacteria. This potentially counterproductive scenario was evaluated using replicate, large (1000 L, in-lake mesocosms during summer bloom periods. N+P additions led to maximum phytoplankton production. Phosphorus enrichment, which promoted N limitation, resulted in increases in N2 fixing taxa (Anabaena spp., but it did not lead to significant replacement of non-N2 fixing with N2 fixing cyanobacteria, and N2 fixation rates remained ecologically insignificant. Furthermore, P enrichment failed to increase phytoplankton production relative to controls, indicating that N was the most limiting nutrient throughout this period. We propose that Microcystis spp. and other non-N2 fixing genera can maintain dominance in this shallow, highly turbid, nutrient-enriched lake by outcompeting N2 fixing taxa for existing sources of N and P stored and cycled in the lake. To bring Taihu and other hypertrophic systems below the bloom threshold, both N and P reductions will be needed until the legacy of high N and P loading and sediment nutrient storage in these systems is depleted. At that point, a more exclusive focus on P reductions may be feasible.

  9. High-speed and high-resolution analog-to-digital and digital-to-analog converters

    NARCIS (Netherlands)

    van de Plassche, R.J.

    1989-01-01

    Analog-to-digital and digital-to-analog converters are important building blocks connecting the analog world of transducers with the digital world of computing, signal processing and data acquisition systems. In chapter two the converter as part of a system is described. Requirements of analog filters...

  10. Future evolution in a backreaction model and the analogous scalar field cosmology

    CERN Document Server

    Ali, Amna

    2016-01-01

    We investigate the future evolution of the universe using the Buchert framework for averaged backreaction in the context of a two-domain partition of the universe. We show that this approach allows for the possibility of the global acceleration vanishing at a finite future time, provided that none of the subdomains accelerate individually. The model at large scales is analogously described in terms of a homogeneous scalar field emerging with a potential that is fixed and free from phenomenological parametrization. The dynamics of this scalar field is explored in the analogous FLRW cosmology. We use observational data from Type Ia Supernovae, Baryon Acoustic Oscillations, and Cosmic Microwave Background to constrain the parameters of the model for a viable cosmology, providing the corresponding likelihood contours.

  11. Robust stochastic maximum principle: Complete proof and discussions

    Directory of Open Access Journals (Sweden)

    Poznyak Alex S.

    2002-01-01

    Full Text Available This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. Parametric families of first and second order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral over a parametric set of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given on a finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, covered by a vector function, is also considered. The optimal control strategies, adapted to the available information, for the wide class of uncertain systems given by a stochastic differential equation with unknown parameters from a given compact set, are constructed. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on the recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [14,43-45] as well as on the variation results of [53] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. The corresponding discussion of the obtained results concludes this study.

  12. Maximum power analysis of photovoltaic module in Ramadi city

    Directory of Open Access Journals (Sweden)

    Majid Shahatha Salim, Jassim Mohammed Najim, Salih Mohammed Salih

    2013-01-01

    Full Text Available Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power that can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured at the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which is set to save an average reading every two minutes based on readings taken each second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of the PV installation can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  13. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power that can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured at the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which is set to save an average reading every two minutes based on readings taken each second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of the PV installation can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  14. HEALTH INSURANCE: FIXED CONTRIBUTION AND REIMBURSEMENT MAXIMA

    CERN Multimedia

    Human Resources Division

    2001-01-01

    Affected by the salary adjustments on 1 January 2001 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maxima, has changed significantly. An adjustment of the amounts of the reimbursement maxima and the fixed contributions is therefore necessary, as from 1 January 2001. Reimbursement maxima The revised reimbursement maxima will appear on the leaflet summarizing the benefits for the year 2001, which will be sent out with the forthcoming issue of the CHIS Bull'. This leaflet will also be available from the divisional secretariats and from the UNIQA office at CERN. Fixed contributions The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions) : voluntarily insured member of the personnel, with normal health insurance cover : 910.- (was 815.- in 2000) voluntarily insured member of the personnel, with reduced heal...

  15. Anderson Acceleration for Fixed-Point Iterations

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Homer F. [Worcester Polytechnic Institute, MA (United States)

    2015-08-31

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
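
    The grant summary above gives no algorithmic detail; for orientation, here is a generic, textbook-style sketch of Anderson acceleration for a fixed-point map x = g(x) (the window size, the damping-free update and the test problem are our choices, not taken from the report):

        import numpy as np

        def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
            """Anderson acceleration for the fixed-point iteration x = g(x)."""
            x = np.atleast_1d(np.asarray(x0, dtype=float))
            f = g(x) - x                      # residual of the fixed-point map
            xs, fs = [x], [f]
            for _ in range(max_iter):
                if np.linalg.norm(f) < tol:
                    return x
                mk = min(m, len(xs) - 1)
                if mk == 0:
                    x_new = g(x)              # plain Picard step to start
                else:
                    # columns are the most recent residual and iterate differences
                    dF = np.column_stack([fs[-i] - fs[-i - 1] for i in range(mk, 0, -1)])
                    dX = np.column_stack([xs[-i] - xs[-i - 1] for i in range(mk, 0, -1)])
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x_new = x + f - (dX + dF) @ gamma
                x = x_new
                f = g(x) - x
                xs.append(x); fs.append(f)
                xs, fs = xs[-(m + 1):], fs[-(m + 1):]   # keep a sliding window
            return x

        # Example: accelerate the slowly converging map x -> cos(x).
        print(anderson(lambda x: np.cos(x), x0=[1.0]))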

  16. Topological fixed point theory of multivalued mappings

    CERN Document Server

    Górniewicz, Lech

    1999-01-01

    This volume presents a broad introduction to the topological fixed point theory of multivalued (set-valued) mappings, treating both classical concepts as well as modern techniques. A variety of up-to-date results is described within a unified framework. Topics covered include the basic theory of set-valued mappings with both convex and nonconvex values, approximation and homological methods in the fixed point theory together with a thorough discussion of various index theories for mappings with a topologically complex structure of values, applications to many fields of mathematics, mathematical economics and related subjects, and the fixed point approach to the theory of ordinary differential inclusions. The work emphasises the topological aspect of the theory, and gives special attention to the Lefschetz and Nielsen fixed point theory for acyclic valued mappings with diverse compactness assumptions via graph approximation and the homological approach. Audience: This work will be of interest to researchers an...

  17. Gravitational fixed points from perturbation theory.

    Science.gov (United States)

    Niedermaier, Max R

    2009-09-04

    The fixed point structure of the renormalization flow in higher derivative gravity is investigated in terms of the background covariant effective action using an operator cutoff that keeps track of powerlike divergences. Spectral positivity of the gauge fixed Hessian can be satisfied upon expansion in the asymptotically free higher derivative coupling. At one-loop order in this coupling strictly positive fixed points are found for the dimensionless Newton constant g_N and the cosmological constant λ, which are determined solely by the coefficients of the powerlike divergences. The renormalization flow is asymptotically safe with respect to this fixed point and settles on a λ(g_N) trajectory after O(10) units of the renormalization mass scale to accuracy 10^-7.

  18. Analogies and Reconstruction of Mathematical Knowledge.

    Science.gov (United States)

    Fast, Gerald R.

    An investigation was conducted to determine the effectiveness of utilizing analogies to effect conceptual change in mathematics. Forty-one high school seniors participated in a knowledge reconstruction process regarding their beliefs about everyday probability situations such as sports events or lotteries. These mathematics students were given…

  19. Analogy and mathematical reasoning : a survey

    OpenAIRE

    Miller, C. D. F.

    1983-01-01

    We survey the literature of Artificial Intelligence, and other related work, pertaining to the modelling of mathematical reasoning and its relationship with the use of analogy. In particular, we discuss the contribution of Lenat's program AM to models of mathematical discovery and concept-formation. We consider the use of similarity measures to structure a knowledge space and their role in concept acquisition.

  20. Analog circuit design designing waveform processing circuits

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The fourth volume in the set Designing Waveform-Processing Circuits builds on the previous 3 volumes and presents a variety of analog non-amplifier circuits, including voltage references, current sources, filters, hysteresis switches and oscilloscope trigger and sweep circuitry, function generation, absolute-value circuits, and peak detectors.

  1. An iconic, analogical approach to grammaticalization

    NARCIS (Netherlands)

    Fischer, O.; Conradie, C.J.; Johl, R.; Beukes, M.; Fischer, O.; Ljungberg, C.

    2010-01-01

    This paper addresses a number of problems connected with the ‘apparatus’ used in grammaticalization theory. It will be argued that we get a better grip on what happens in processes of grammaticalization (and its ‘opposite’, lexicalization) if the process is viewed in terms of analogical processes, w

  2. Generating Analog IC Layouts with LAYGEN II

    CERN Document Server

    Martins, Ricardo M F; Horta, Nuno C G

    2013-01-01

    This book presents an innovative methodology for the automatic generation of analog integrated circuits (ICs) layout, based on template descriptions and on evolutionary computational techniques. A design automation tool, LAYGEN II, was implemented to validate the proposed approach giving special emphasis to reusability of expert design knowledge and to efficiency on retargeting operations.

  3. CMOS circuits for analog signal processing

    NARCIS (Netherlands)

    Wallinga, Hans

    1988-01-01

    Design choices in CMOS analog signal processing circuits are presented. Special attention is focussed on continuous-time filter technologies. The basics of MOSFET-C continuous-time filters and CMOS Square Law Circuits are explained with the aid of a graphical representation of MOST characteristics.

  4. Analog Acoustic Expression in Speech Communication

    Science.gov (United States)

    Shintel, Hadas; Nusbaum, Howard C.; Okrent, Arika

    2006-01-01

    We present the first experimental evidence of a phenomenon in speech communication we call "analog acoustic expression." Speech is generally thought of as conveying information in two distinct ways: discrete linguistic-symbolic units such as words and sentences represent linguistic meaning, and continuous prosodic forms convey information about…

  5. Invention through Form and Function Analogy

    Science.gov (United States)

    Rule, Audrey C.

    2015-01-01

    "Invention through Form and Function Analogy" is an invention book for teachers and other leaders working with youth who are involving students in the invention process. The book consists of an introduction and set of nine learning cycle formatted lessons for teaching the principles of invention through the science and engineering design…

  6. C4913 ANALOGE OG DIGITALE FILTRE

    DEFF Research Database (Denmark)

    Gaunholt, Hans

    1996-01-01

    These lecture notes treat the fundamental theory and the most commonly used design methods for passive, active and digital filters, with special emphasis on microelectronic realizations. The lecture notes cover 75% of the material taught in the course C4913 Analog and Digital Filters...

  7. The GMO-Nanotech (Dis)Analogy?

    Science.gov (United States)

    Sandler, Ronald; Kay, W. D.

    2006-01-01

    The genetically-modified-organism (GMO) experience has been prominent in motivating science, industry, and regulatory communities to address the social and ethical dimensions of nanotechnology. However, there are some significant problems with the GMO-nanotech analogy. First, it overstates the likelihood of a GMO-like backlash against…

  8. Analog of Superradiance effect in BEC

    CERN Document Server

    Basak, S

    2005-01-01

    We investigate the scattering of phase oscillations of a Bose-Einstein condensate by a 'draining bathtub' type fluid motion. We derive a relation between the reflection and transmission coefficients which exhibits the existence of an analog of the superradiance effect in a BEC vortex with a sink.

  9. Classical Analog of Electromagnetically Induced Transparency

    CERN Document Server

    Alzar, C L G; Nussenzveig, P

    2002-01-01

    We present a classical analog for Electromagnetically Induced Transparency (EIT). In a system of just two coupled harmonic oscillators subject to a harmonic driving force we can reproduce the phenomenology observed in EIT. We describe a simple experiment performed with two linearly coupled RLC circuits which can be taught in an undergraduate laboratory class.

  10. Analogy between thermal convective and magnetohydrodynamic instabilities

    Energy Technology Data Exchange (ETDEWEB)

    Valdmanis, Ya.Ya.; Kukainis, O.A.

    1977-01-01

    An examination is made of the analogy between thermo-convective instability and instability produced by various electromagnetic forces both in steady and alternating thermal and electromagnetic fields. An example is given for calculating an assumed bubble instability which could occur in an alternating magnetic field. 17 references.

  11. Fractal Structures For Fixed Mems Capacitors

    KAUST Repository

    Elshurafa, Amro M.

    2014-08-28

    An embodiment of a fractal fixed capacitor comprises a capacitor body in a microelectromechanical system (MEMS) structure. The capacitor body has a first plate with a fractal shape separated by a horizontal distance from a second plate with a fractal shape. The first plate and the second plate are within the same plane. Such a fractal fixed capacitor further comprises a substrate above which the capacitor body is positioned.

  12. On hyperbolic fixed points in ultrametric dynamics

    CERN Document Server

    Lindahl, Karl-Olof; 10.1134/S2070046610030052

    2011-01-01

    Let K be a complete ultrametric field. We give lower and upper bounds for the size of linearization discs for power series over K near hyperbolic fixed points. These estimates are maximal in the sense that there exist examples where these estimates give the exact size of the corresponding linearization disc. In particular, at repelling fixed points, the linearization disc is equal to the maximal disc on which the power series is injective.

  13. Random fixed points and random differential inclusions

    Directory of Open Access Journals (Sweden)

    Nikolaos S. Papageorgiou

    1988-01-01

    Full Text Available In this paper, first, we study random best approximations to random sets, using fixed point techniques, obtaining in this way stochastic analogues of earlier deterministic results by Browder and Petryshyn, Ky Fan and Reich. Then we prove two fixed point theorems for random multifunctions with stochastic domain that satisfy certain tangential conditions. Finally we consider a random differential inclusion with upper semicontinuous orientor field and establish the existence of random solutions.

  14. DNA extraction from formalin-fixed material.

    Science.gov (United States)

    Campos, Paula F; Gilbert, Thomas M P

    2012-01-01

    The principal challenges facing PCR-based analyses of DNA extracted from formalin-fixed materials are fragmentation of the DNA and cross-linked protein-DNA complexes. Here, we present an efficient protocol to extract DNA from formalin-fixed or paraffin-embedded tissues (FFPE). In this protocol, protein-DNA cross-links are reversed using heat and alkali treatment, yielding significantly longer fragments and larger amounts of PCR-amplifiable DNA than standard DNA extraction protocols.

  15. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  16. A dual method for maximum entropy restoration

    Science.gov (United States)

    Smith, C. B.

    1979-01-01

    A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.

  17. Precise Point Positioning with Partial Ambiguity Fixing

    Science.gov (United States)

    Li, Pan; Zhang, Xiaohong

    2015-01-01

    Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which an elevation and standard deviation criterion are first used to remove the low-precision ambiguity estimates for AR. Subsequently the success rate and ratio-test are simultaneously used in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities which can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Besides, using the PAR method, the average fixing rate can be increased from 83.5% to 98.2% for static PPP, from 80.1% to 95.2% for kinematic PPP respectively. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to a higher fixing rate. PMID:26067196
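
    The screening-and-shrinking loop described above can be sketched schematically as follows (the thresholds, the minimum subset size and the fix_subset solver, assumed to return fixed values together with a bootstrapping success rate and a ratio-test statistic, are illustrative placeholders rather than the authors' implementation):

        # Schematic partial-ambiguity-resolution (PAR) selection loop.
        def partial_ambiguity_fix(ambs, fix_subset,
                                  min_elev=15.0, max_std=0.25,
                                  min_success=0.999, min_ratio=2.0):
            # Step 1: elevation / standard-deviation screening.
            subset = [a for a in ambs if a["elev"] >= min_elev and a["std"] <= max_std]
            # Step 2: iteratively shrink the subset until both tests pass.
            while len(subset) >= 4:                       # need enough ambiguities left
                fixed, success, ratio = fix_subset(subset)
                if success >= min_success and ratio >= min_ratio:
                    return fixed                          # ambiguity-fixed solution
                subset.remove(max(subset, key=lambda a: a["std"]))  # drop least precise
            return None                                   # fall back to the float solution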

  18. Estimation of chromophoric dissolved organic matter (CDOM) and photosynthetic activity of estuarine phytoplankton using a multiple-fixed-wavelength spectral fluorometer.

    Science.gov (United States)

    Goldman, Emily A; Smith, Erik M; Richardson, Tammi L

    2013-03-15

    The utility of a multiple-fixed-wavelength spectral fluorometer, the Algae Online Analyser (AOA), as a means of quantifying chromophoric dissolved organic matter (CDOM) and phytoplankton photosynthetic activity was tested using algal cultures and natural communities from North Inlet estuary, South Carolina. Comparisons of AOA measurements of CDOM to those by spectrophotometry showed a significant linear relationship, but increasing amounts of background CDOM resulted in progressively higher over-estimates of chromophyte contributions to a simulated mixed algal community. Estimates of photosynthetic activity by the AOA at low irradiance (≈ 80 μmol quanta m(-2) s(-1)) agreed well with analogous values from the literature for the chlorophyte, Dunaliella tertiolecta, but were substantially lower than previous measurements of the maximum quantum efficiency of photosystem II (F(v)/F(m)) in Thalassiosira weissflogii (a diatom) and Rhodomonas salina (a cryptophyte). When cells were exposed to high irradiance (1500 μmol quanta m(-2) s(-1)), declines in photosynthetic activity with time measured by the AOA mirrored estimates of cellular fluorescence capacity using the herbicide 3'-(3, 4-dichlorophenyl)-1',1'-dimethyl urea (DCMU). The AOA shows promise as a tool for the continuous monitoring of phytoplankton community composition, CDOM, and the group-specific photosynthetic activity of aquatic ecosystems.

  19. The maximum entropy technique. System's statistical description

    CERN Document Server

    Belashev, B Z

    2002-01-01

    The maximum entropy technique (MENT) is applied to the search for distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system and the constraint conditions. MENT can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.

  20. Solar Wind Sputtering of Lunar Soil Analogs: The Effect of Ionic Charge and Mass

    Science.gov (United States)

    Hijazi, H.; Bannister, M. E.; Meyer, F. W.; Rouleau, C. M.; Barghouty, A. F.; Rickman, D. L.

    2014-01-01

    In this contribution we report sputtering measurements of anorthite, an analog material representative of the lunar highlands, by singly and multicharged ions representative of the solar wind. The ions investigated include protons, as well as singly and multicharged Ar ions (as proxies for the heavier solar wind constituents), in the charge state range +1 to +9, and had a fixed solar-wind-relevant impact velocity of approximately 310 km/s or 500 eV/amu. The goal of the measurements was to determine the sputtering contribution of the heavy, multicharged minority solar wind constituents in comparison to that due to the dominant H+ fraction.

  1. Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf;

    2012-01-01

    The stretch factor and maximum detour of a graph G embedded in a metric space measure how well G approximates the minimum complete graph containing G and the metric space, respectively. In this paper we show that computing the stretch factor of a rectilinear path in the L_1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model and describe a worst-case O(σ n log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ... compute the stretch factor or maximum detour of trees and cycles in O(σ n log^(d+1) n) time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L_1 plane.
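
    For contrast with the subquadratic algorithms summarised above, a brute-force O(n²) computation of the stretch factor of a polygonal path under the L1 metric (vertex pairs only, which suffices for the stretch factor) looks like this in Python; the example path is illustrative:

        def l1(p, q):
            return abs(p[0] - q[0]) + abs(p[1] - q[1])

        def stretch_factor_path(vertices):
            n = len(vertices)
            # prefix[i] = length along the path from vertices[0] to vertices[i]
            prefix = [0.0]
            for i in range(1, n):
                prefix.append(prefix[-1] + l1(vertices[i - 1], vertices[i]))
            best = 1.0
            for i in range(n):
                for j in range(i + 1, n):
                    d = l1(vertices[i], vertices[j])
                    if d > 0:
                        best = max(best, (prefix[j] - prefix[i]) / d)
            return best

        # 3.0: the endpoints are L1-distance 2 apart but 6 apart along the path.
        print(stretch_factor_path([(0, 0), (2, 0), (2, 2), (0, 2)]))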

  2. Maximum Power Point Tracking Controller for Thermoelectric Generators with Peak Gain Control of Boost DC-DC Converters

    Science.gov (United States)

    Park, Jungyong; Kim, Shiho

    2012-06-01

    An analog maximum power point tracking (MPPT) circuit for a thermoelectric generator (TEG) is proposed. We show that the peak point of the voltage conversion gain of a boost DC-DC converter with an input voltage source having an internal resistor is the maximum power point of the TEG. The key characteristic of the proposed MPPT controller is that the duty ratio of the input clock pulse to the boost DC-DC converter shifts toward the maximum power point of the TEG by seeking the peak gain point of the boost DC-DC converters. The proposed MPPT technique provides a simple and useful analog MPPT solution, without employing digital microcontroller units.
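
    The claim that the peak of the boost converter's voltage gain coincides with the TEG maximum power point can be checked numerically under textbook idealisations (lossless converter in continuous conduction mode, purely resistive load; the component values below are arbitrary illustrative numbers, not from the paper):

        import numpy as np

        Vs, Rs, RL = 2.0, 1.0, 25.0                # TEG open-circuit voltage, internal R, load
        D = np.linspace(0.01, 0.99, 9801)          # candidate duty ratios
        u = 1.0 - D
        Rin = RL * u**2                            # input resistance of an ideal boost stage
        gain = RL * u / (Rs + RL * u**2)           # Vout / Vs including the source resistance
        Pin = Vs**2 * Rin / (Rs + Rin) ** 2        # power extracted from the TEG

        print(D[np.argmax(gain)], D[np.argmax(Pin)])   # both peak near D = 1 - sqrt(Rs/RL) = 0.8

    Under these assumptions both curves peak at the same duty ratio, where the converter's input resistance equals the TEG internal resistance, which is the matched-load (maximum power) condition.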

  3. SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH

    Directory of Open Access Journals (Sweden)

    Pandya A M

    2011-04-01

    Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate their possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum femoral length was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female bones, and 453.35 mm and 420.44 mm for left male and female bones, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and those less than 379.99 mm were definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and those less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
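
    Applied directly, the demarking points reported above give a simple side-specific decision rule (a sketch; lengths in mm, with values between the two demarking points left indeterminate):

        # Sex assignment from maximum femoral length using the demarking points
        # quoted in the record above.
        DEMARKING = {"right": (379.99, 476.70), "left": (385.73, 484.49)}

        def sex_from_femur_length(length_mm, side):
            female_dp, male_dp = DEMARKING[side]
            if length_mm > male_dp:
                return "male"
            if length_mm < female_dp:
                return "female"
            return "indeterminate"

        print(sex_from_femur_length(480.0, "right"))   # male
        print(sex_from_femur_length(375.0, "left"))    # female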

  4. Exploring high-density baryonic matter: Maximum freeze-out density

    Energy Technology Data Exchange (ETDEWEB)

    Randrup, Joergen [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)

    2016-08-15

    The hadronic freeze-out line is calculated in terms of the net baryon density and the energy density instead of the usual T and μ_B. This analysis makes it apparent that the freeze-out density exhibits a maximum as the collision energy is varied. This maximum freeze-out density has μ_B = 400 - 500 MeV, which is above the critical value, and it is reached for a fixed-target bombarding energy of 20-30 GeV/N well within the parameters of the proposed NICA collider facility. (orig.)

  5. Estimation of bias errors in measured airplane responses using maximum likelihood method

    Science.gov (United States)

    Klein, Vladiaslav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  6. MAXIMUM PRINCIPLE FOR THE OPTIMAL CONTROL OF AN ABLATION-TRANSPIRATION COOLING SYSTEM

    Institute of Scientific and Technical Information of China (English)

    SUN Bing; GUO Baozhu

    2005-01-01

    This paper is concerned with an optimal control problem for an ablation-transpiration cooling control system with a Stefan-Signorini boundary condition. The existence of a weak solution of the system is considered. The Dubovitskii and Milyutin approach is adopted in the investigation of Pontryagin's maximum principle for the system. The necessary optimality condition is presented for the problem with a fixed final horizon and phase constraints.

  7. Asymptotic normality and strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN; Changming; ZHAO; Lincheng; WEI; Chengdong

    2006-01-01

    In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n Z_i Z_i', a moment condition on the responses as weak as possible and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.

  8. A novel hybrid-maximum neural network in stereo-matching process.

    Science.gov (United States)

    Laskowski, Lukasz

    2013-01-01

    In the present paper, a completely innovative artificial neural network architecture based on the Hopfield structure for solving the stereo-matching problem is described: a hybrid neural network consisting of the classical analog Hopfield neural network and the Maximum Neural Network. The application of this kind of structure as part of an assistive device for visually impaired individuals is considered. The role of the analog Hopfield network is to find the attraction area of the global minimum, whereas the Maximum Neural Network finds the accurate location of this minimum. The network presented here is characterized by an extremely high processing rate with the same accuracy as a classical Hopfield-like network, which makes it possible to use this kind of structure as part of systems working in real time. The network was tested experimentally on real stereo pictures as well as simulated stereo images, enabling error calculation and direct comparison with the classic analog Hopfield neural network as well as other networks proposed in the literature.

  9. Infinite-randomness fixed points for chains of non-Abelian quasiparticles.

    Science.gov (United States)

    Bonesteel, N E; Yang, Kun

    2007-10-05

    One-dimensional chains of non-Abelian quasiparticles described by SU(2)_k Chern-Simons-Witten theory can enter random singlet phases analogous to that of a random chain of ordinary spin-1/2 particles (corresponding to k → ∞). For k = 2 this phase provides a random singlet description of the infinite-randomness fixed point of the critical transverse-field Ising model. The entanglement entropy of a region of size L in these phases scales as S(L) ≈ (ln d / 3) log_2 L for large L, where d is the quantum dimension of the particles.
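
    As a quick check of the quoted scaling (using the standard value d = √2 for the non-Abelian quasiparticle at k = 2, a value not stated explicitly in the abstract), one gets

        S(L) \simeq \frac{\ln\sqrt{2}}{3}\,\log_2 L \;=\; \frac{\ln 2}{6}\,\log_2 L,

    consistent with the known entanglement entropy of the infinite-randomness critical point of the transverse-field Ising chain.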

  10. Photoelectrochemical properties of a dinitrogen-fixing iron titanate thin film.

    Science.gov (United States)

    Rusina, Olga; Macyk, Wojciech; Kisch, Horst

    2005-06-02

    The band edge positions of a nitrogen-fixing nanostructured semiconductor thin film are determined both in the dark through spectroelectrochemistry and under irradiation by photovoltage measurements. Both methods afford the same result indicating that the film in addition to the dinitrogen-fixing phase Fe2Ti2O7 also contains titanium dioxide. Thus, both methods enable the analysis of a mixture of semiconducting thin films. For pH 7, values of -0.4 and +1.6 V were estimated for the conduction and valence band edge of the iron titanate film, respectively. A 3-fold photocurrent increase by methanol was observed only when the film was calcined at 600 degrees C but not below or above this temperature; the films calcined at temperatures other than 600 degrees C were also inactive in the photoreduction of dinitrogen. For a matter of comparison, an iron(III) oxide film was characterized analogously.

  11. Maximum entropy, word-frequency, Chinese characters, and multiple meanings.

    Science.gov (United States)

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text has quite differently shaped distributions when written in words and when written in Chinese characters, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text into another language. Another consequence of the RGF prediction is that taking part of a long text will change the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction has no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed.
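
    The three a priori quantities that the RGF prediction needs can be read off a text with a few lines of Python (whitespace tokenisation is used here purely for illustration):

        from collections import Counter

        def rgf_inputs(text):
            counts = Counter(text.split())
            M = sum(counts.values())          # total word tokens
            N = len(counts)                   # distinct words
            k_max = max(counts.values())      # occurrences of the most common word
            return M, N, k_max

        print(rgf_inputs("the cat sat on the mat because the mat was warm"))  # (11, 8, 3)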

  12. Maximum magnitude earthquakes induced by fluid injection

    Science.gov (United States)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
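
    A worked example of the stated bound (the crustal rigidity of 3 x 10^10 Pa and the Hanks-Kanamori moment-to-magnitude conversion are standard assumptions added here, not quoted from the abstract):

        import math

        def max_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
            # Upper bound on seismic moment: rigidity times injected volume (N*m).
            m0_max = rigidity_pa * injected_volume_m3
            # Convert to moment magnitude with the Hanks-Kanamori relation.
            return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

        print(round(max_magnitude(1.0e6), 2))   # about 10^6 m^3 injected -> Mw roughly 4.9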

  13. Computing Rooted and Unrooted Maximum Consistent Supertrees

    CERN Document Server

    van Iersel, Leo

    2009-01-01

    A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.

  14. Maximum permissible voltage of YBCO coated conductors

    Science.gov (United States)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.

    2014-06-01

    A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.

  15. Inadvertent tooth movement with fixed lingual retainers.

    Science.gov (United States)

    Shaughnessy, Timothy G; Proffit, William R; Samara, Said A

    2016-02-01

    Fixed retainers are effective in maintaining the alignment of the anterior teeth more than 90% of the time, but they can produce inadvertent tooth movement that in the most severe instances requires orthodontic retreatment managed with a periodontist. This is different from relapse into crowding when a fixed retainer is lost. These problems arise when the retainer breaks but remains bonded to some or all teeth, or when an intact retainer is distorted by function or was not passive when bonded. In both instances, torque of the affected teeth is the predominant outcome. A fixed retainer made with dead soft wire is the least likely to create torque problems but is the most likely to break. Highly flexible twist wires bonded to all the teeth appear to be the most likely to produce inadvertent tooth movement, but this also can occur with stiffer wires bonded only to the canines. Orthodontists, general dentists, and patients should be aware of possible problems with fixed retainers, especially those with all teeth bonded, because the patient might not notice partial debonding. Regular observations of patients wearing fixed retainers by orthodontists in the short term and family dentists in the long term are needed.

  16. Fixed point theory in metric type spaces

    CERN Document Server

    Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco

    2015-01-01

    Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...

  17. The computation of fixed points and applications

    CERN Document Server

    Todd, Michael J

    1976-01-01

    Fixed-point algorithms have diverse applications in economics, optimization, game theory and the numerical solution of boundary-value problems. Since Scarf's pioneering work [56,57] on obtaining approximate fixed points of continuous mappings, a great deal of research has been done in extending the applicability and improving the efficiency of fixed-point methods. Much of this work is available only in research papers, although Scarf's book [58] gives a remarkably clear exposition of the power of fixed-point methods. However, the algorithms described by Scarf have been superseded by the more sophisticated restart and homotopy techniques of Merrill [~8,~9] and Eaves and Saigal [1~,16]. To understand the more efficient algorithms one must become familiar with the notions of triangulation and simplicial approximation, whereas Scarf stresses the concept of primitive set. These notes are intended to introduce to a wider audience the most recent fixed-point methods and their applications. Our approach is therefore ...

  18. Fixed points of symplectic periodic flows

    CERN Document Server

    Pelayo, Alvaro

    2010-01-01

    The study of fixed points is a classical subject in geometry and dynamics. If the circle acts in a Hamiltonian fashion on a compact symplectic manifold M, then it is classically known that there are at least 1 + dim(M)/2 fixed points; this follows from Morse theory for the momentum map of the action. In this paper we use Atiyah-Bott-Berline-Vergne (ABBV) localization in equivariant cohomology to prove that this conclusion also holds for symplectic circle actions with non-empty fixed sets, as long as the Chern class map is somewhere injective -- the Chern class map assigns to a fixed point the sum of the action weights at the point. We complement this result with less sharp lower bounds on the number of fixed points, under no assumptions; from a dynamical systems viewpoint, our results imply that there is no symplectic periodic flow with exactly one or two equilibrium points on a compact manifold of dimension at least eight.

  19. Playing with a double-edged sword: Analogies in biochemistry

    Science.gov (United States)

    Orgill, Marykay

    Analogy pervades our everyday reasoning. No situation we encounter is exactly like a situation we have encountered previously, and our ability to learn and survive in the world is based on our ability to find similarities between past and present situations and use the knowledge we have gained from past situations to manage current situations. Analogies can be powerful teaching tools because they can make new material intelligible to students by comparing it to material that is already familiar. It is clear, though, that not all analogies are good and that not all good analogies are useful to all students. In this study, I have used textbook analysis, classroom observations, student interviews and instructor interviews to determine the role that analogies play in biochemistry learning. Analogies are an important teaching technique in biochemistry classes, being used more often in both biochemistry classes and textbooks than they are in high school chemistry classes and textbooks. Most biochemistry students like, pay particular attention to, and remember the analogies their instructors provide; and they use these analogies to understand, visualize, and recall information from class. Even though students like and use analogies, they do not understand what analogies are or the mechanism by which they improve learning. For the students, analogies are simply any teaching technique that eases understanding, visualization, or recall. Instructors, on the other hand, have a good understanding of what analogies are and of how they should be presented in class; but they do not use analogies as effectively as they should. They do not plan, explain or identify the limitations of the analogies they use in class. However, regardless of how effectively instructors present analogies in class, this study indicates that, in general, analogies are useful in promoting understanding, visualization, recall, and motivation in biochemistry students at all levels. They would be even more

  20. Ecological consequences of the expansion of N2-fixing plants in cold biomes

    Science.gov (United States)

    Hiltbrunner, Erika; Aerts, Rien; Bühlmann, Tobias; Huss-Danell, Kerstin; Magnusson, Borgthor; Myrold, David D.; Reed, Sasha C.; Sigurdsson, Bjarni D.; Körner, Christian

    2014-01-01

    Research in warm-climate biomes has shown that invasion by symbiotic dinitrogen (N2)-fixing plants can transform ecosystems in ways analogous to the transformations observed as a consequence of anthropogenic, atmospheric nitrogen (N) deposition: declines in biodiversity, soil acidification, and alterations to carbon and nutrient cycling, including increased N losses through nitrate leaching and emissions of the powerful greenhouse gas nitrous oxide (N2O). Here, we used literature review and case study approaches to assess the evidence for similar transformations in cold-climate ecosystems of the boreal, subarctic and upper montane-temperate life zones. Our assessment focuses on the plant genera Lupinus and Alnus, which have become invasive largely as a consequence of deliberate introductions and/or reduced land management. These cold biomes are commonly located in remote areas with low anthropogenic N inputs, and the environmental impacts of N2-fixer invasion appear to be as severe as those from anthropogenic N deposition in highly N polluted areas. Hence, inputs of N from N2 fixation can affect ecosystems as dramatically or even more strongly than N inputs from atmospheric deposition, and biomes in cold climates represent no exception with regard to the risk of being invaded by N2-fixing species. In particular, the cold biomes studied here show both a strong potential to be transformed by N2-fixing plants and a rapid subsequent saturation in the ecosystem’s capacity to retain N. Therefore, analogous to increases in N deposition, N2-fixing plant invasions must be deemed significant threats to biodiversity and to environmental quality.

  2. Analog Computation by DNA Strand Displacement Circuits.

    Science.gov (United States)

    Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John

    2016-08-19

    DNA circuits have been widely used to develop biological computing devices because of their high programmability and versatility. Here, we propose an architecture for the systematic construction of DNA circuits for analog computation based on DNA strand displacement. The elementary gates in our architecture include addition, subtraction, and multiplication gates. The input and output of these gates are analog, which means that they are directly represented by the concentrations of the input and output DNA strands, respectively, without requiring a threshold for converting to Boolean signals. We provide detailed domain designs and kinetic simulations of the gates to demonstrate their expected performance. On the basis of these gates, we describe how DNA circuits to compute polynomial functions of inputs can be built. Using Taylor Series and Newton Iteration methods, functions beyond the scope of polynomials can also be computed by DNA circuits built upon our architecture.
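
    The gate-level idea is easy to mimic numerically. The sketch below (an idealized software stand-in, not the strand-displacement chemistry or the paper's actual designs) treats concentrations as nonnegative real numbers and composes only addition, subtraction and multiplication gates to evaluate a polynomial and, via Newton iteration, the non-polynomial function 1/a.

      # Idealized analog gates: values stand in for strand concentrations.
      def add(x, y): return x + y
      def sub(x, y): return max(x - y, 0.0)   # concentrations cannot go negative
      def mul(x, y): return x * y

      def poly(x, coeffs):
          """Evaluate c0 + c1*x + c2*x^2 + ... using only add/mul gates (Horner form)."""
          acc = 0.0
          for c in reversed(coeffs):
              acc = add(mul(acc, x), c)
          return acc

      def reciprocal(a, y0=0.1, steps=8):
          """Approximate 1/a with Newton iteration y <- y*(2 - a*y),
          which needs only sub and mul gates."""
          y = y0
          for _ in range(steps):
              y = mul(y, sub(2.0, mul(a, y)))
          return y

      print(poly(0.3, [1.0, 2.0, 4.0]))   # 1 + 2*0.3 + 4*0.09 = 1.96
      print(reciprocal(4.0))              # converges to ~0.25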

  3. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and the paper suggests the following two alternatives: (i) cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (ii) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  4. Parabolic flight as a spaceflight analog.

    Science.gov (United States)

    Shelhamer, Mark

    2016-06-15

    Ground-based analog facilities have had wide use in mimicking some of the features of spaceflight in a more-controlled and less-expensive manner. One such analog is parabolic flight, in which an aircraft flies repeated parabolic trajectories that provide short-duration periods of free fall (0 g) alternating with high-g pullout or recovery phases. Parabolic flight is unique in being able to provide true 0 g in a ground-based facility. Accordingly, it lends itself well to the investigation of specific areas of human spaceflight that can benefit from this capability, which predominantly includes neurovestibular effects, but also others such as human factors, locomotion, and medical procedures. Applications to research in artificial gravity and to effects likely to occur in upcoming commercial suborbital flights are also possible.

  5. A new program on digitizing analog seismograms

    Science.gov (United States)

    Wang, Maofa; Jiang, Qigang; Liu, Qingjie; Huang, Meng

    2016-08-01

    Historical seismograms contain a great variety of useful information which can be used in the study of earthquakes. It is necessary for researchers to digitize analog records and extract the information just as modern computing analysis requires. Firstly, an algorithm based on the color scene field method is presented in order to digitize analog seismograms. Secondly, an interactive software program using C# has been developed to digitize seismogram traces from raster files quickly and accurately. The program can deal with gray-scale images stored in a suitable file format and it offers two different methods: manual digitization and automatic digitization. The test result of the program shows that the methods presented in this paper can lead to good performance.
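
    A naive automatic pass over such a raster can be sketched in a few lines (our illustration, in Python rather than the authors' C#, and much cruder than their color scene field method): for every image column, take the darkest pixel as the trace ordinate and convert pixel coordinates to time and amplitude with user-supplied scale factors.

      import numpy as np

      def digitize_trace(gray, dt_per_px, amp_per_px, t0=0.0, baseline_row=None):
          """gray: 2-D uint8 array (0 = black ink, 255 = white paper).
          Returns (times, amplitudes) of the darkest pixel in every column."""
          rows = gray.argmin(axis=0)                 # darkest row per column
          if baseline_row is None:
              baseline_row = gray.shape[0] // 2      # assume the trace is centered vertically
          times = t0 + dt_per_px * np.arange(gray.shape[1])
          amplitudes = amp_per_px * (baseline_row - rows)   # up = positive
          return times, amplitudes

      # toy example: a synthetic 100x400 "scan" with a sinusoidal dark trace
      img = np.full((100, 400), 255, dtype=np.uint8)
      cols = np.arange(400)
      img[(50 + 30 * np.sin(cols / 20.0)).astype(int), cols] = 0
      t, a = digitize_trace(img, dt_per_px=0.01, amp_per_px=1.0)
      print(t[:3], a[:3])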

  6. Synthesis and Evaluation of Desmethyl Azumamide Analogs

    DEFF Research Database (Denmark)

    Maolanon, Alex

    the azumamide analogs; however, removal of the methyl group had a significant impact relative to the natural products. To understand this effect, the NMR structure was solved with the assistance from Casper Hoeck and Charlotte H. Gotfredsen and docked conformations were obtained from Niels J. Christensen...... and Peter Fristrup. Compared to the natural compounds, the 3Dstructure of the scaffold in the azumamide analogs were similar. Although a conclusion was not found, the preliminary docking results indicated favorable lipophilic interaction with the methyl group in the azumamides. Largazole is another...... macrocylic natural product with HDAC inhibitory activity. The compound has a thioester functionality in the side chain, which is hydrolyzed before interaction with the enzymes. In the attempt to mimic the prodrug nature of largazole, compounds containing a thiol group were designed, as it was hypothesized...

  7. Interference Alignment with Analog Channel State Feedback

    CERN Document Server

    Ayach, Omar El

    2010-01-01

    Interference alignment (IA) is a multiplexing gain optimal transmission strategy for the interference channel with an arbitrary number of users. While the achieved sum rate with IA is much higher than previously thought possible, the improvement comes at the cost of requiring network channel state information at the transmitters. This can be achieved by explicit feedback, a flexible yet costly approach that incurs large overhead and limits throughput. We propose using analog feedback as an alternative to limited feedback or reciprocity based alignment. We show that the full multiplexing gain observed with perfect channel knowledge is preserved by analog feedback and the mean loss in sum rate is bounded by a constant when signal-to-noise ratio is comparable in both forward and feedback channels. When such feedback quality is not quite possible, a fraction of the degrees of freedom is achieved. We consider the overhead of training and feedback and use this framework to optimize the system's effective throughput...

  8. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \\cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \\cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  9. Maximum Multiflow in Wireless Network Coding

    CERN Document Server

    Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao

    2012-01-01

    In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.

  10. Quantum Electric Circuits Analogous to Ballistic Conductors

    OpenAIRE

    2007-01-01

    The conductance steps in a constricted two-dimensional electron gas and the minimum conductivity in graphene are related to a new uncertainty relation between electric charge and conductance in a quantized electric circuit that mimics the electric transport in mesoscopic systems. This uncertainty relation makes specific use of the discreteness of electric charge. Quantum electric circuits analogous to both constricted two-dimensional electron gas and graphene are introduced. In the latter cas...

  11. An introduction to analog and digital communications

    CERN Document Server

    Haykin, Simon

    2012-01-01

    The second edition of this accessible book provides readers with an introductory treatment of communication theory as applied to the transmission of information-bearing signals. While it covers analog communications, the emphasis is placed on digital technology. It begins by presenting the functional blocks that constitute the transmitter and receiver of a communication system. Readers will next learn about electrical noise and then progress to multiplexing and multiple access techniques.

  12. Analog of landau Levels to Electric Dipole

    CERN Document Server

    Ribeiro, L R; Nascimento, J R; Furtado, Claudio

    2006-01-01

    In this article we discuss the analogy between the dynamics of a neutral particle carrying an electric dipole, in the presence of a suitable magnetic field configuration, and Landau level quantization for a charged particle. We analyze this quantization based on the He-McKellar-Wilkens interaction, in a way similar to that in which Ericsson and Sj\"oqvist [Phys. Rev. A {\bf 65}, 013607 (2001)] analyzed the Landau-Aharonov-Casher effect. The energy levels, eigenfunctions and eigenvalues are obtained.

  13. Neurotoxic Alkaloids: Saxitoxin and Its Analogs

    OpenAIRE

    Mihali, Troco K; Moffitt, Michelle C.; Neilan, Brett A.; Maria Wiese; D’Agostino, Paul M.

    2010-01-01

    Saxitoxin (STX) and its 57 analogs are a broad group of natural neurotoxic alkaloids, commonly known as the paralytic shellfish toxins (PSTs). PSTs are the causative agents of paralytic shellfish poisoning (PSP) and are mostly associated with marine dinoflagellates (eukaryotes) and freshwater cyanobacteria (prokaryotes), which form extensive blooms around the world. PST producing dinoflagellates belong to the genera Alexandrium, Gymnodinium and Pyrodinium whilst production has been identified...

  14. Analog-digital codesign using coarse quantization

    Science.gov (United States)

    Kokkeler, Andre Bernardus Joseph

    With regard to electronic systems, two important trends can be observed. The first trend is generally known as Moore's law: the digital processing capacity per chip is increasing by a factor of two every 18 months. Another part of the first trend is that the performance increase of integrated linear or analog processing is slow, a factor of two every 4.7 years. The second trend is that the rate of data exchange between electronic systems is increasing rapidly. Because of these high data rates, the design of analog-to-digital converters (ADCs) in particular is demanding. For a specific set of applications, the requirements for the ADC can be relaxed by reducing the resolution of the conversion from analog to digital. Within these specific applications, signal characteristics rather than instantaneous values of the signal are determined. Reducing the resolution to an extreme extent is called 'coarse quantization'. The design of mixed signal systems is guided by a Y-chart design methodology. Analog-Digital Codesign, guided by the Y-chart approach, leads to mixed-signal systems with reduced costs compared to systems designed with the traditional methodology. The Y-chart approach also enables the use of coarse quantization as an additional design parameter to further reduce costs. This is illustrated by two case studies. The first case study concentrates on the design of a digital predistorter for Power Amplifiers (PAs) in telecommunication transmitters. In the second case study, we reconsider the design of a part of a Radio Telescope, used for Radio Astronomy. This part is called the Tied Array Adder and it sums signals from different telescopes. Both case studies show that coarse quantization can lead to mixed-signal systems with lower costs, but system parameters will change. The explicit reconsideration of functional specifications, facilitated by the Y-chart approach, is therefore essential for the introduction of coarse quantization.
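
    A standard textbook illustration of why coarse quantization can suffice when only signal characteristics are needed (our sketch, not taken from the thesis): the correlation of two Gaussian signals can be recovered from their 1-bit (sign-only) versions through the arcsine/Van Vleck relation rho = sin(pi/2 * rho_1bit).

      import numpy as np

      rng = np.random.default_rng(0)
      rho_true = 0.6
      n = 200_000

      # two correlated zero-mean Gaussian signals
      x = rng.standard_normal(n)
      y = rho_true * x + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

      # extreme coarse quantization: keep only the sign (1 bit per sample)
      xq, yq = np.sign(x), np.sign(y)

      rho_1bit = np.mean(xq * yq)                 # correlation of the 1-bit streams
      rho_est = np.sin(np.pi / 2 * rho_1bit)      # Van Vleck (arcsine law) correction

      print(f"true {rho_true:.3f}  raw 1-bit {rho_1bit:.3f}  corrected {rho_est:.3f}")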

  15. HOW TO USE ANALOGIES FOR BREAKTHROUGH INNOVATIONS

    OpenAIRE

    CORNELIUS HERSTATT; KATHARINA KALOGERAKIS

    2005-01-01

    Analogies can trigger breakthrough ideas in new product development. Numerous examples demonstrate that substantial innovations often result from transferring problem solutions from one industry or domain to another. For instance, the designers of the new running shoe generation of Nike, "Nike SHOX", use the same suspension concept as the technologies applied in Formula 1 racing cars, and the biological Lotus-effect led to the development of various self-cleaning surfaces. Academic resea...

  16. How to use analogies for breakthrough innovations

    OpenAIRE

    Schild, Katharina; Herstatt, Cornelius; Lüthje, Christian

    2004-01-01

    Analogies can trigger breakthrough ideas in new product development. Numerous examples demonstrate that substantial innovations often result from transferring problem solutions from one industry or domain to another. For instance, the designers of the new running shoe generation of Nike, “Nike SHOX”, use the same suspension concept as the technologies applied in Formula 1 racing cars, and the biological Lotus-effect led to the development of various self-cleaning surfaces. Academic resea...

  17. Transition State Analog Inhibitors for Esterases.

    Science.gov (United States)

    1983-06-02

    Propanones." SCIENTIFIC PERSONNEL SUPPORTED BY THIS PROJECT AND DEGREES AWARDED DURING THIS REPORTING PERIOD Dr. Alan Dafforn Dr. Antoon Brouwer Dr. John P...294, Raven Press, New York. 11. Hansch, C. and Leo , A., (1979) "Substituent Constants for Correlation Analysis in Chemistry and Biology," pp. 69-70...BORONIC ACIDS AS 1INSITION STATE ANALOG INHIBITORS OF ACTYLCHOLINESTERASE by Alan Dafforn and Antoon C. Brouwer Department of Chemistry Bowling Green

  18. Survey of Evaluated Isobaric Analog States

    Energy Technology Data Exchange (ETDEWEB)

    MacCormick, M., E-mail: maccorm@ipno.in2p3.fr [Institut de Physique Nucléaire, CNRS/IN2P3, Université Paris-Sud, 91406 Orsay CEDEX (France); Audi, G. [Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse, CNRS/IN2P3, Université Paris-Sud, Bât. 108, F-91405 Orsay Campus (France)

    2014-06-15

    Isobaric analog states (IAS) can be used to estimate the masses of members belonging to the same isospin multiplet. Experimental and estimated IAS have been used frequently within the Atomic Mass Evaluation (AME) in the past, but the associated set of evaluated masses have been published for the first time in AME2012 and NUBASE2012. In this paper the current trends of the isobaric multiplet mass equation (IMME) coefficients are shown. The T = 2 multiplet is used as a detailed illustration.
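
    The IMME mentioned here is the quadratic form ME(Tz) = a + b*Tz + c*Tz^2, so extracting its coefficients amounts to a plain polynomial fit. The sketch below is ours and the mass excesses are invented placeholders, not evaluated data; it merely shows the mechanics for a hypothetical T = 2 multiplet.

      import numpy as np

      # Hypothetical T = 2 multiplet: isospin projections and mass excesses (keV, made up).
      Tz = np.array([-2, -1, 0, 1, 2], dtype=float)
      mass_excess = np.array([-7100.0, -10300.0, -13300.0, -16100.0, -18700.0])

      # IMME: ME(Tz) = a + b*Tz + c*Tz^2  ->  quadratic least-squares fit
      c, b, a = np.polyfit(Tz, mass_excess, deg=2)
      print(f"a = {a:.1f} keV, b = {b:.1f} keV, c = {c:.1f} keV")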

  19. Biochemical characterization and cellular imaging of a novel, membrane permeable fluorescent cAMP analog

    Directory of Open Access Journals (Sweden)

    Zaccolo Manuela

    2008-06-01

    Full Text Available Abstract Background A novel fluorescent cAMP analog (8-[Pharos-575]-adenosine-3',5'-cyclic monophosphate) was characterized with respect to its spectral properties, its ability to bind to and activate three main isoenzymes of the cAMP-dependent protein kinase (PKA-Iα, PKA-IIα, PKA-IIβ) in vitro, its stability towards phosphodiesterase and its ability to permeate into cultured eukaryotic cells using resonance energy transfer based indicators, and conventional fluorescence imaging. Results The Pharos fluorophore is characterized by a Stokes shift of 42 nm with an absorption maximum at 575 nm and the emission peaking at 617 nm. The quantum yield is 30%. Incubation of the compound with RIIα and RIIβ subunits increases the amplitude of excitation and absorption maxima significantly; no major change was observed with RIα. In vitro binding of the compound to the RIα subunit and activation of the PKA-Iα holoenzyme was essentially equivalent to cAMP; RII subunits bound the fluorescent analog up to ten times less efficiently, resulting in about two times reduced apparent activation constants of the holoenzymes compared to cAMP. The cellular uptake of the fluorescent analog was investigated by cAMP indicators. It was estimated that about 7 μM of the fluorescent cAMP analog is available to the indicator after one hour of incubation and that about 600 μM of the compound had to be added to intact cells to half-maximally dissociate a PKA type IIα sensor. Conclusion The novel analog combines good membrane permeability (comparable to 8-Br-cAMP) with superior spectral properties of a modern, red-shifted fluorophore. GFP-tagged regulatory subunits of PKA and the analog co-localized. Furthermore, it is a potent, PDE-resistant activator of PKA-I and -II, suitable for in vitro applications and spatial distribution evaluations in living cells.

  20. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  1. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected...... in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...

  2. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain...... boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  3. Maximum confidence measurements via probabilistic quantum cloning

    Institute of Scientific and Technical Information of China (English)

    Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu

    2013-01-01

    Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.

  4. The Wiener maximum quadratic assignment problem

    CERN Document Server

    Cela, Eranda; Woeginger, Gerhard J

    2011-01-01

    We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.

  5. Realization of Minimum and Maximum Gate Function in Ta2O5-based Memristive Devices

    Science.gov (United States)

    Breuer, Thomas; Nielen, Lutz; Roesgen, Bernd; Waser, Rainer; Rana, Vikas; Linn, Eike

    2016-04-01

    Redox-based resistive switching devices (ReRAM) are considered key enablers for future non-volatile memory and logic applications. Functionally enhanced ReRAM devices could enable new hardware concepts, e.g. logic-in-memory or neuromorphic applications. In this work, we demonstrate the implementation of ReRAM-based fuzzy logic gates using Ta2O5 devices to enable analogous Minimum and Maximum operations. The realized gates consist of two anti-serially connected ReRAM cells offering two inputs and one output. The cells offer an endurance up to 10^6 cycles. By means of exemplary input signals, each gate functionality is verified and signal constraints are highlighted. This realization could improve the efficiency of analogous processing tasks such as sorting networks in the future.
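
    To make the closing remark about sorting networks concrete, the sketch below (ours, a purely software analogy to the fuzzy Minimum/Maximum gates) sorts four analog values using nothing but pairwise MIN/MAX comparator stages arranged as a fixed network.

      def comparator(a, b):
          """One MIN/MAX stage: the pair leaves in (min, max) order."""
          return min(a, b), max(a, b)

      def sort4(x0, x1, x2, x3):
          """A fixed 5-comparator sorting network for four inputs."""
          x0, x1 = comparator(x0, x1)
          x2, x3 = comparator(x2, x3)
          x0, x2 = comparator(x0, x2)
          x1, x3 = comparator(x1, x3)
          x1, x2 = comparator(x1, x2)
          return x0, x1, x2, x3

      print(sort4(0.8, 0.1, 0.5, 0.3))   # (0.1, 0.3, 0.5, 0.8)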

  6. Theory of analogous force on number sets

    Science.gov (United States)

    Canessa, Enrique

    2003-10-01

    A general statistical thermodynamic theory that considers given sequences of x-integers to play the role of particles of known type in an isolated elastic system is proposed. By also considering some explicit discrete probability distributions px for natural numbers, we claim that they lead to a better understanding of probabilistic laws associated with number theory. Sequences of numbers are treated as the size measure of finite sets. By considering px to describe complex phenomena, the theory allows one to derive a distinct analogous force fx on number sets, proportional to (∂px/∂x)_T at an analogous system temperature T. In particular, this leads to an understanding of the uneven distribution of integers of random sets in terms of analogous scale invariance and a screened inverse square force acting on the significant digits. The theory also allows us to establish recursion relations to predict sequences of Fibonacci numbers and to give an answer to the interesting theoretical question of the appearance of Benford's law in Fibonacci numbers. A possible relevance to prime numbers is also analyzed.

  7. Fixed drug eruption due to paracetamol

    Directory of Open Access Journals (Sweden)

    Anjali Kushwah

    2013-12-01

    Full Text Available Fixed drug eruption is a common type of drug eruption seen in dermatology OPDs. Usually it is seen with sulphonamides, salicylates, tetracyclines, oxyphenbutazones, dapsone, barbiturates, phenolphthalein, morphine, codeine, quinine, phenacetin, erythromycin, griseofulvin, mebendazole etc. We hereby report a case of fixed drug eruption due to a single dose of oral paracetamol in an otherwise healthy male, appearing one hour after consuming it. A provisional diagnosis of paracetamol-induced fixed drug eruption was made. Paracetamol was stopped and the patient was advised never to take paracetamol in the future. The patient was managed with prednisolone 10 mg/day, cetirizine 10 mg/day, amoxicillin 500 mg twice a day, and mometasone + fusidic acid cream applied over the lesions. [Int J Basic Clin Pharmacol 2013; 2(6): 833-835]

  8. Fixed target facility at the SSC

    Energy Technology Data Exchange (ETDEWEB)

    Loken, S.C.; Morfin, J.G.

    1985-01-01

    The question of whether a facility for fixed target physics should be provided at the SSC must be answered before the final technical design of the SSC can be completed, particularly if the eventual form of extraction would influence the magnet design. To this end, an enthusiastic group of experimentalists, theoreticians and accelerator specialists have studied this point. The accelerator physics issues were addressed by a group led by E. Colton whose report is contained in these proceedings. The physics addressable by fixed target was considered by many of the Physics area working groups and in particular by the Structure Function Group. This report is the summary of the working group which considered various SSC fixed target experiments and determined which types of beams and detectors would be required. 13 references, 5 figures.

  9. Fixed point theory of parametrized equivariant maps

    CERN Document Server

    Ulrich, Hanno

    1988-01-01

    The first part of this research monograph discusses general properties of G-ENRBs - Euclidean Neighbourhood Retracts over B with action of a compact Lie group G - and their relations with fibrations, continuous submersions, and fibre bundles. It thus addresses equivariant point set topology as well as equivariant homotopy theory. Notable tools are vertical Jaworowski criterion and an equivariant transversality theorem. The second part presents equivariant cohomology theory showing that equivariant fixed point theory is isomorphic to equivariant stable cohomotopy theory. A crucial result is the sum decomposition of the equivariant fixed point index which provides an insight into the structure of the theory's coefficient group. Among the consequences of the sum formula are some Borsuk-Ulam theorems as well as some folklore results on compact Lie-groups. The final section investigates the fixed point index in equivariant K-theory. The book is intended to be a thorough and comprehensive presentation of its subjec...

  10. Using Analogs for Chemistry Problem Solving: Does It Increase Understanding?

    Science.gov (United States)

    Friedel, Arthur W.; And Others

    1990-01-01

    Discussed is the effectiveness of using analogies in chemistry instruction. Students' mathematics anxiety, spatial visualization skill, and proportional reasoning ability were found to be important aptitudes for determining chemistry achievement. The relationship between analogs and algorithms is described. (KR)

  11. Creative Analogy Use in a Heterogeneous Design Team

    DEFF Research Database (Denmark)

    Christensen, Bo; Ball, Linden J.

    2016-01-01

    the design dialogue derived from team members with highly disparate educational backgrounds. Our analyses revealed that analogies that matched (versus mismatched) educational backgrounds were generated and revisited more frequently, presumably because they were more accessible. Matching analogies were also...

  12. Duan's fixed point theorem: Proof and generalization

    Directory of Open Access Journals (Sweden)

    Arkowitz Martin

    2006-01-01

    Full Text Available Let $X$ be an H-space of the homotopy type of a connected, finite CW-complex, $f\colon X\to X$ any map and $p_k\colon X\to X$ the $k$th power map. Duan proved that $p_k f$ has a fixed point if $k\ge 2$. We give a new, short and elementary proof of this. We then use rational homotopy to generalize to spaces $X$ whose rational cohomology is the tensor product of an exterior algebra on odd dimensional generators with the tensor product of truncated polynomial algebras on even dimensional generators. The role of the power map is played by a $\lambda$-structure as defined by Hemmi-Morisugi-Ooshima. The conclusion is that $f\lambda$ and $\lambda f$ each has a fixed point.

  13. Fixed Wordsize Implementation of Lifting Schemes

    Science.gov (United States)

    Karp, Tanja

    2006-12-01

    We present a reversible nonlinear discrete wavelet transform with predefined fixed wordsize based on lifting schemes. Restricting the dynamic range of the wavelet domain coefficients due to a fixed wordsize may result in overflow. We show how this overflow has to be handled in order to maintain reversibility of the transform. We also perform an analysis on how large a wordsize of the wavelet coefficients is needed to perform optimal lossless and lossy compressions of images. The scheme is advantageous to well-known integer-to-integer transforms since the wordsize of adders and multipliers can be predefined and does not increase steadily. This also results in significant gains in hardware implementations.
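
    The flavor of "fixed wordsize yet still reversible" can be conveyed with a toy Haar-style lifting step (our sketch under simplifying assumptions, not the scheme analyzed in the paper): all arithmetic is done modulo 2^w, so overflow simply wraps around, and the inverse lifting undoes it exactly.

      W = 8
      M = 1 << W                       # fixed wordsize: all coefficients live in [0, 2^W)

      def forward(a, b):
          """Haar-style lifting with wrap-around (modular) overflow handling."""
          d = (b - a) % M              # predict step
          s = (a + (d >> 1)) % M       # update step
          return s, d

      def inverse(s, d):
          a = (s - (d >> 1)) % M
          b = (a + d) % M
          return a, b

      # perfect reconstruction even when intermediate values overflow the wordsize
      for a, b in [(10, 250), (200, 30), (255, 0)]:
          s, d = forward(a, b)
          assert inverse(s, d) == (a, b)
      print("all pairs reconstructed exactly with", W, "bit words")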

  14. Fix og færdig

    Directory of Open Access Journals (Sweden)

    Jens Pedersen

    2012-12-01

    Full Text Available Fix & Finish “WHO CAN FIX IT?” is an investigation of the needles left behind by drug users in Copenhagen’s Vesterbro district. Based on the praxiological methods developed by Annemarie Mol as well as processes of objectification as described by Daniel Miller, the used needle appears to be a multiple object that is related to opportunities, fear, good intentions and trash. This article is an invitation to study material culture and material practices as a part of semiotic and discursive analyses in order to sharpen a researcher’s analytical focus while remaining grounded in reality.

  15. Fixed and variable cost of automobiles

    DEFF Research Database (Denmark)

    Mulalic, Ismir; Rouwendal, Jan

    Recent empirical analyses have found strong reactions of car prices to changes in fuel costs. We develop a model of car quality choice to further investigate this relationship. We show that in the empirically relevant case quality characteristics increase fixed as well as variable costs, and our... model focuses on that situation. Quality aspects must then be an argument of the utility function and we introduce them in a general way. In equilibrium the marginal willingness to pay for these aspects must be equal to their marginal impact on fixed as well as variable costs. We estimate the marginal...

  16. Recurrent fixed drug eruption caused by citiolone.

    Science.gov (United States)

    de Barrio, M; Tornero, P; Prieto, A; Sainza, T; Zubeldia, J M; Herrero, T

    1997-01-01

    Citiolone (N-acetylhomocysteinethiolactone) is a thiolic-derived medication frequently used in Spain and in other countries as a mucolytic agent for the treatment of certain hepatic disorders. Mucolytic drugs have rarely been implicated in the fixed drug eruption etiology. We report on a patient who presented several episodes of fixed exanthema related to citiolone intake. The patch test with citiolone (10% in dimethyl sulfoxide) was negative. The diagnosis was confirmed by a positive controlled oral challenge test. Other mucolytic thiolic-derivatives (N-acetylcysteine) were tolerated by the patient, thus crossreactivity between these drugs seems to be unlikely.

  17. A Reasoning System using Inductive Inference of Analogical Union

    OpenAIRE

    Miyahara, Tetsuhiro

    1988-01-01

    Analogical reasoning derives a new fact based on the analogous facts previously known. Inductive inference is a process of gaining a general rule from examples. We propose a new reasoning system using inductive inference and analogical reasoning, which is applicable to intellectual information processing, and we characterize its power. Given an enumeration of paired examples, this system inductively infers a program representing the pairing and constructs an analogical union. It reasons by anal...

  18. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  19. Instance Optimality of the Adaptive Maximum Strategy

    NARCIS (Netherlands)

    L. Diening; C. Kreuzer; R. Stevenson

    2016-01-01

    In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s equation

  20. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  1. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  2. Maximum phonation time: variability and reliability.

    Science.gov (United States)

    Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W

    2010-05-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.

  3. Maximum Phonation Time: Variability and Reliability

    NARCIS (Netherlands)

    R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings

    2010-01-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v

  4. Analysis of Photovoltaic Maximum Power Point Trackers

    Science.gov (United States)

    Veerachary, Mummadi

    The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
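
    The coupling between load, duty ratio and operating point can be illustrated numerically (our sketch with a made-up PV curve, not the paper's analysis): an ideal buck converter in continuous conduction reflects a resistive load R to its input as R/D^2, so sweeping the duty ratio D moves the operating point along the i-v curve and the power-maximizing D can simply be read off.

      import numpy as np

      # Crude, purely illustrative PV i-v model (numbers invented).
      Isc, Voc = 5.0, 22.0
      def pv_current(v):
          return np.clip(Isc * (1.0 - (np.maximum(v, 0.0) / Voc) ** 12), 0.0, None)

      def operating_point(R_in):
          """Intersect the PV curve with the load line i = v / R_in seen at the panel."""
          v = np.linspace(0.0, Voc, 2000)
          k = np.argmin(np.abs(pv_current(v) - v / R_in))
          return v[k], pv_current(v[k])

      R_load = 2.0                          # ohms at the buck converter output
      duties = np.linspace(0.05, 1.0, 96)
      powers = []
      for D in duties:
          R_in = R_load / D**2              # ideal buck in CCM reflects R_load as R_load / D^2
          v, i = operating_point(R_in)
          powers.append(v * i)

      best = int(np.argmax(powers))
      print(f"power-maximizing duty ratio ~ {duties[best]:.2f}, extracted power ~ {powers[best]:.1f} W")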

  5. Hard graphs for the maximum clique problem

    NARCIS (Netherlands)

    Hoede, Cornelis

    1988-01-01

    The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra

  6. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  7. The effect of natural selection on the performance of maximum parsimony

    Directory of Open Access Journals (Sweden)

    Ofria Charles

    2007-06-01

    Full Text Available Abstract Background Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on Among Site Rate Variation (ASRV) and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome and some substitutions are more likely to occur than other substitutions. However, little is known about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small 4-taxa trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural
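
    For readers unfamiliar with what maximum parsimony actually optimizes, the sketch below (ours, a generic textbook routine rather than anything from the study's Avida pipeline) scores one site of a four-taxon alignment on a fixed tree with Fitch's small-parsimony algorithm; the tree with the lowest total score over all sites is the maximum parsimony tree.

      def fitch_score(tree, states):
          """Minimum number of state changes for one character on a rooted binary tree.
          tree: nested tuples of leaf names, e.g. (("A", "B"), ("C", "D"))."""
          def post(node):
              if isinstance(node, str):                  # leaf
                  return {states[node]}, 0
              (s_left, c_left), (s_right, c_right) = post(node[0]), post(node[1])
              inter = s_left & s_right
              if inter:                                  # Fitch: intersect if possible...
                  return inter, c_left + c_right
              return s_left | s_right, c_left + c_right + 1   # ...otherwise union, +1 change
          return post(tree)[1]

      site = {"A": "G", "B": "G", "C": "T", "D": "T"}
      print(fitch_score((("A", "B"), ("C", "D")), site))   # 1 change
      print(fitch_score((("A", "C"), ("B", "D")), site))   # 2 changes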

  8. Functional DNA: Teaching Infinite Series through Genetic Analogy

    Science.gov (United States)

    Kowalski, R. Travis

    2011-01-01

    This article presents an extended analogy that connects infinite sequences and series to the science of genetics, by identifying power series as "DNA for a function." This analogy allows standard topics such as convergence tests or Taylor approximations to be recast in a "forensic" light as mathematical analogs of genetic concepts such as DNA…

  9. The Role of Causal Models in Analogical Inference

    Science.gov (United States)

    Lee, Hee Seung; Holyoak, Keith J.

    2008-01-01

    Computational models of analogy have assumed that the strength of an inductive inference about the target is based directly on similarity of the analogs and in particular on shared higher order relations. In contrast, work in philosophy of science suggests that analogical inference is also guided by causal models of the source and target. In 3…

  10. Analogy Use in Eighth-Grade Mathematics Classrooms

    Science.gov (United States)

    Richland, Lindsey E.; Holyoak, Keith J.; Stigler, James W.

    2004-01-01

    Analogical reasoning has long been believed to play a central role in mathematics learning and problem solving (see Genter, Holyoak, & Kokinov, 2001); however, little is known about how analogy is used in everyday instructional contexts. This article examines analogies produced in naturally occurring U.S. mathematics lessons to explore…

  11. Analogical Instruction in Statistics: Implications for Social Work Educators

    Science.gov (United States)

    Thomas, Leela

    2008-01-01

    This paper examines the use of analogies in statistics instruction. Much has been written about the difficulty social work students have with statistics. To address this concern, Glisson and Fischer (1987) called for the use of analogies. Understanding of analogical problem solving has surged in the last few decades with the integration of…

  12. Anti-inflammatory effect of thalidomide dithiocarbamate and dithioate analogs.

    Science.gov (United States)

    Talaat, Roba; El-Sayed, Waheba; Agwa, Hussein S; Gamal-Eldeen, Amira M; Moawia, Shaden; Zahran, Magdy A H

    2015-08-05

    Thalidomide has anti-inflammatory, immunomodulatory, and anti-angiogenic properties. It has been used to treat a variety of cancers and autoimmune diseases. This study aimed to characterize anti-inflammatory activities of novel thalidomide analogs by exploring their effects on splenocytes proliferation and macrophage functions and their antioxidant activity. MTT assay was used to assess the cytotoxic effect of thalidomide analogs against splenocytes. Tumor necrosis factor (TNF-α) and nuclear factor kappa B (NF-κB-P65) were determined by enzyme-linked immunosorbent assay (ELISA). Nitric oxide (NO) was estimated by colorimetric assay. Antioxidant activity was examined by ORAC assay. Our results demonstrated that thalidomide dithioate analog 2 and thalidomide dithiocarbamate analog 4 produced a slight increase in splenocyte proliferation compared with thalidomide. Thalidomide dithiocarbamate analog 1 is a potent inhibitor of TNF-α production, whereas thalidomide dithiocarbamate analog 5 is a potent inhibitor of both TNF-α and NO. Analog 2 has a pronounced inhibitory effect on NF-κB-P65 production level. All thalidomide analogs showed prooxidant activity against hydroxyl (OH) radical. Analog 1 and thalidomide dithioate analog 3 have prooxidant activity against peroxyl (ROO) radical in relation to thalidomide. On the other hand, analog 4 has a potent scavenging capacity against peroxyl (ROO) radical compared with thalidomide. Taken together, the results of this study suggest that thalidomide analogs might have valuable anti-inflammatory activities with more pronounced effect than thalidomide itself.

  13. Analogies in Medicine: Valuable for Learning, Reasoning, Remembering and Naming

    Science.gov (United States)

    Pena, Gil Patrus; Andrade-Filho, Jose de Souza

    2010-01-01

    Analogies are important tools in human reasoning and learning, for resolving problems and providing arguments, and are extensively used in medicine. Analogy and similarity involve a structural alignment or mapping between domains. This cognitive mechanism can be used to make inferences and learn new abstractions. Through analogies, we try to…

  14. Investigating and Theorizing Discourse during Analogy Writing in Chemistry

    Science.gov (United States)

    Bellocchi, Alberto; Ritchie, Stephen M.

    2011-01-01

    Explanations of the role of analogies in learning science at a cognitive level are made in terms of creating bridges between new information and students' prior knowledge. In this empirical study of learning with analogies in an 11th grade chemistry class, we explore an alternative explanation at the "social" level where analogy shapes…

  15. Sea-surface temperatures around the Australian margin and Indian Ocean during the Last Glacial Maximum

    Science.gov (United States)

    Barrows, Timothy T.; Juggins, Steve

    2005-04-01

    We present new last glacial maximum (LGM) sea-surface temperature (SST) maps for the oceans around Australia based on planktonic foraminifera assemblages. To provide the most reliable SST estimates we use the modern analog technique, the revised analog method, and artificial neural networks in conjunction with an expanded modern core top database. All three methods produce similar quality predictions and the root mean squared error of the consensus prediction (the average of the three) under cross-validation is only ±0.77 °C. We determine LGM SST using data from 165 cores, most of which have good age control from oxygen isotope stratigraphy and radiocarbon dates. The coldest SST occurred at 20,500±1400 cal yr BP, predating the maximum in oxygen isotope records at 18,200±1500 cal yr BP. During the LGM interval we observe cooling within the tropics of up to 4 °C in the eastern Indian Ocean, and mostly between 0 and 3 °C elsewhere along the equator. The high latitudes cooled by the greatest degree, a maximum of 7-9 °C in the southwest Pacific Ocean. Our maps improve substantially on previous attempts by making higher quality temperature estimates, using more cores, and improving age control.
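
    The modern analog technique itself reduces to a k-nearest-neighbour search. The sketch below is ours, with tiny invented assemblages purely for illustration: it finds the k modern core-top samples closest to a fossil assemblage under the squared chord distance and averages their observed SSTs.

      import numpy as np

      def squared_chord(p, q):
          """Dissimilarity between two relative-abundance assemblages."""
          return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

      def mat_sst(fossil, modern_assemblages, modern_sst, k=5):
          """Modern analog technique: mean SST of the k closest modern analogs."""
          d = np.array([squared_chord(fossil, m) for m in modern_assemblages])
          nearest = np.argsort(d)[:k]
          return modern_sst[nearest].mean()

      rng = np.random.default_rng(1)
      modern = rng.dirichlet(np.ones(10), size=200)      # 200 fake core-top assemblages
      sst = rng.uniform(2.0, 29.0, size=200)             # their fake observed SSTs (degC)
      fossil = rng.dirichlet(np.ones(10))                # one fake glacial assemblage

      print(f"estimated SST: {mat_sst(fossil, modern, sst, k=10):.1f} degC")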

  16. Stress tolerant crops from nitrogen fixing trees

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R.; Saunders, R.M.

    1983-01-01

    Notes are given on the nutritional quality and uses of: pods of Geoffroea decorticans, a species tolerant of saline and limed soils and saline water; seeds of Olneya tesota which nodulates readily and fixes nitrogen and photosynthesizes at low water potential; and pods of Prosopis chilensis and P. tamarugo which tolerate long periods without rain. 3 references.

  17. Generalized Common Fixed Point Results with Applications

    Directory of Open Access Journals (Sweden)

    Marwan Amin Kutbi

    2014-01-01

    We obtained some generalized common fixed point results in the context of complex valued metric spaces. Moreover, we proved an existence theorem for the common solution for two Urysohn integral equations. Examples are presented to support our results.

  18. Empirical Studies on Sovereign Fixed Income Markets

    NARCIS (Netherlands)

    J.G. Duyvesteyn (Johan)

    2015-01-01

    This dissertation presents evidence from five studies showing that sovereign fixed income markets are not always price efficient. The emerging local currency debt market has grown to a large size of more than 1.5 trillion US dollars at the end of 2012. The factors that

  19. Simplicial fixed point algorithms and applications

    NARCIS (Netherlands)

    Yang, Z.F.

    1996-01-01

    Fixed point theory is an important branch of modern mathematics and has always been a major theoretical tool in fields such as differential equations, topology, function analysis, optimal control, economics, and game theory. Its applicability has been increased enormously by the development of simpl

  20. Uniqueness of entire functions and fixed points

    OpenAIRE

    Chang, Jianming; Fang, Mingliang

    2002-01-01

    Let $f$ be a nonconstant entire function. If $f(z)=z \Longleftrightarrow f'(z)=z$, and $f'(z)=z \Longrightarrow f''(z)=z$, then $f\equiv f'$. In particular, if $f$, $f'$ and $f''$ have the same fixed points, then $f\equiv f'$.

  1. 29 CFR 1910.27 - Fixed ladders.

    Science.gov (United States)

    2010-07-01

    .... (v) The rungs of an individual-rung ladder shall be so designed that the foot cannot slide off the... OCCUPATIONAL SAFETY AND HEALTH STANDARDS Walking-Working Surfaces § 1910.27 Fixed ladders. (a) Design requirements—(1) Design considerations. All ladders, appurtenances, and fastenings shall be designed to...

  2. Efficient Fixed-Offset GPR Scattering Analysis

    DEFF Research Database (Denmark)

    Meincke, Peter; Chen, Xianyao

    2004-01-01

    in the scattering calculation the correct radiation patterns of the ground penetrating radar antennas by using their plane-wave transmitting and receiving spectra. Finally, we derive an efficient FFT-based method to analyze a fixed-offset configuration in which the location of the transmitting antenna is different...

  3. Fixing the Shadows While Moving the Gnomon

    Science.gov (United States)

    Gangui, Alejandro

    2015-01-01

    It is a common practice to fix a vertical gnomon and study the moving shadow cast by it. This shows our local solar time and gives us a hint regarding the season in which we perform the observation. The moving shadow can also tell us our latitude with high precision. In this paper we propose to exchange the roles and while keeping the shadows…

  4. A New Fixed Point Theorem and Applications

    Directory of Open Access Journals (Sweden)

    Min Fang

    2013-01-01

    A new fixed point theorem is established under the setting of a generalized finitely continuous topological space (GFC-space) without the convexity structure. As applications, a weak KKM theorem and minimax inequalities of Ky Fan type are also obtained under suitable conditions. Our results are different from known results in the literature.

  5. Common Fixed Points for Weakly Compatible Maps

    Indian Academy of Sciences (India)

    Renu Chugh; Sanjay Kumar

    2001-05-01

    The purpose of this paper is to prove a common fixed point theorem, extending results from the class of compatible continuous maps to the larger class of weakly compatible maps without appeal to continuity, which generalizes the results of Jungck [3], Fisher [1], Kang and Kim [8], Jachymski [2], and Rhoades [9].

  6. Interactive Inconsistency Fixing in Feature Modeling

    Institute of Scientific and Technical Information of China (English)

    王波; 熊英飞; 胡振江; 赵海燕; 张伟; 梅宏

    2014-01-01

    Feature models have been widely adopted to reuse the requirements of a set of similar products in a domain. In feature models’ construction, one basic task is to ensure the consistency of feature models, which often involves detecting and fixing inconsistencies in feature models. While many approaches have been proposed, most of them focus on detecting inconsistencies rather than fixing them. In this paper, we propose a novel dynamic-priority based approach to interactively fixing inconsistencies in feature models, and report an implementation of a system that not only automatically recommends a solution for fixing inconsistencies but also supports domain analysts in gradually reaching the desirable solution by dynamically adjusting the priorities of constraints. The key technical contribution is, as far as we are aware, the first application of constraint hierarchy theory to feature modeling, where the degree of domain analysts’ confidence in constraints is expressed using priority, and inconsistencies are resolved by deleting one or more lower-priority constraints. Two case studies demonstrate the usability and scalability (efficiency) of our new approach.
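
    As a hedged illustration of the priority-based repair idea (this is not the authors' system; the features, constraints, priorities and the brute-force satisfiability check below are all invented for the sketch), the lowest-priority constraints can be dropped one by one until the remaining set admits a configuration:

        # Toy sketch of priority-based inconsistency fixing (illustrative only).
        from itertools import product

        features = ["A", "B", "C"]
        # (priority, predicate); higher priority = higher analyst confidence
        constraints = [
            (3, lambda s: s["A"]),                   # A is mandatory
            (2, lambda s: not (s["B"] and s["C"])),  # B excludes C
            (1, lambda s: s["B"] and s["C"]),        # conflicts with the constraint above
        ]

        def satisfiable(cons):
            """Brute-force check: does any feature configuration satisfy all constraints?"""
            for bits in product([False, True], repeat=len(features)):
                s = dict(zip(features, bits))
                if all(p(s) for _, p in cons):
                    return True
            return False

        active = sorted(constraints, key=lambda c: -c[0])  # highest priority first
        while not satisfiable(active):
            active.pop()         # drop the current lowest-priority constraint
        print(len(active), "constraints kept")             # -> 2 constraints kept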

  7. Force dynamics in fixed-ratio schedules.

    Science.gov (United States)

    Pinkston, Jonathan W; McBee, Lindsey N

    2014-03-01

    Fixed-ratio schedules are widely used in behavioral research. Although fixed-ratio schedules often conjure up relationships to work and effort, little is known about effort-related measures in these schedules. Early research had shown that the force and effort of operant behavior vary systematically during the execution of ratio schedules, and the goal of the present study was to revisit early research on force dynamics in fixed-ratio schedules. Four rats earned sucrose by pressing an isometric force transducer. Presses produced sucrose after ten or twenty responses. In general, the force of responses increased and then decreased systematically across the ratio. The possibility that decreases in force during ratio execution were due to a trade-off with the differential reinforcement of short inter-response times (IRTs) was investigated in an additional condition where sucrose was made available according to a tandem fixed-ratio 19 inter-response time (IRT) > t schedule. The tandem IRT requirement did not eliminate decreasing trends in force across the ratio; unexpectedly, the tandem requirement did eliminate increases in force early in the ratio, which may reflect sequence-level organization operating in the control of force dynamics.

  8. A Dynamical Approach to Gauge Fixing

    CERN Document Server

    Loran, F

    2002-01-01

    We study gauge fixing in the generalized Gupta-Bleuler quantization. In this method physical states are defined to be simultaneous null eigenstates of a set of quantum invariants. We apply the method to a solvable model proposed by Friedberg, Lee, Pang and Ren and show that no Gribov-type copies appear by construction.

  9. Untangling Fixed Effects and Constant Regressors

    NARCIS (Netherlands)

    Klaassen, F.; Teulings, R.

    2015-01-01

    Fixed effects (FE) in panel data models overlap each other and prohibit the identification of the impact of "constant" regressors. Think of regressors that are constant across countries in a country-time panel with time FE. The traditional approach is to drop some FE and constant regressors by norma

  10. Some Generalizations of Jungck's Fixed Point Theorem

    Directory of Open Access Journals (Sweden)

    J. R. Morales

    2012-01-01

    We are going to generalize Jungck's fixed point theorem for commuting mappings by means of the concepts of altering distance functions and compatible pairs of mappings, as well as by using contractive inequalities of integral type and contractive inequalities depending on another function.

  11. Tunnel Diode Discriminator with Fixed Dead Time

    DEFF Research Database (Denmark)

    Diamond, J. M.

    1965-01-01

    A solid state discriminator for the range 0.4 to 10 V is described. Tunnel diodes are used for the discriminator element and in a special fixed dead time circuit. An analysis of temperature stability is presented. The regulated power supplies are described, including a special negative resistance...

  12. Fixed points and self-reference

    Directory of Open Access Journals (Sweden)

    Raymond M. Smullyan

    1984-01-01

    It is shown how Gödel's famous diagonal argument and a generalization of the recursion theorem are derivable from a common construction. The abstract fixed point theorem of this article is independent of both metamathematics and recursion theory and is perfectly comprehensible to the non-specialist.

  13. Inviting Argument by Analogy: Analogical-Mapping-Based Comparison Activities as a Scaffold for Small-Group Argumentation

    Science.gov (United States)

    Emig, Brandon R.; McDonald, Scott; Zembal-Saul, Carla; Strauss, Susan G.

    2014-01-01

    This study invited small groups to make several arguments by analogy about simple machines. Groups were first provided training on analogical (structure) mapping and were then invited to use analogical mapping as a scaffold to make arguments. In making these arguments, groups were asked to consider three simple machines: two machines that they had…

  14. Fracture Characteristics Analysis of Double-layer Rock Plates with Both Ends Fixed Condition

    Directory of Open Access Journals (Sweden)

    S. R. Wang

    2014-07-01

    To investigate the fracture and instability characteristics of double-layer rock plates with both ends fixed, a three-dimensional computational model of double-layer rock plates under a concentrated load was built using the PFC3D technique (three-dimensional particle flow code), and the mechanical parameters of the numerical model were determined based on physical model tests. The results showed that the instability process of the double-layer rock plates had four mechanical response phases: the elastic deformation stage, the arching stage marked by brittle fracture of the upper thick plate, the two rock-arch bearing stage, and the two rock-arch failure stage. Moreover, as the particle radius of the rock plate was varied from small to large, the maximum vertical force of the double rock arch peaked at a particular particle size. The maximum vertical force showed an upward trend with increasing rock plate temperature, and, for the same total thickness, the maximum vertical force increased with the thickness of the upper rock plate. When the boundary conditions of the double-layer rock plates changed from hinged support to fixed support, the maximum horizontal force decreased markedly, while the maximum vertical force showed small fluctuations and then tended towards stability with increasing cohesive strength of the double-layer rock plates.

  15. 46 CFR 28.260 - Electronic position fixing devices.

    Science.gov (United States)

    2010-10-01

    ... Trade § 28.260 Electronic position fixing devices. Each vessel 79 feet (24 meters) or more in length must be equipped with an electronic position fixing device capable of providing accurate fixes for the... 46 Shipping 1 2010-10-01 2010-10-01 false Electronic position fixing devices. 28.260 Section...

  16. 46 CFR 118.410 - Fixed gas fire extinguishing systems.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Fixed gas fire extinguishing systems. 118.410 Section... PROTECTION EQUIPMENT Fixed Fire Extinguishing and Detecting Systems § 118.410 Fixed gas fire extinguishing systems. (a) General. (1) A fixed gas fire extinguishing system aboard a vessel must be approved by...

  17. 46 CFR 181.410 - Fixed gas fire extinguishing systems.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Fixed gas fire extinguishing systems. 181.410 Section... Fixed gas fire extinguishing systems. (a) General. (1) A fixed gas fire extinguishing system aboard a... approved for the system by the Commandant. (4) A fixed gas fire extinguishing system may protect more...

  18. 46 CFR 184.410 - Electronic position fixing devices.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Electronic position fixing devices. 184.410 Section 184... Electronic position fixing devices. A vessel on an oceans route must be equipped with an electronic position fixing device, capable of providing accurate fixes for the area in which the vessel operates, to...

  19. Development of Anthropometric Analogous Headforms. Phase 1.

    Science.gov (United States)

    1994-10-31

    ...to treat the key dimensions as random variables. Recall the definition of a continuous random variable X over the sample space 0 ≤ x < 1: the... Recall that for joint distributions of several random variables, single integrals become multiple integrals of corresponding dimensions. To fix... Warrendale, Pa. 1606 p. 1985-1991. Pp. 4.34.218-4.34.223. 1993. Guidelines for Evaluating Child Restraint System Interactions with Deploying Airbags

  20. SSERVI Analog Regolith Simulant Testbed Facility

    Science.gov (United States)

    Minafra, Joseph; Schmidt, Gregory; Bailey, Brad; Gibbs, Kristina

    2016-10-01

    The Solar System Exploration Research Virtual Institute (SSERVI) at NASA's Ames Research Center in California's Silicon Valley was founded in 2013 to act as a virtual institute that provides interdisciplinary research centered on the goals of its supporting directorates: the NASA Science Mission Directorate (SMD) and the Human Exploration & Operations Mission Directorate (HEOMD). Primary research goals of the Institute revolve around the integration of science and exploration to gain knowledge required for the future of human space exploration beyond low Earth orbit. SSERVI intends to leverage existing JSC-1A regolith simulant resources into the creation of a regolith simulant testbed facility. The purpose of this testbed concept is to provide the planetary exploration community with a readily available capability to test hardware and conduct research in a large simulant environment. SSERVI's goals include supporting planetary researchers within NASA and other government agencies; private sector and hardware developers; competitors in focused prize design competitions; and academic sector researchers. SSERVI provides opportunities for research scientists and engineers to study the effects of regolith analog testbed research in the planetary exploration field. This capability is essential to help understand the basic effects of continued long-term exposure to a simulated analog test environment. The current facility houses approximately eight tons of JSC-1A lunar regolith simulant in a test bin consisting of a 4 meter by 4 meter area, including dust mitigation and safety oversight. Facility hardware and environment testing scenarios could include: lunar surface mobility, dust exposure and mitigation, regolith handling and excavation, solar-like illumination, lunar surface compaction profile, lofted dust, mechanical properties of lunar regolith, and surface features (i.e., grades and rocks). Numerous benefits vary from easy access to a controlled analog regolith simulant testbed, and

  1. Nonparametric Maximum Entropy Estimation on Information Diagrams

    CERN Document Server

    Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn

    2016-01-01

    Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...

  2. Zipf's law, power laws, and maximum entropy

    CERN Document Server

    Visser, Matt

    2012-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.

  3. Zipf's law, power laws and maximum entropy

    Science.gov (United States)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
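
    To make the single-constraint idea concrete, here is a brief sketch in my own notation (not taken from the paper): maximize the Shannon entropy $S = -\sum_i p_i \ln p_i$ subject to normalization $\sum_i p_i = 1$ and a specified mean logarithm $\sum_i p_i \ln x_i = \chi$. Introducing Lagrange multipliers and setting the variation with respect to $p_i$ to zero gives

        \[ p_i \propto e^{-\lambda \ln x_i} = x_i^{-\lambda}, \]

    i.e. a power law, with the exponent $\lambda$ fixed by the constraint value $\chi$.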

  4. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  5. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  6. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  7. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  8. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius that a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.

  9. Modelling and simulation of load connected fixed blade wind turbine with permanent magnet synchronous generators

    OpenAIRE

    Al-Toma, AS; Taylor, GA; Abbod, M

    2015-01-01

    This paper presents the modelling and simulation of a wind turbine driven Permanent Magnet Synchronous Generator connected to a load. The system has been tested at different wind speeds. The machine-side controller has been designed to perform Maximum Power Point Tracking (MPPT) to obtain high extraction of wind power when connected to a load, while the load-side controller fixes the DC voltage that is converted to the AC load voltage. Detailed plots of voltage and current profiles are also pre...

  10. Maximum likelihood tuning of a vehicle motion filter

    Science.gov (United States)

    Trankle, Thomas L.; Rabin, Uri H.

    1990-01-01

    This paper describes the use of maximum likelihood parameter estimation for unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of the filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to the unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.
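
    The pattern described above, maximizing a measurement likelihood with a quasi-Newton search, can be sketched with a deliberately simplified toy problem (the model, data, parameter names and noise level below are all invented; the actual filter estimates nine states from air data, altitude, position and attitude measurements):

        # Toy sketch: quasi-Newton (BFGS) maximization of a Gaussian measurement
        # log-likelihood over one unknown parameter, in the spirit of the
        # filter-tuning approach described above.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 50)
        true_v, h0, sigma = 3.0, 100.0, 0.5
        z = h0 + true_v * t + rng.normal(0.0, sigma, t.size)    # simulated altitude data

        def neg_log_likelihood(theta):
            v = theta[0]
            r = z - (h0 + v * t)                      # measurement residuals
            return 0.5 * np.sum(r ** 2) / sigma ** 2  # Gaussian NLL up to a constant

        res = minimize(neg_log_likelihood, x0=[1.0], method="BFGS")
        print("maximum likelihood estimate of v:", res.x[0])    # close to 3.0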

  11. The maximum contribution to reionization from metal-free stars

    CERN Document Server

    Rozas, J M; Salvador-Solé, E; Rozas, Jose M.; Miralda-Escude, Jordi; Salvador-Sole, Eduard

    2005-01-01

    We estimate the maximum contribution to reionization from the first generation of massive stars, with zero metallicity, under the assumption that one of these stars forms with a fixed mass in every collapsed halo in which metal-free gas is able to cool. We assume that any halo that has already had stars previously formed in one of its halo progenitors will form only stars with metals, which are assigned an emissivity of ionizing radiation equal to that determined at z=4 from the measured intensity of the ionizing background. We examine the impact of molecular hydrogen photodissociation (which tends to reduce cooling when a photodissociating background is produced by the first stars) and X-ray photoheating (which heats the atomic medium, raising the entropy of the gas before it collapses into halos). We find that in the $\Lambda$CDM model supported by present observations, and even assuming no negative feedbacks for the formation of metal-free stars, a reionized mass fraction of 50% is not reached until reds...

  12. Maximum privacy without coherence, zero-error

    Science.gov (United States)

    Leung, Debbie; Yu, Nengkun

    2016-09-01

    We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.

  13. Tissue radiation response with maximum Tsallis entropy.

    Science.gov (United States)

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  14. A stochastic maximum principle via Malliavin calculus

    OpenAIRE

    Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo

    2008-01-01

    This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.

  15. Integrated Analogic Filter Tuning System Design

    Directory of Open Access Journals (Sweden)

    Karolis Kiela

    2016-06-01

    Parameters of integrated analog filters can vary due to temperature change and IC process variation, and therefore the filters should have dedicated tuning circuits that compensate for these imperfections. A method is proposed that speeds up switched resistor bank design while taking into account the required tuning range and step size. A novel counter structure is used in the tuning circuit that is based on a successive approximation approach. The proposed switched resistor design method and tuning circuit are designed in 0.18 μm CMOS technology and verified. Results are compared to existing tuning circuit designs.
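
    The successive-approximation search behind such a tuning loop can be sketched as follows; the function measure_fc and the code-to-frequency mapping are placeholders standing in for the on-chip measurement, not details taken from the paper:

        # Hypothetical sketch: pick the switched-resistor code whose cutoff
        # frequency best approaches the target, testing one bit per step (MSB first).
        def tune_sar(measure_fc, f_target, n_bits=6):
            code = 0
            for bit in reversed(range(n_bits)):
                trial = code | (1 << bit)              # tentatively set this bit
                if measure_fc(trial) <= f_target:      # keep it while fc stays below target
                    code = trial
            return code

        # Example with a made-up monotonic code-to-frequency mapping (Hz):
        fc_of_code = lambda c: 1.0e6 + c * 15.0e3
        print(tune_sar(fc_of_code, f_target=1.5e6))    # -> 33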

  16. Reconfiguration of Analog Electronics for Extreme Environments

    Science.gov (United States)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Guo, Xin

    2005-01-01

    This paper argues in favor of adaptive reconfiguration as a technique to expand the operational envelope of analog electronics for extreme environments (EE). On a reconfigurable device, although component parameters change in EE, as long as devices still operate, albeit degraded, a new circuit design, suitable for the new parameter values, may be mapped into the reconfigurable structure to recover the initial circuit function. Laboratory demonstrations of this technique were performed by JPL in several independent experiments in which bulk CMOS reconfigurable devices were exposed to, and degraded by, high temperatures (approx. 300 C) or radiation (300 kRad TID), and then recovered by adaptive reconfiguration using evolutionary search algorithms.

  17. An Analog VLSI Saccadic Eye Movement System

    OpenAIRE

    1994-01-01

    In an effort to understand saccadic eye movements and their relation to visual attention and other forms of eye movements, we - in collaboration with a number of other laboratories - are carrying out a large-scale effort to design and build a complete primate oculomotor system using analog CMOS VLSI technology. Using this technology, a low power, compact, multi-chip system has been built which works in real-time using real-world visual inputs. We describe in this paper the performance of a...

  18. Microstore: the Stanford analog memory unit

    Energy Technology Data Exchange (ETDEWEB)

    Walker, J.T.; Chae, S.I.; Shapiro, S.; Larsen, R.S.

    1984-11-01

    An NMOS device has been developed which provides high speed analog signal storage and readout for time expansion of transient signals. This device takes advantage of HMOS-1 VLSI technology to implement an array of 256 storage cells. Sequential samples of an input waveform can be taken every 5 ns while providing an effective sampling aperture time of less than 1 ns. The design signal-to-noise ratio is 1 part in 2000. Digital control circuitry is provided on the chip for controlling the read-in and read-out processes. A reference circuit is incorporated in the chip for first order compensation of leakage drifts, sampling pedestals, and temperature effects.

  19. Bootstrapped Low-Voltage Analog Switches

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jesper

    1999-01-01

    Novel low-voltage constant-impedance analog switch circuits are proposed. The switch element is a single MOSFET, and constant-impedance operation is obtained using simple circuits to adjust the gate and bulk voltages relative to the switched signal. Low-voltage (1-volt) operation is made feasible ... by employing a feedback loop. The gate oxide will not be subject to voltages exceeding the supply voltage difference. Realistic switches have been simulated with HSPICE. The simulations show that the switch circuits operate very well, even when the supply voltage approaches the technology's threshold voltage...

  20. Analogy among microfluidics, micromechanics, and microelectronics.

    Science.gov (United States)

    Li, Sheng-Shian; Cheng, Chao-Min

    2013-10-07

    We wish to illuminate the analogous link between microfluidic-based devices, and the already established pairing of micromechanics and microelectronics to create a triangular/three-way scientific relationship as a means of interlinking familial disciplines and accomplishing two primary goals: (1) to facilitate the modeling of multidisciplinary domains; and, (2) to enable us to co-simulate the entire system within a compact circuit simulator (e.g., Cadence or SPICE). A microfluidic channel-like structure embedded in a micro-electro-mechanical resonator via our proposed CMOS-MEMS technology is used to illustrate the connections among microfluidics, micromechanics, and microelectronics.

  1. Novel Gemini vitamin D3 analogs

    DEFF Research Database (Denmark)

    Okamoto, Ryoko; Gery, Sigal; Kuwayama, Yoshio

    2014-01-01

    anticancer potency, but similar toxicity causing hypercalcemia. We focused on the effect of these compounds on the stimulation of expression of human cathelicidin antimicrobial peptide (CAMP), whose gene has a vitamin D response element in its promoter. Expression of CAMP mRNA and protein increased in a dose-response fashion after exposure of acute myeloid leukemia (AML) cells to the Gemini analog, BXL-01-126, in vitro. A xenograft model of AML was developed using U937 AML cells injected into NSG-immunodeficient mice. Administration of vitamin D3 compounds to these mice resulted in substantial levels of CAMP

  2. An optical analog of quantum optomechanics

    CERN Document Server

    Rodríguez-Lara, B M

    2014-01-01

    We present a two-dimensional array of nearest-neighbor coupled waveguides that is the optical analog of a quantum optomechanical system. We show that the quantum model predicts the appearance of effective column isolation, diagonal-coupling and other non-trivial couplings in the two-dimensional photonic lattice under a standard approximation from ion-trap cavity electrodynamics. We provide an approximate impulse function for the case of effective column isolation and compare it with exact numerical propagation in the photonic lattice.

  3. Analogic China map constructed by DNA

    Institute of Scientific and Technical Information of China (English)

    QIAN Lulu; HE Lin; WANG Ying; ZHANG Zhao; ZHAO Jian; PAN Dun; ZHANG Yi; LIU Qiang; FAN Chunhai; HU Jun

    2006-01-01

    In this research, a nanoscale DNA structure of an analogic China map is created. The nanostructure, roughly 150 nm in diameter with a spatial resolution of 6 nm, is constructed purely by folding DNA. The picture observed by atomic force microscopy (AFM) is almost identical to the designed shape. The DNA origami technology invented by Rothemund in 2006 is employed in the construction of this shape, which demonstrates the capability of DNA origami to construct almost any complicated shape and provides a new bottom-up method for constructing nanostructures.

  4. Analogies as categorization phenomena: Studies from scientific discourse

    Science.gov (United States)

    Atkins, Leslie Jill

    Studies on the role of analogies in science classrooms have tended to focus on analogies that come from the teacher or curriculum, and not the analogies that students generate. Such studies are derivative of an educational system that values content knowledge over scientific creativity, and derivative of a model of teaching in which the teacher's role is to convey content knowledge. This dissertation begins with the contention that science classrooms should encourage scientific thinking and one role of the teacher is to model that behavior and identify and encourage it in her students. One element of scientific thinking is analogy. This dissertation focuses on student-generated analogies in science, and offers a model for understanding these. I provide evidence that generated analogies are assertions of categorization, and the base of an analogy is the constructed prototype of an ad hoc category. Drawing from research on categorization, I argue that generated analogies are based in schemas and cognitive models. This model allows for a clear distinction between analogy and literal similarity; prior to this research analogy has been considered to exist on a spectrum of similarity, differing from literal similarity to the degree that structural relations hold but features do not. I argue for a definition in which generated analogies are an assertion of an unexpected categorization: that is, they are asserted as contradictions to an expected schema.

  5. Analogies in medicine: valuable for learning, reasoning, remembering and naming.

    Science.gov (United States)

    Pena, Gil Patrus; Andrade-Filho, José de Souza

    2010-10-01

    Analogies are important tools in human reasoning and learning, for resolving problems and providing arguments, and are extensively used in medicine. Analogy and similarity involve a structural alignment or mapping between domains. This cognitive mechanism can be used to make inferences and learn new abstractions. Through analogies, we try to explain the knowledge to be achieved (the target) with pieces of information already familiar to the learner (the source), keeping in mind the constraints of similarity, structure and purpose. The purpose of this essay is to examine the use of analogies in medicine. We provide a brief review of the theoretical basis of analogical reasoning. This theoretical background is discussed in the light of actual examples of analogies retrieved from the medical literature. In medicine, analogies have long been used to explain several physiologic and pathologic processes. Besides deeper structural relations, superficial attribute similarity is extensively used in many medical specialties. These attribute similarities are important in naming, categorizing and classifying, and play a role as learning and memorizing tools. Analogies also serve as a basis for medical nomenclature. The choice of the source of analogies is highly dependent on cultural background, and may derive from ancient or diverse cultures. Learning by analogies may thus require research on cultural diversity in order to establish an adequate justification and to comprehend the purpose of an analogy.

  6. Maximum-biomass prediction of homofermentative Lactobacillus.

    Science.gov (United States)

    Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei

    2016-07-01

    Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
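
    A small numerical illustration of the reported prediction equation, Xmax - X0 = (0.59 ± 0.02)·YX/P·C, where YX/P is the biomass yield per unit of lactate (g/g) and C is the MIC of lactate (g/L); the input values below are assumptions chosen for illustration, not data from the study:

        # Hypothetical worked example of the prediction equation above.
        def predict_max_biomass(x0, y_xp, mic, k=0.59):
            """Return the predicted maximum biomass concentration (g/L)."""
            return x0 + k * y_xp * mic

        print(predict_max_biomass(x0=0.1, y_xp=0.25, mic=40.0))   # -> approximately 6.0 g/L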

  7. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  8. The maximum rate of mammal evolution

    Science.gov (United States)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-03-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.

  9. The maximum rate of mammal evolution

    Science.gov (United States)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-01-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461

  10. Minimal Length, Friedmann Equations and Maximum Density

    CERN Document Server

    Awad, Adel

    2014-01-01

    Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...

  11. Automatic maximum entropy spectral reconstruction in NMR.

    Science.gov (United States)

    Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C

    2007-10-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.

  12. Maximum entropy analysis of cosmic ray composition

    CERN Document Server

    Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana

    2016-01-01

    We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...

  13. Maximum saliency bias in binocular fusion

    Science.gov (United States)

    Lu, Yuhao; Stafford, Tom; Fox, Charles

    2016-07-01

    Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.

  14. ANALYTICAL SOLUTION FOR FIXED-FIXED ANISOTROPIC BEAM SUBJECTED TO UNIFORM LOAD

    Institute of Scientific and Technical Information of China (English)

    DING Hao-jiang; HUANG De-jin; WANG Hui-ming

    2006-01-01

    The analytical solutions for the stresses and displacements were obtained for fixed-fixed anisotropic beams subjected to uniform load. A stress function involving unknown coefficients was constructed, and the general expressions for stress and displacement were obtained by means of the Airy stress function method. Two types of description of the fixed-end boundary condition were considered. The unknown coefficients introduced in the stress function were determined by using the boundary conditions. The analytical solutions for stresses and displacements were finally obtained. Numerical tests show that the analytical solutions agree with the FEM results. The analytical solution supplies a classical example for elasticity theory.
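
    For reference, a brief sketch of the Airy stress function method used above, written here for the isotropic plane problem (the paper treats the anisotropic case, where the biharmonic operator is replaced by one whose coefficients involve the elastic constants): the in-plane stresses derive from a single function $\varphi(x,y)$ through

        \[ \sigma_x = \frac{\partial^2 \varphi}{\partial y^2}, \qquad \sigma_y = \frac{\partial^2 \varphi}{\partial x^2}, \qquad \tau_{xy} = -\frac{\partial^2 \varphi}{\partial x \partial y}, \]

    which satisfies equilibrium identically (no body forces), while compatibility requires $\nabla^4 \varphi = 0$; choosing $\varphi$ as a polynomial with unknown coefficients and enforcing the load and fixed-end boundary conditions then yields the closed-form stresses and displacements described above.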

  15. Unconscious violinists and the use of analogies in moral argument.

    Science.gov (United States)

    Wiland, E

    2000-12-01

    Analogies are the stuff out of which normative moral philosophy is made. Certainly one of the most famous analogies constructed by a philosopher in order to argue for a specific controversial moral conclusion is the one involving Judith Thomson's unconscious violinist. Reflection upon this analogy is meant to show us that abortion is generally not immoral even if the prenatal have the same moral status as the postnatal. This was and still is a controversial conclusion, and yet the analogy does seem to reveal in a very vivid way what makes abortion a reasonable response to a terrible situation. But Thomson's example has frequently been attacked on all sides for not being truly analogous to abortion. Here I develop a brand new analogy that sheds light on the issue with which Thomson was concerned, while at the same time avoiding most of the more serious objections made to her analogy.

  16. Continuous-time analog filter in CMOS nanoscale era

    Science.gov (United States)

    Baschirotto, A.; De Matteis, M.; Pezzotta, A.; D'Amico, S.

    2014-04-01

    Analog filters are key blocks in analog signal processing. They are widely employed in many systems, such as wireless transceivers, detector read-outs, sensor interfaces, etc. The choice of IC technology for such systems is mainly dictated by the requirements of high speed and low power consumption in the digital circuits. This has pushed an impressive movement towards scaled technologies, which has important consequences for analog circuit design. The impact of technology scaling down to the nanometre scale on analog filter design is investigated here. For instance, supply voltage reduction in analog filters limits circuit design solutions and could result in higher power consumption. Moreover, at the same time, innovative systems push analog filters to higher and higher operating frequencies, due to increasing bandwidth/speed requirements. Recent solutions for efficient low-voltage and high-frequency analog filters in nanometre technology are described.

  17. Use of analogy in learning physics: The role of representations

    Directory of Open Access Journals (Sweden)

    Noah D. Finkelstein

    2006-07-01

    Previous studies have demonstrated that analogies can promote student learning in physics and can be productively taught to students to support their learning, under certain conditions. We build on these studies to explore the use of analogy by students in a large introductory college physics course. In the first large-scale study of its kind, we demonstrate that different analogies can lead to varied student reasoning. When different analogies were used to teach electromagnetic (EM) waves, we found that students explicitly mapped characteristics of either waves on strings or sound waves to EM waves, depending upon which analogy students were taught. We extend these results by investigating how students use analogies. Our findings suggest that representational format plays a key role in the use of analogy.

  18. Seeking Maximum Linearity of Transfer Functions

    CERN Document Server

    Silva, Filipi N; Costa, Luciano da F

    2016-01-01

    Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function, identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical situation (a sigmoid transfer function) and a real-world one (a low-power, one-stage class A transistor amplifier). In the former case, the method was found to identify the theoretically optimal region of operation even in the presence of noise. In the latter case, it was possible to identify an amplifier circuit configuration providing a good compromise between linearity, amplification and output resistance. The transistor amplifier application, which was addressed in terms of transfer functions derived from its experimentally obtained characteristic surface, also yielded contributions such as the estimation of local cons...

  19. Seeking maximum linearity of transfer functions

    Science.gov (United States)

    Silva, Filipi N.; Comin, Cesar H.; Costa, Luciano da F.

    2016-12-01

    Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function (theoretical or derived from some real system), identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical (sigmoid transfer function) and a simple situation involving experimental data of a low-power, one-stage class A transistor current amplifier. Such an approach, which has been addressed in terms of transfer functions derived from experimentally obtained characteristic surface, also yielded contributions such as the estimation of local constants of the device, as opposed to typically considered average values. The reported method and results pave the way to several further applications in other types of devices and systems, intelligent control operation, and other areas such as identifying regions of power law behavior.
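
    A minimal sketch of the window-scanning idea (my own illustration: the linearity score used here, the coefficient of determination R^2 of a least-squares line fitted in each window, is one reasonable choice and may differ in detail from the criterion used in the paper):

        # Scan every fixed-width window of a sampled transfer function and keep
        # the one whose least-squares line fits best (largest R^2).
        import numpy as np

        x = np.linspace(-6.0, 6.0, 601)
        y = 1.0 / (1.0 + np.exp(-x))       # illustrative sigmoid "transfer function"
        width = 120                        # fixed window width, in samples

        best = None
        for i in range(len(x) - width + 1):
            xs, ys = x[i:i + width], y[i:i + width]
            slope, intercept = np.polyfit(xs, ys, 1)
            ss_res = np.sum((ys - (slope * xs + intercept)) ** 2)
            ss_tot = np.sum((ys - ys.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            if best is None or r2 > best[0]:
                best = (r2, xs[0], xs[-1], slope)

        print("most linear region: x in [%.2f, %.2f], slope %.3f, R^2 %.5f"
              % (best[1], best[2], best[3], best[0]))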

  20. Experimental demonstration of improved analog device performance of nanowire-TFETs

    Science.gov (United States)

    Schulte-Braucks, Christian; Richter, Simon; Knoll, Lars; Selmi, Luca; Zhao, Qing-Tai; Mantl, Siegfried

    2015-11-01

    We present experimental data on the analog device performance of p-type planar and gate-all-around (GAA) nanowire (NW) Tunnel-FETs (TFETs) as well as of n-type Tri-Gate TFETs. A significant improvement of the analog performance, obtained by enhancing the electrostatics from planar TFETs to GAA-NW-TFETs with diameters of 20 nm and 10 nm, is demonstrated. A maximum transconductance of 122 μS/μm and on-currents up to 23 μA/μm at a gate overdrive of Vgt = Vd = -1 V were achieved for the GAA NW-pTFETs. Furthermore, good output current saturation is observed, leading to a high intrinsic gain of up to 217. The Tri-Gate nTFETs beat the fundamental MOSFET limit of 60 mV/dec for the subthreshold slope and thereby also reach extremely high transconductance efficiencies of up to 82 V^-1.
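
    The link between the sub-60 mV/dec slope and the quoted transconductance efficiency follows from the exponential subthreshold characteristic; the relation below is a back-of-the-envelope check, not a figure taken from the paper:

        \[
          I_D \propto 10^{V_{GS}/SS}
          \;\Rightarrow\;
          \frac{g_m}{I_D} = \frac{1}{I_D}\,\frac{\partial I_D}{\partial V_{GS}} = \frac{\ln 10}{SS},
        \]
        \[
          SS = 60~\mathrm{mV/dec} \;\Rightarrow\; \frac{g_m}{I_D} \approx 38~\mathrm{V^{-1}},
          \qquad
          \frac{g_m}{I_D} = 82~\mathrm{V^{-1}} \;\Rightarrow\; SS \approx 28~\mathrm{mV/dec}.
        \]

    A transconductance efficiency of 82 V^-1 is therefore only reachable with a subthreshold slope well below the 60 mV/dec thermionic limit, consistent with the band-to-band tunneling mechanism of the TFET.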

  1. Efficient Circuit Configuration for Enhancing Resolution of 8-bit flash Analog to Digital Convertor

    Directory of Open Access Journals (Sweden)

    Gururaj Balikatti

    2012-11-01

    Full Text Available The need constantly exists for converters with higher resolution, faster conversion speeds and lower power dissipation. High-speed analog-to-digital converters (ADCs) have been based on the flash architecture; because all comparators sample the analog input voltage simultaneously, this ADC is inherently fast. Unfortunately, a flash ADC requires 2^N - 1 comparators to convert an analog sample into an N-bit digital code. This makes flash ADCs unsuitable for high-resolution applications. The focus of this paper is on an efficient circuit configuration to enhance the resolution of an available 8-bit flash ADC, while keeping the number of comparators at only 256 for a 12-bit conversion. This technique minimizes the number of comparators required. In this approach, an 8-bit flash ADC partitions the analog input range into 256 quantization cells, separated by 255 boundary points. An 8-bit binary code 00000000 to 11111111 is assigned to each cell. The Microcontroller decides within which cell the input sample lies and assigns a 12-bit binary center code 000000000000 to 111111111111 according to the cell value. The exact 12-bit digital code is obtained by a successive approximation technique. It is shown that by adopting this configuration, 12-bit digital data can be obtained using just 256 comparators. This technique is therefore best suited when high speed combined with high resolution is required. An experimental prototype of the proposed 12-bit ADC was implemented using a Philips P89V51RD2BN Microcontroller. Use of the Microcontroller has greatly reduced the hardware requirement and cost. Results for the 12-bit prototype are presented, showing a maximum DNL of 0.52 LSB and a maximum INL of 0.55 LSB.
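
    A minimal sketch of this coarse-flash-plus-successive-approximation scheme, using an ideal comparator model; the function names, the ideal quantizer and the 1 V reference are illustrative assumptions, not the paper's hardware:

        VREF = 1.0   # hypothetical full-scale reference voltage

        def flash_8bit(v):
            """Ideal 8-bit flash stage: index of the 1/256-wide cell containing v."""
            code = int(v / VREF * 256)
            return min(max(code, 0), 255)

        def sar_refine_4bit(v, coarse):
            """Successive approximation of the 4 extra bits inside the coarse cell,
            mimicking the microcontroller-driven second conversion step."""
            lo = coarse * VREF / 256        # lower edge of the selected coarse cell
            lsb = VREF / 4096               # width of one 12-bit LSB
            fine = 0
            for bit in (8, 4, 2, 1):        # trial of the sub-bits, MSB first
                if v >= lo + (fine + bit) * lsb:
                    fine += bit
            return (coarse << 4) | fine     # final 12-bit output code

        def adc_12bit(v):
            return sar_refine_4bit(v, flash_8bit(v))

        # 0.5 V with a 1 V reference should land at mid-scale (code 2048).
        print(adc_12bit(0.5))

    The flash stage picks one of 256 coarse cells in a single step, and the microcontroller-style loop then resolves the remaining four bits with four sequential comparisons, which is the essence of the resolution-enhancement technique described above.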

  2. The gravitational analog of Faraday's induction law

    Science.gov (United States)

    Zile, Daniel; Overduin, James

    2015-04-01

    Michael Faraday, the discoverer of electromagnetic induction, was convinced that there must also be a gravitational analog of this law, and he carried out drop-tower experiments in 1849 to look for the electric current induced in a coil by changes in gravitational flux through the coil. This work, now little remembered, was in some ways the first investigation of what we would now call a unified-field theory. We revisit Faraday's experiments in the light of current knowledge and ask what might be learned if they were to be performed today. We then review the gravitational analog of Faraday's law that arises within the vector (or gravito-electromagnetic) approximation to Einstein's theory of general relativity in the weak-field, low-velocity limit. This law relates spinning masses and induced "mass currents" rather than spinning charges and electric currents, but is otherwise remarkably similar to its electromagnetic counterpart. The predicted effects are completely unobservable in everyday settings like those envisioned by Faraday, but are thought to be relevant in astrophysical contexts like the accretion disks around collapsed stars, thus bearing out Faraday's remarkable intuition.
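
    In the gravito-electromagnetic (GEM) limit mentioned above, the analog law has the same structure as Faraday's law. In one common convention (normalization factors of 2 and c differ between authors, so this is a schematic statement rather than the paper's exact form):

        \[
          \nabla \times \mathbf{E}_g \;=\; -\,\frac{\partial \mathbf{B}_g}{\partial t}
          \qquad\Longleftrightarrow\qquad
          \oint_{\partial S} \mathbf{E}_g \cdot d\boldsymbol{\ell}
          \;=\; -\,\frac{d}{dt}\int_S \mathbf{B}_g \cdot d\mathbf{A},
        \]

    where E_g is the gravito-electric (Newtonian-like) field and B_g is the gravito-magnetic field sourced by mass currents, i.e. by moving or spinning masses.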

  3. Developing Fluorescent Hyaluronan Analogs for Hyaluronan Studies

    Directory of Open Access Journals (Sweden)

    Shi Ke

    2012-02-01

    Full Text Available Two kinds of fluorescent hyaluronan (HA) analogs, one serving as a normal imaging agent and the other used as a biosensitive contrast agent, were developed for the investigation of HA uptake and degradation. Our approach to developing HA imaging agents depends on labeling HA with varying molar percentages of a near-infrared (NIR) dye. At low labeling ratios, hyaluronan uptake can be directly imaged, while at high labeling ratios the fluorescent signal is quenched and signal generation occurs only after degradation. It is found that the conjugate containing 1%–2% NIR dye can be used as a normal optical imaging agent, while bioactivatable imaging agents are formed at 6% to 17% dye loading. It was determined that the conjugation of dye to HA at different loading percentages does not impact HA biodegradation by hyaluronidase (Hyal). The feasibility of using these two NIR fluorescent hyaluronan analogs for HA investigation was evaluated in vivo with optical imaging. The data demonstrate that the 1% dye-loaded fluorescent HA can be used to monitor the behavior of HA and its fragments, whereas the bioactivatable HA imaging agent (17% dye in HA) is more suitable for detecting HA fragments.

  4. Digital and analog gene circuits for biotechnology.

    Science.gov (United States)

    Roquet, Nathaniel; Lu, Timothy K

    2014-05-01

    Biotechnology offers the promise of valuable chemical production via microbial processing of renewable and inexpensive substrates. Thus far, static metabolic engineering strategies have enabled this field to advance industrial applications. However, the industrial scaling of statically engineered microbes inevitably creates inefficiencies due to variable conditions present in large-scale microbial cultures. Synthetic gene circuits that dynamically sense and regulate different molecules can resolve this issue by enabling cells to continuously adapt to variable conditions. These circuits also have the potential to enable next-generation production programs capable of autonomous transitioning between steps in a bioprocess. Here, we review the design and application of two main classes of dynamic gene circuits, digital and analog, for biotechnology. Within the context of these classes, we also discuss the potential benefits of digital-analog interconversion, memory, and multi-signal integration. Though synthetic gene circuits have largely been applied for cellular computation to date, we envision that utilizing them in biotechnology will enhance the efficiency and scope of biochemical production with living cells.

  5. Antarctic analog for dilational bands on Europa

    Science.gov (United States)

    Hurford, T. A.; Brunt, K. M.

    2014-09-01

    Europa's surface shows signs of extension, which is revealed as lithospheric dilation expressed along ridges, dilational bands and ridged bands. Ridges, the most common tectonic feature on Europa, comprise a central crack flanked by two raised banks a few hundred meters high on each side. Together these three classes may represent a continuum of formation. In Tufts' Dilational Model ridge formation is dominated by daily tidal cycling of a crack, which can be superimposed with regional secular dilation. The two sources of dilation can combine to form the various band morphologies observed. New GPS data from a rift on the Ross Ice Shelf, Antarctica, make this rift a suitable Earth analog for testing the framework of Tufts' Dilational Model. As predicted by Tufts' Dilational Model, tensile failures in the Ross Ice Shelf exhibit secular dilation, upon which a tidal signal can be seen. From this analog we conclude that Tufts' Dilational Model for Europan ridges and bands may be credible and that the secular dilation is most likely from a regional source and not tidally driven.

  6. Amygdalin analogs for the treatment of psoriasis.

    Science.gov (United States)

    Perez, Juan J

    2013-05-01

    Psoriasis is one of the most prevalent immune-mediated illnesses worldwide. The disease can still only be managed rather than cured, so treatments are aimed at clearing skin lesions and preventing their recurrence. Several treatments are available depending on the extent of the psoriatic lesion. Among the topical treatments, corticosteroids, vitamin D3 analogs and retinoids are commonly used. However, these treatments may have adverse effects in the long term. Conversely, conventional systemic treatments include immunosuppressants such as cyclosporin or methotrexate, which are associated with high toxicity levels. Biologicals are alternative therapeutic agents introduced in the last 10 years. These include fusion proteins or monoclonal antibodies designed to inhibit the action of specific cytokines or to prevent T-lymphocyte activation. However, owing to recent knowledge of the etiology of the disease, diverse new small molecules have appeared as promising alternatives for the treatment of psoriasis, among them inhibitors of JAK3, inhibitors of PDE4 and amygdalin analogs. The latter are promising small molecules, presently in preclinical studies, which are the subject of the present report.

  7. Efficient Analog Circuits for Boolean Satisfiability

    CERN Document Server

    Yin, Xunzhao; Varga, Melinda; Ercsey-Ravasz, Maria; Toroczkai, Zoltan; Hu, Xiaobo Sharon

    2016-01-01

    Efficient solutions to NP-complete problems would significantly benefit both science and industry. However, such problems are intractable on digital computers based on the von Neumann architecture, thus creating the need for alternative solutions to tackle such problems. Recently, a deterministic, continuous-time dynamical system (CTDS) was proposed (Nature Physics, 7(12), 966 (2011)) to solve a representative NP-complete problem, Boolean Satisfiability (SAT). This solver shows polynomial analog time-complexity on even the hardest benchmark $k$-SAT ($k \\geq 3$) formulas, but at an energy cost through exponentially driven auxiliary variables. With some modifications to the CTDS equations, here we present a novel analog hardware SAT solver, AC-SAT, implementing the CTDS. AC-SAT is intended to be used as a co-processor, and with its modular design can be readily extended to different problem sizes. The circuit is designed and simulated based on a 32nm CMOS technology. SPICE simulation results show speedup factor...
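
    A minimal numerical sketch of the continuous-time dynamics (CTDS) that AC-SAT implements in hardware: clause functions K_m that vanish when a clause is satisfied, gradient descent of the analog variables on an energy weighted by auxiliary variables a_m, and exponential growth of those weights. The forward-Euler discretization and the toy 3-SAT instance below are illustrative assumptions, not the AC-SAT circuit itself:

        import numpy as np

        def ctds_sat(clauses, n_vars, dt=0.05, max_steps=20000, seed=0):
            """Forward-Euler integration of the CTDS: analog spins s_i in (-1, 1),
            clause functions K_m that vanish on satisfied clauses, gradient descent
            on V = sum_m a_m * K_m**2, and exponentially growing weights a_m."""
            rng = np.random.default_rng(seed)
            s = rng.uniform(-0.5, 0.5, n_vars)        # analog variable values
            a = np.ones(len(clauses))                 # auxiliary clause weights
            C = np.zeros((len(clauses), n_vars))      # c[m, i] in {-1, 0, +1}
            for m, clause in enumerate(clauses):
                for lit in clause:                    # signed 1-based literals
                    C[m, abs(lit) - 1] = np.sign(lit)
            k = (C != 0).sum(axis=1)                  # clause lengths
            for step in range(max_steps):
                if ((C * np.sign(s)) > 0).any(axis=1).all():
                    return np.sign(s), step           # rounded assignment satisfies every clause
                terms = np.where(C != 0, 1.0 - C * s, 1.0)
                K = terms.prod(axis=1) / 2.0 ** k     # K_m >= 0, zero only on satisfying corners
                Kmi = np.where(C != 0, K[:, None] / terms, 0.0)
                ds = 2.0 * (a[:, None] * C * Kmi * K[:, None]).sum(axis=0)   # -dV/ds
                s = np.clip(s + dt * ds, -0.999, 0.999)
                a = a * np.exp(dt * K)                # da_m/dt = a_m * K_m
            return None, max_steps

        # Tiny satisfiable 3-SAT instance: (x1 v x2 v ~x3)(~x1 v x3 v x2)(~x2 v x3 v x1)
        print(ctds_sat([[1, 2, -3], [-1, 3, 2], [-2, 3, 1]], n_vars=3))

    In the hardware version the same equations are integrated in continuous time by the analog circuit itself rather than by discrete Euler steps.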

  8. Analog IC reliability in nanometer CMOS

    CERN Document Server

    Maricau, Elie

    2013-01-01

    This book focuses on modeling, simulation and analysis of analog circuit aging. First, all important nanometer CMOS physical effects resulting in circuit unreliability are reviewed. Then, transistor aging compact models for circuit simulation are discussed and several methods for efficient circuit reliability simulation are explained and compared. Ultimately, the impact of transistor aging on analog circuits is studied. Aging-resilient and aging-immune circuits are identified and the impact of technology scaling is discussed. The models and simulation techniques described in the book are intended as an aid for device engineers, circuit designers and the EDA community to understand and to mitigate the impact of aging effects on nanometer CMOS ICs. · Enables readers to understand long-term reliability of an integrated circuit; · Reviews CMOS unreliability effects, with focus on those that will emerge in future CMOS nodes; · Provides overview of models for...

  9. C-Glycosyl Analogs of Oligosaccharides

    Science.gov (United States)

    Vauzeilles, Boris; Urban, Dominique; Doisneau, Gilles; Beau, Jean-Marie

    This chapter covers the synthesis of a large collection of "C-oligosaccharides", synthetic analogs of naturally occurring oligosaccharides in which a carbon atom replaces the anomeric, interglycosidic oxygen atom. These non-natural constructs are stable to chemical and enzymatic degradation, and are primarily devised to probe carbohydrate-based biological processes. These mainly target carbohydrate-protein interactions such as the modulation of glycoenzyme (glycosylhydrolases and transferases) activities or the design of ligands for lectin Carbohydrate Recognition Domains. The discussion is based on the key carbon-carbon bond assembling steps on carbohydrate templates: ionic (anionic and cationic chemistries, sigmatropic rearrangements) or radical assemblage, and olefin metathesis. Synthetic schemes in which at least one of the monosaccharide units is constructed by total synthesis or by cyclization of acyclic chiral chains are presented separately in a "partial de novo synthesis" section. The review also provides comments, when they are known, on the conformational and binding properties of these synthetic analogs, as well as their biological behavior when tested.

  10. Photonic analog-to-digital converters

    Science.gov (United States)

    Valley, George C.

    2007-03-01

    This paper reviews over 30 years of work on photonic analog-to-digital converters. The review is limited to systems in which the input is a radio-frequency (RF) signal in the electronic domain and the output is a digital version of that signal also in the electronic domain, and thus the review excludes photonic systems directed towards digitizing images or optical communication signals. The state of the art in electronic ADCs, basic properties of ADCs and properties of analog optical links, which are found in many photonic ADCs, are reviewed as background information for understanding photonic ADCs. Then four classes of photonic ADCs are reviewed: 1) photonic assisted ADC in which a photonic device is added to an electronic ADC to improve performance, 2) photonic sampling and electronic quantizing ADC, 3) electronic sampling and photonic quantizing ADC, and 4) photonic sampling and quantizing ADC. It is noted, however, that all 4 classes of “photonic ADC” require some electronic sampling and quantization. After reviewing all known photonic ADCs in the four classes, the review concludes with a discussion of the potential for photonic ADCs in the future.

  11. Metaphor and analogy in everyday problem solving.

    Science.gov (United States)

    Keefer, Lucas A; Landau, Mark J

    2016-11-01

    Early accounts of problem solving focused on the ways people represent information directly related to target problems and possible solutions. Subsequent theory and research point to the role of peripheral influences such as heuristics and bodily states. We discuss how metaphor and analogy similarly influence stages of everyday problem solving: Both processes mentally map features of a target problem onto the structure of a relatively more familiar concept. When individuals apply this structure, they use a well-known concept as a framework for reasoning about real world problems and candidate solutions. Early studies found that analogy use helped people gain insight into novel problems. More recent research on metaphor goes further to show that activating mappings has subtle, sometimes surprising effects on judgment and reasoning in everyday problem solving. These findings highlight situations in which mappings can help or hinder efforts to solve problems. WIREs Cogn Sci 2016, 7:394-405. doi: 10.1002/wcs.1407

  12. Metastatic Insulinoma Managed with Radiolabeled Somatostatin Analog

    Science.gov (United States)

    Costa, Ricardo; Bacchi, Carlos E.; Almeida Filho, Paulo

    2013-01-01

    Insulinoma is a rare pancreatic neuroendocrine tumor. Overproduction of insulin and associated hypoglycemia are hallmark features of this disease. Diagnosis can be made through demonstration of hypoglycemia and elevated plasma levels of insulin or C-Peptide. Metastatic disease can be detected through computerized tomography (CT) scans, magnetic resonance imaging (MRI), and positron emission tomography (PET)/CT. Somatostatin receptor scintigraphy can be used not only to document metastatic disease but also as a predictive marker of the benefit from therapy with radiolabeled somatostatin analog. Unresectable metastatic insulinomas may present as a major therapeutic challenge for the treating physician. When feasible, resection is the mainstay of treatment. Prevention of hypoglycemia is a crucial goal of therapy for unresectable/metastatic tumors. Diazoxide, hydrochlorothiazide, glucagon, and intravenous glucose infusions have been used for glycemic control yielding temporary and inconsistent results. Sandostatin and its long-acting depot forms have occasionally been used in the treatment of Octreoscan-positive insulinomas. Herein, we report a case of metastatic insulinoma with very difficult glycemic control successfully treated with the radiolabeled somatostatin analog lutetium (177LU). PMID:24455330

  13. The exoplanets analogy to the Multiverse

    CERN Document Server

    Kinouchi, Osame

    2015-01-01

    The idea of a Multiverse is controversial, although it is a natural possible solution to fine-tuning problems (FTPs) in particle physics and cosmology. Here I explore the analogy between the Multiverse proposal and the proposal that there exists an infinite number of stellar systems with planets in a flat Universe, the Multiplanetverse. Although the measure problem is present in this scenario, the idea of a Multiplanetverse has predictive power, even in the absence of the direct evidence for exoplanets that has appeared since the 90s. We argue that the fine-tuning of Earth to life (and not only the fine-tuning of life to Earth) could predict with certainty the existence of exoplanets decades or even centuries before such direct evidence. Several other predictions can be made by studying only the Earth and the Sun, without any information about stars. The analogy also shows that theories defending that the Earth is the unique existing planet and that, at the same time, it is fine-tuned to life by pure chance (or pure phy...

  14. Isolation and characterization of soybean NBS analogs

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Isolation of plant resistance genes is of great help for crop resistance breeding and for insight into resistance mechanisms. The cloned plant resistance genes are classified into four classes according to their putative structural domains, of which the majority possess a nucleotide-binding site (NBS) domain that consists of P-loop, kinase2a and kinase3a motifs. The conservation of this domain offers the possibility of cloning plant resistance genes by a homology-based cloning technique. In the present study, degenerate oligonucleotide primers were designed according to the tobacco N gene and Arabidopsis RPS2, and 358 clones were isolated from the genomic DNA of the soybean cultivar Kefeng1, which is resistant to soybean mosaic virus; four NBS analogs with open reading frames were finally characterized and designated KNBS1, KNBS2, KNBS3 and KNBS4. Southern hybridization suggested that they are present in multiple copies in the soybean genome; by RFLP analysis, KNBS4 was mapped to the F linkage group and KNBS2 co-located on the J linkage group with the SCAR marker of Rsa (resistance to soybean mosaic virus). Northern analysis suggested that KNBS2-related sequences are expressed at low levels and constitutively in the root, stem and leaves of soybean. The detailed characterization of NBS analogs is very helpful for ultimately cloning soybean resistance genes.

  15. Antarctic Analog for Dilational Bands on Europa

    Science.gov (United States)

    Hurford, T. A.; Brunt, K. M.

    2014-01-01

    Europa's surface shows signs of extension, which is revealed as lithospheric dilation expressed along ridges, dilational bands and ridged bands. Ridges, the most common tectonic feature on Europa, comprise a central crack flanked by two raised banks a few hundred meters high on each side. Together these three classes may represent a continuum of formation. In Tufts' Dilational Model ridge formation is dominated by daily tidal cycling of a crack, which can be superimposed with regional secular dilation. The two sources of dilation can combine to form the various band morphologies observed. New GPS data along a rift on the Ross Ice Shelf, Antarctica is a suitable Earth analog to test the framework of Tufts' Dilational Model. As predicted by Tufts' Dilational Model, tensile failures in the Ross Ice Shelf exhibit secular dilation, upon which a tidal signal can be seen. From this analog we conclude that Tufts' Dilational Model for Europan ridges and bands may be credible and that the secular dilation is most likely from a regional source and not tidally driven.

  16. Comparison between analog and digital filters

    Directory of Open Access Journals (Sweden)

    Zoltan Erdei

    2010-12-01

    Full Text Available Digital signal processing (DSP) is one of the most powerful technologies and will shape science and engineering in the 21st century. Revolutionary changes have already been made in different areas of research such as communications, medical imaging, radar and sonar technology, high-fidelity audio reproduction, etc. Each of these fields has developed its own signal processing technology, with its own algorithms, mathematics and techniques. Digital filters are used for two general purposes: to separate signals that have been combined and to restore signals that have been distorted in some way. The objective of this paper is to compare some basic digital filters with their analog counterparts, such as low-pass, high-pass and band-pass filters. Scientists and engineers recognize that, in comparison with analog filters, digital filters can process the same signal in real time with broader flexibility. This understanding is important to motivate engineers to become interested in the field of DSP. The analysis of the results is carried out using dedicated MATLAB and Simulink libraries, such as the Signal Processing Toolbox.
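
    A comparison of the kind described above can also be sketched with open-source tools. Below is a minimal Python/SciPy example (the order, cutoff and sampling rate are arbitrary illustrative choices, not values from the paper) that designs an analog low-pass Butterworth filter and its digital counterpart and checks their attenuation at the cutoff:

        import numpy as np
        from scipy import signal

        fs = 8000.0     # sampling rate of the digital filter, Hz
        fc = 1000.0     # cutoff frequency, Hz
        order = 4

        # Analog Butterworth low-pass prototype (s-domain); cutoff given in rad/s.
        b_a, a_a = signal.butter(order, 2 * np.pi * fc, btype="low", analog=True)
        w_a, h_a = signal.freqs(b_a, a_a, worN=np.logspace(2, 5, 2000))

        # Digital counterpart of the same specification (bilinear transform, z-domain).
        b_d, a_d = signal.butter(order, fc, btype="low", fs=fs)
        w_d, h_d = signal.freqz(b_d, a_d, worN=2000, fs=fs)

        # Both responses should be close to -3 dB at the cutoff frequency.
        print("analog  @ fc: %.2f dB" % (20 * np.log10(abs(h_a[np.argmin(abs(w_a - 2 * np.pi * fc))]))))
        print("digital @ fc: %.2f dB" % (20 * np.log10(abs(h_d[np.argmin(abs(w_d - fc))]))))

    Near the cutoff the two responses agree closely; towards the Nyquist frequency the digital filter obtained by the bilinear transform deviates from the analog prototype because of frequency warping, which is one of the differences such comparisons typically highlight.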

  17. Martian hillside gullies and icelandic analogs

    Science.gov (United States)

    Hartmann, William K.; Thorsteinsson, Thorsteinn; Sigurdsson, Freysteinn

    2003-04-01

    We report observations of Icelandic hillside gully systems that are near duplicates of gullies observed on high-latitude martian hillsides. The best Icelandic analogs involve basaltic talus slopes at the angle of repose, with gully formation by debris flows initiated by ground water saturation, and/or by drainage of water from upslope cliffs. We report not only the existence of Mars analog gullies, but also an erosional sequence of morphologic forms, found both on Mars and in Iceland. The observations support hypotheses calling for creation of martian gullies by aqueous processes. Issues remain as to whether the water in each case comes only from surficial sources, such as melting of ground ice or snow, or from underground sources such as aquifers that gain surface access in hillsides. Iceland has many examples of the former, but the latter mechanism is not ruled out. Our observations are consistent with the martian debris flow mechanism of F. Costard et al. (2001c, Science 295, 110-113), except that classic debris flows begin at midslope more frequently than on Mars. From morphologic observations, we suggest that some martian hillside gully systems not only involve significant evolution by extended erosive activity, but gully formation may occur in episodes, and the time interval since the last episode is considerably less than the time interval needed to erase the gully through normal martian obliteration processes.

  18. Palytoxin and Analogs: Biological and Ecological Effects

    Directory of Open Access Journals (Sweden)

    Vítor Ramos

    2010-06-01

    Full Text Available Palytoxin (PTX) is a potent marine toxin that was originally found in soft corals from tropical areas of the Pacific Ocean. Soon after, its occurrence was observed in numerous other marine organisms from the same ecological region. More recently, several analogs of PTX were discovered, remarkably all from species of the dinoflagellate genus Ostreopsis. Since these dinoflagellates are also found in other tropical and even in temperate regions, the formerly unsuspected broad distribution of these toxins was revealed. Toxicological studies with these compounds repeatedly show low LD50 values in different mammals, revealing an acute toxic effect on several organs, as demonstrated by different routes of exposure. Bioassays on some marine invertebrates and evidence from environmental populations exposed to the toxins also indicate the high impact that these compounds may have on natural food webs. The recognition of their wide distribution, coupled with the poisoning effects that these toxins can have on animals and especially on humans, has concerned the scientific community. In this paper, we review the current knowledge on the effects of PTX and its analogs on different organisms, exposing the impact that these toxins may have in coastal ecosystems.

  19. Metastatic Insulinoma Managed with Radiolabeled Somatostatin Analog

    Directory of Open Access Journals (Sweden)

    Ricardo Costa

    2013-01-01

    Full Text Available Insulinoma is a rare pancreatic neuroendocrine tumor. Overproduction of insulin and associated hypoglycemia are hallmark features of this disease. Diagnosis can be made through demonstration of hypoglycemia and elevated plasma levels of insulin or C-Peptide. Metastatic disease can be detected through computerized tomography (CT) scans, magnetic resonance imaging (MRI), and positron emission tomography (PET)/CT. Somatostatin receptor scintigraphy can be used not only to document metastatic disease but also as a predictive marker of the benefit from therapy with radiolabeled somatostatin analog. Unresectable metastatic insulinomas may present as a major therapeutic challenge for the treating physician. When feasible, resection is the mainstay of treatment. Prevention of hypoglycemia is a crucial goal of therapy for unresectable/metastatic tumors. Diazoxide, hydrochlorothiazide, glucagon, and intravenous glucose infusions have been used for glycemic control yielding temporary and inconsistent results. Sandostatin and its long-acting depot forms have occasionally been used in the treatment of Octreoscan-positive insulinomas. Herein, we report a case of metastatic insulinoma with very difficult glycemic control successfully treated with the radiolabeled somatostatin analog lutetium (177Lu).

  20. Operational Lessons Learned from NASA Analog Missions

    Science.gov (United States)

    Arnold, Larissa S.

    2010-01-01

    What vehicle and system capabilities are required to support the activities? How will the crew and the Earth-based mission control team interact? During the initial phases of manned planetary exploration, one challenge in particular is virtually the same as during the Apollo program: How can scientific return be maximized during a relatively short surface mission? Today, NASA is investigating solutions to these challenges by conducting analog missions. These Earth-based missions possess characteristics that are analogous to missions on the Moon or Mars. Such missions are excellent for testing operational concepts and the design, configuration, and functionality of spacesuits, robots, rovers, and habitats. Analog mission crews test specific techniques and procedures for surface field geology, biological sample collection, and planetary protection. The process of actually working an analog mission reveals a myriad of small details that either contribute to or impede efficient operations, many of which would never have been thought about otherwise. It also helps to define the suite of tools, containers, and other small equipment that surface explorers will use. This paper focuses on how analog missions have addressed selected operational considerations for future planetary missions.