Beltran, H.; Perez, E.; Chen, Zhe
2009-01-01
This paper describes a Fixed Maximum Power Point analog control used in a step-down pulse-width-modulated power converter. The DC/DC converter drives a DC motor used in small water-pumping installations, without any electric storage device. The power supply is provided by PV panels working around … The proposed optimal-power-point fixed-voltage control system is analyzed in comparison to other, more complex controls … their maximum power point, with a fixed operating-voltage value. The control circuit implementation is not only simple and cheap but also robust and reliable. System protections and adjustments are also proposed. Simulation and hardware results are reported for a 150 W water-pumping application system. …
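The appeal of the fixed-voltage approach is that the control loop collapses to a comparison against one reference. As a rough behavioral sketch (the 17 V reference, the proportional gain, and all names are illustrative assumptions, not values from the paper):

```python
def fixed_voltage_mppt_step(v_panel, v_ref=17.0, duty=0.5, k_p=0.01):
    """One iteration of a fixed-reference-voltage control loop for a buck
    (step-down) converter fed by a PV panel.

    If the panel voltage rises above the fixed reference, the duty cycle
    is raised to draw more current and pull the panel back toward the
    reference; if it sags below, the duty cycle is reduced to unload the
    panel.  Gains and the reference are illustrative.
    """
    error = v_panel - v_ref
    duty = duty + k_p * error          # proportional correction
    return min(max(duty, 0.0), 1.0)    # clamp to the PWM range [0, 1]
```

In the hardware described, this loop would be a single analog comparator stage; the sketch only mirrors its input-output behavior.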
47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.
2010-10-01
Title 47 (Telecommunication), Vol. 2, 2010-10-01: Analog video transmissions in the Fixed-Satellite Services. Section 25.211, Federal Communications Commission (continued), Common Carrier Services, Satellite Communications, Technical Standards, § 25.211 Analog video transmissions …
Kitamura, Kazuyoshi; Chiba, Tatsuya; Mabuchi, Fumihiko; Ishijima, Kiyotaka; Omoto, Shu; Kashiwagi, Fumiko; Godo, Takashi; Kogure, Satoshi; Goto, Teruhiko; Shibuya, Takashi; Tanabe, Jhoji; Tsukahara, Shigeo; Tsuchiya, Tadaharu; Tokunaga, Takaharu; Hosaka, Osamu; Saito, Tetsunori
2018-01-01
Purpose: To assess the efficacy and safety of switching from prostaglandin analog (PGA) monotherapy to tafluprost/timolol fixed-combination (Taf/Tim) therapy. Subjects and Methods: Patients with primary open-angle glaucoma, normal-tension glaucoma, or ocular hypertension who had received PGA monotherapy for at least 3 months were enrolled. Patients were examined at 1, 2, and 3 months after changing therapies. Subsequently, the patients were returned to PGA monotherapy. The examined parameters included intraocular pressure (IOP) and adverse events. A questionnaire survey was conducted after the switch to Taf/Tim therapy. Results: Forty patients with a mean age of 66.5 ± 10.3 years were enrolled; 39 of these patients completed the study protocol. Switching to Taf/Tim significantly reduced the IOP from 18.2 ± 2.6 mmHg at baseline to 14.8 ± 2.5 mmHg at 1 month, 15.2 ± 2.8 mmHg at 2 months, and 14.9 ± 2.5 mmHg at 3 months (P …). Taf/Tim reduced the pulse rate, but not significantly. No significant differences were observed in blood pressure, conjunctival hyperemia, or corneal adverse events. A questionnaire showed that the introduction of Taf/Tim did not significantly influence symptoms. Conclusions: Compared with PGA monotherapy, Taf/Tim fixed-combination therapy significantly reduced IOP without severe adverse events. PMID:29675274
2010-10-01
Title 47 (Telecommunication), Vol. 2, 2010-10-01: Narrowband analog transmissions, digital transmissions, and video transmissions in the GSO Fixed-Satellite Service. Section 25.212, Federal Communications Commission (continued), Common Carrier Services, Satellite Communications …
Fixed-head star tracker magnitude calibration on the solar maximum mission
Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.
1990-01-01
The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.
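The sensitivity index described above is a plain ratio, and a magnitude-to-volts transformation of the kind being calibrated would follow the usual astronomical flux law. A minimal sketch, assuming an illustrative scale factor `v0` and the standard 2.5-magnitudes-per-decade law rather than SMM's actual calibrated equation:

```python
def predicted_intensity(magnitude, v0=5.0):
    """Hypothetical magnitude-to-volts transformation following the usual
    astronomical flux law: each step of 2.5 magnitudes is a factor of 10
    in flux.  v0 is an illustrative scale (volts at magnitude 0), not a
    calibrated FHST constant."""
    return v0 * 10.0 ** (-0.4 * magnitude)

def sensitivity_index(observed_volts, predicted_volts):
    """Sensitivity index as defined in the abstract: observed star
    intensity (volts) divided by predicted intensity (volts).  A value
    near 1.0 indicates a well-calibrated tracker."""
    return observed_volts / predicted_volts
```

Tracking this index over time is what lets the aging of each sensor be quantified.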
Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood
Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.
2011-01-01
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are …
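Integrating an incidental parameter out of the likelihood by Monte Carlo can be sketched in a few lines. The following is a generic illustration only (a logit link and a standard normal sampling density stand in for the paper's carefully chosen densities; all names are assumptions):

```python
import math
import random

def mc_marginal_loglik(y_it, x_it, beta, draws=500, seed=0):
    """Monte Carlo approximation of a marginal log-likelihood in which an
    individual effect alpha is integrated out of a simple binary-choice
    panel model.  The N(0, 1) sampling density is a placeholder for the
    'appropriate density' the paper constructs; this is a sketch of the
    principle, not the authors' estimator."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        alpha = rng.gauss(0.0, 1.0)          # draw the incidental parameter
        lik = 1.0
        for y, x in zip(y_it, x_it):         # likelihood over time periods
            p = 1.0 / (1.0 + math.exp(-(alpha + beta * x)))  # logit link
            lik *= p if y == 1 else (1.0 - p)
        total += lik
    return math.log(total / draws)           # log of the MC average
```

Averaging the integrated likelihood over draws, rather than maximizing over each alpha, is what sidesteps the incidental parameters problem.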
Stolze, M.; Gogova, D.; Thomas, L.-K.
2005-01-01
Tungsten oxide films with varied substoichiometry were deposited by reactive DC-sputtering from a W target at different oxygen partial pressures. The inherent maximum achievable electrochemical colouration and electrocolouration were found to be dependent on the oxygen content and consequently on the film substoichiometry. These results were related to those obtained in a previous study by colouration with hydrogen spillover from a catalyst (gasochromic colouration) of films fabricated in the same way. A clear analogy among the colouration by the three different techniques appeared
Fixed Parameter Evolutionary Algorithms and Maximum Leaf Spanning Trees: A Matter of Mutations
Kratsch, Stefan; Lehre, Per Kristian; Neumann, Frank
2011-01-01
Evolutionary algorithms have been shown to be very successful for a wide range of NP-hard combinatorial optimization problems. We investigate the NP-hard problem of computing a spanning tree that has a maximal number of leaves by evolutionary algorithms in the context of fixed parameter tractability … two common mutation operators, we show that an operator related to spanning tree problems leads to an FPT running time, in contrast to a general mutation operator that does not have this property. …
Thompson, R. H.; Gambardella, P. J.
1980-01-01
The Solar Maximum Mission (SMM) spacecraft provides an excellent opportunity for evaluating attitude determination accuracies achievable with tracking instruments such as fixed-head star trackers (FHSTs). As a part of its payload, SMM carries a highly accurate fine-pointing Sun sensor (FPSS). The FPSS provides an independent check of the pitch and yaw parameters computed from observations of stars in the FHST field of view. A method to determine the alignment of the FHSTs relative to the FPSS using spacecraft data is applied. Two methods that were used to determine distortions in the 8 degree by 8 degree field of view of the FHSTs using spacecraft data are also presented. The attitude determination accuracy performance of the in-flight-calibrated FHSTs is evaluated.
Bellver-Cebreros, Consuelo; Rodriguez-Danta, Marcelo
2009-01-01
An apparently unnoticed analogy between the torque-free motion of a rotating rigid body about a fixed point and the propagation of light in anisotropic media is stated. First, a new plane construction for visualizing this torque-free motion is proposed. This method uses an intrinsic representation alternative to angular momentum and independent of…
Cholet, M.; Minerbe, F.; Oliviero, G.; Pestel, V.; Frémont, F.
2014-01-01
Highlights: • Young-type interferences with electrons are revisited. • Oscillations in the angular distribution of the energy maximum of Auger spectra are evidenced. • Model calculations are in good agreement with the experimental result. • The position of the Auger spectra oscillates in counterphase with the total intensity. - Abstract: In this article, we present experimental evidence of a particular electron-interference phenomenon. The electrons are provided by autoionization of 2l2l′ doubly excited He atoms following the capture of H₂ electrons by a slow He²⁺ incoming ion. We observe that the position of the energy maximum of the Auger structures oscillates with the detection angle. Calculation based on a simple model that includes interferences clearly shows that the present oscillations are due to Young-type interferences caused by electron scattering on both H⁺ centers.
Kodner Robin B
2010-10-01
Background: Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It can also inform the user of the positional uncertainty for query sequences by calculating the expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent the number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
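The per-edge posterior probability described above follows from normalizing per-edge placement likelihoods. A sketch of that principle, assuming a flat prior over edges (this illustrates the statistics only, not pplacer's actual machinery):

```python
import math

def placement_posteriors(edge_logliks):
    """Posterior probability of placing a query on each candidate edge,
    obtained by normalizing per-edge likelihoods under a flat prior.
    Subtracting the max log-likelihood before exponentiating keeps the
    computation numerically stable for very negative log-likelihoods."""
    m = max(edge_logliks)
    weights = [math.exp(l - m) for l in edge_logliks]
    z = sum(weights)
    return [w / z for w in weights]
```

A near-uniform posterior across several edges is exactly the edge-by-edge uncertainty signal the package reports.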
Costa VP
2012-05-01
Vital Paulino Costa1, Hamilton Moreira2, Mauricio Della Paolera3, Maria Rosa Bet de Moraes Silva4 — 1Universidade Estadual de Campinas (UNICAMP), São Paulo; 2Universidade Federal do Paraná, Curitiba; 3Santa Casa de Misericórdia de São Paulo, São Paulo; 4Faculdade de Medicina de Botucatu, UNESP, Brazil. Purpose: To assess the safety and efficacy of transitioning patients whose intraocular pressure (IOP) had been insufficiently controlled on prostaglandin analog (PGA) monotherapy to treatment with travoprost 0.004%/timolol 0.5% fixed combination with benzalkonium chloride (TTFC). Methods: This prospective, multicenter, open-label, historical-controlled, single-arm study transitioned patients who had primary open-angle glaucoma, pigment dispersion glaucoma, or ocular hypertension and who required further IOP reduction from PGA monotherapy to once-daily treatment with TTFC for 12 weeks. IOP and safety (adverse events, corrected distance visual acuity, and slit-lamp biomicroscopy) were assessed at baseline, week 4, and week 12. A solicited ocular symptom survey was administered at baseline and at week 12. Patients and investigators reported their medication preference at week 12. Results: Of 65 patients enrolled, 43 had received prior travoprost therapy and 22 had received prior nontravoprost therapy (n = 18, bimatoprost; n = 4, latanoprost). In the total population, mean IOP was significantly reduced from baseline (P = 0.000009), showing a 16.8% reduction after 12 weeks of TTFC therapy. In the study subgroups, mean IOP was significantly reduced from baseline to week 12 (P = 0.0001) in the prior travoprost cohort (19.0% reduction) and in the prior nontravoprost cohort (13.1% reduction). Seven mild, ocular, treatment-related adverse events were reported. Of the ten ocular symptom questions, eight had numerically lower percentages with TTFC compared with prior PGA monotherapy and two had numerically higher percentages with TTFC (dry eye symptoms and ocular …
1980-01-01
The purpose of this Order is to raise the maximum liability of the nuclear operator to one billion (milliard) Belgian francs per nuclear incident. This measure was taken with a view to keeping the operator's maximum liability at least at a constant value. (NEA) [fr]
Charlier, G.W.P.
1994-01-01
In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator, based on a discontinuous objective function, and proved its consistency under weak distributional assumptions. However, the rate of convergence of this estimator is low (of order N^(1/3)).
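The discontinuity of the objective is easy to see in code: the score counts sign agreements, so it is a step function of the parameter. A toy sketch with a scalar slope and a grid maximizer (purely illustrative; the real estimator works with covariate differences across the two periods):

```python
def score(beta, xs, ys):
    """Maximum-score-style objective for a binary choice model with a
    scalar slope: the number of observations whose outcome matches the
    sign rule y = 1{x * beta >= 0}.  As a function of beta this is a
    step function, hence discontinuous -- the root of the slow
    convergence rate discussed in the abstract."""
    return sum(1 for x, y in zip(xs, ys) if (x * beta >= 0) == (y == 1))

def max_score_grid(xs, ys, grid):
    """Toy maximizer: evaluate the score on a fixed grid of candidate
    slopes and return the best one."""
    return max(grid, key=lambda b: score(b, xs, ys))
```

Because the objective is flat between jumps, gradient methods fail and grid or combinatorial search is the natural (if crude) approach.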
Reynolds analogy for the Rayleigh problem at various flow modes.
Abramov, A A; Butkovskii, A V
2016-07-01
The Reynolds analogy and the extended Reynolds analogy for the Rayleigh problem are considered. For a viscous incompressible fluid we derive the Reynolds analogy as a function of the Prandtl number and the Eckert number. We show that for any positive Eckert number, the Reynolds analogy as a function of the Prandtl number has a maximum. For a monatomic gas in the transitional flow regime, using the direct simulation Monte Carlo method, we investigate the extended Reynolds analogy, i.e., the relation between the shear stress and the energy flux transferred to the boundary surface, at different velocities and temperatures. We find that the extended Reynolds analogy for a rarefied monatomic gas flow with the temperature of the undisturbed gas equal to the surface temperature depends weakly on time and is close to 0.5. We show that at any fixed dimensionless time the extended Reynolds analogy depends on the plate velocity and temperature and the undisturbed gas temperature mainly via the Eckert number. For Eckert numbers of the order of unity or less we generalize the extended Reynolds analogy. The generalized Reynolds analogy depends mainly on dimensionless time for all considered Eckert numbers of the order of unity or less.
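For orientation, the classical boundary-layer form of the Reynolds analogy (not the Rayleigh-problem extension studied in the paper) relates the Stanton number to the skin-friction coefficient, with the Chilton-Colburn correction supplying the Prandtl-number dependence:

```python
def stanton_from_cf(cf, pr=1.0):
    """Classical Reynolds analogy with the Chilton-Colburn correction:
    St = (Cf / 2) * Pr**(-2/3).  For Pr = 1 this reduces to the plain
    Reynolds analogy St = Cf / 2.  This is the textbook incompressible
    form, included only as background for the extended analogy above."""
    return 0.5 * cf * pr ** (-2.0 / 3.0)
```

The paper's contribution is precisely that such fixed relations break down (and can be generalized) across flow regimes and Eckert numbers.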
Ulmann, Bernd
2013-01-01
This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Hofmann, R.B.
1995-01-01
Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modfications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquified natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository
Fixed Points - From Russia with Love - A Primer of Fixed Point Theory. A K Vijaykumar. Book Review, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 101-102.
2008-01-01
The ensemble Fix celebrates its 40th anniversary on 13 December at the Saku Suurhall in Tallinn. The concert's special guest is the ensemble Apelsin, joined by Jassi Zahharov and the HaleBopp Singers. The evening is hosted by Tarmo Leinatamm.
A portable storage maximum thermometer
Fayart, Gerard.
1976-01-01
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system [fr]
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device, with n segments, connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.
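The signal chain above (rectify, quantize, latch the highest level) can be mimicked in software. A sketch under assumed parameters (8 levels, unit full scale; the set of "blown" fuse indices models the irreversible microfuse record):

```python
class MaxValueMemory:
    """Software sketch of the disclosed system: condition an incoming
    analog sample (rectification), quantize it to a discrete driver
    level, and latch the highest level seen.  Like a blown microfuse,
    a latched level is never cleared.  Resolution and range are
    illustrative, not taken from the patent."""

    def __init__(self, levels=8, full_scale=1.0):
        self.levels = levels
        self.full_scale = full_scale
        self.blown = set()                 # indices of 'blown' fuses

    def sample(self, v):
        v = abs(v)                         # rectify the analog signal
        idx = min(int(v / self.full_scale * self.levels), self.levels - 1)
        self.blown.add(idx)                # latch: a fuse never un-blows

    def max_level(self):
        """Highest quantized level ever observed, or None if no samples."""
        return max(self.blown) if self.blown else None
```

The value survives "power loss" here in the sense that the record is append-only, just as a blown fuse preserves the peak even when the circuit is unpowered.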
Science Teachers' Analogical Reasoning
Mozzer, Nilmara Braga; Justi, Rosária
2013-08-01
Analogies can play a relevant role in students' learning. However, for the effective use of analogies, teachers should not only have a well-prepared repertoire of validated analogies, which could serve as bridges between the students' prior knowledge and the scientific knowledge they desire them to understand, but also know how to introduce analogies in their lessons. Both aspects have been discussed in the literature in the last few decades. However, almost nothing is known about how teachers draw their own analogies for instructional purposes or, in other words, about how they reason analogically when planning and conducting teaching. This is the focus of this paper. Six secondary teachers were individually interviewed; the aim was to characterize how they perform each of the analogical reasoning subprocesses, as well as to identify their views on analogies and their use in science teaching. The results were analyzed by considering elements of both theories about analogical reasoning: the structural mapping proposed by Gentner and the analogical mechanism described by Vosniadou. A comprehensive discussion of our results makes it evident that teachers' content knowledge on scientific topics and on analogies as well as their pedagogical content knowledge on the use of analogies influence all their analogical reasoning subprocesses. Our results also point to the need for improving teachers' knowledge about analogies and their ability to perform analogical reasoning.
Analog subsystem for the plutonium protection system
Arlowe, H.D.
1978-12-01
An analog subsystem is described which monitors certain functions in the Plutonium Protection System. Rotary and linear potentiometer output signals are digitized, as are the outputs from thermistors and container "bulge" sensors. This work was sponsored by the Department of Energy/Office of Safeguards and Security (DOE/OSS) as part of the overall Sandia Fixed Facility Physical Protection Program.
Intuitive analog circuit design
Thompson, Marc
2013-01-01
Intuitive Analog Circuit Design outlines ways of thinking about analog circuits and systems that let you develop a feel for what a good, working analog circuit design should be. This book reflects author Marc Thompson's 30 years of experience designing analog and power electronics circuits and teaching graduate-level analog circuit design, and is the ideal reference for anyone who needs a straightforward introduction to the subject. In this book, Dr. Thompson describes intuitive and ""back-of-the-envelope"" techniques for designing and analyzing analog circuits, including transistor amplifi
Aragonès, Enriqueta; Gilboa, Itzhak; Postlewaite, Andrew; Schmeidler, David; Universitat Autònoma de Barcelona. Unitat de Fonaments de l'Anàlisi Econòmica; Universitat Autònoma de Barcelona. Institut d'Anàlisi Econòmica
2013-01-01
The art of rhetoric may be defined as changing other people's minds (opinions, beliefs) without providing them new information. One technique heavily used by rhetoric employs analogies. Using analogies, one may draw the listener's attention to similarities between cases and re-organize existing information in a way that highlights certain regularities. In this paper we offer two models of analogies, discuss their theoretical equivalence, and show that finding good analogies is a comp…
HAPS, a Handy Analog Programming System
Højberg, Kristian Søe
1975-01-01
HAPS (Hybrid Analog Programming System) is an analog compiler that can be run on a minicomputer in an interactive mode. Essentially, HAPS is written in FORTRAN. The equations to be programmed for an analog computer are read in using a FORTRAN-like notation. The input must contain maximum … and emphasizes the limitations HAPS puts on equation structure, types of computing circuit, scaling, and static testing. …
Hyndman, D E
2013-01-01
Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl
Stefanovic, Danica
2008-01-01
Structured Analog CMOS Design describes a structured analog design approach that makes it possible to simplify complex analog design problems and develop a design strategy that can be used for the design of large number of analog cells. It intentionally avoids treating the analog design as a mathematical problem, developing a design procedure based on the understanding of device physics and approximations that give insight into parameter interdependences. The proposed transistor-level design procedure is based on the EKV modeling approach and relies on the device inversion level as a fundament
Detecting analogies unconsciously
Thomas Peter Reber
2014-01-01
Analogies may arise from the conscious detection of similarities between a present and a past situation. In this functional magnetic resonance imaging study, we tested whether young volunteers would detect analogies unconsciously between a current supraliminal (visible) and a past subliminal (invisible) situation. The subliminal encoding of the past situation precludes awareness of analogy detection in the current situation. First, participants encoded subliminal pairs of unrelated words in either one or nine encoding trials. Later, they judged the semantic fit of supraliminally presented new words that either retained a previously encoded semantic relation ('analog') or not ('broken analog'). Words in analogs versus broken analogs were judged closer semantically, which reflects unconscious analogy detection. Hippocampal activity associated with subliminal encoding correlated with the behavioral measure of unconscious analogy detection. Analogs versus broken analogs were processed with reduced prefrontal but enhanced medial temporal activity. We conclude that analogous episodes can be detected even unconsciously, drawing on the episodic memory network.
The price of fixed income market volatility
Mele, Antonio
2015-01-01
Fixed income volatility and equity volatility evolve heterogeneously over time, co-moving disproportionately during periods of global imbalances and each reacting to events of a different nature. While the methodology for options-based "model-free" pricing of equity volatility has been known for some time, little is known about analogous methodologies for pricing various fixed income volatilities. This book fills this gap and provides a unified evaluation framework of fixed income volatility while dealing with disparate markets such as interest-rate swaps, government bonds, time-deposits and credit. It develops model-free, forward-looking indexes of fixed-income volatility that match different quoting conventions across various markets, and uncovers subtle yet important pitfalls arising from naïve superimpositions of the standard equity volatility methodology when pricing various fixed income volatilities. The ultimate goal of the authors' efforts is to make interest rate volatility standardization a valuable…
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
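The ML metric underlying such receivers reduces, for AWGN, to choosing the symbol sequence whose modulated waveform lies closest to the received samples. A brute-force sketch of that generic metric (the identity `waveform` used in testing is a placeholder, not a CPM modulator, and the simplified receivers approximate rather than enumerate):

```python
import itertools
import math

def ml_sequence_detect(received, alphabet, waveform):
    """Brute-force maximum-likelihood sequence detection over an AWGN
    channel: enumerate every candidate symbol sequence, map it through
    the modulator model `waveform`, and pick the sequence at minimum
    Euclidean distance from the received samples.  Exponential in the
    sequence length -- exactly the cost the simplified CPM receivers
    are designed to avoid."""
    best, best_d = None, math.inf
    n = len(received)
    for seq in itertools.product(alphabet, repeat=n):
        d = sum((r - s) ** 2 for r, s in zip(received, waveform(seq)))
        if d < best_d:
            best, best_d = seq, d
    return best
```

The receiver structures in the paper keep this metric's front end (which depends only on the alphabet size M) and approximate the exponential search stage.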
Dobkin, Bob
2012-01-01
Analog circuit and system design today is more essential than ever before. With the growth of digital systems, wireless communications, and complex industrial and automotive systems, designers are being challenged to develop sophisticated analog solutions. This comprehensive source book of circuit design solutions aids engineers with elegant and practical design techniques that focus on common analog challenges. The book's in-depth application examples provide insight into circuit design and application solutions that you can apply in today's demanding designs.
Sarpeshkar, R
2014-03-28
We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.
Combined analog-to-digital converter
Zhukov, A.V.; Rzhendinskaya, S.N.
1983-01-01
A 10-bit analog-to-digital converter (ADC) designed for operation in spectrometers with time-dependent filters is described. The ADC operation is based on combining the parallel-reading and sequential-counting methods. At a maximum conversion time of 12 μs, a timing-series frequency of 25 MHz, and four reference levels, the differential nonlinearity without statistical smoothing (maximum relative channel-width deviation from the average value) is not more than 4%.
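The combination of parallel reading and sequential counting can be illustrated behaviorally: a coarse flash stage resolves the top bits in one step, then a counter resolves the residue. A sketch under assumed bit widths (a 2+8 split chosen for illustration, not the converter's actual architecture):

```python
def two_stage_adc(v, full_scale=1.0, coarse_bits=2, fine_bits=8):
    """Behavioral sketch of a conversion combining a parallel (flash)
    stage with a sequential-counting stage.  The flash stage resolves
    the top `coarse_bits` at once against reference levels; the counter
    then steps through the residue one LSB at a time.  Bit widths and
    ranges are illustrative assumptions."""
    total_bits = coarse_bits + fine_bits
    lsb = full_scale / (1 << total_bits)
    coarse_step = full_scale / (1 << coarse_bits)
    coarse = min(int(v / coarse_step), (1 << coarse_bits) - 1)  # flash stage
    residue = v - coarse * coarse_step
    fine = 0                                                    # counting stage
    while fine < (1 << fine_bits) - 1 and (fine + 1) * lsb <= residue:
        fine += 1
    return (coarse << fine_bits) | fine
```

The attraction of the hybrid is speed: the counter only has to cover the residue range, not the full scale, which bounds the conversion time.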
Lin, Shih-Yin; Singh, Chandralekha
2011-01-01
Learning physics requires understanding the applicability of fundamental principles in a variety of contexts that share deep features. One way to help students learn physics is via analogical reasoning. Students can be taught to make an analogy between situations that are more familiar or easier to understand and another situation where the same…
Baser, Mustafa
2007-01-01
Students have difficulties in physics because of the abstract nature of concepts and principles. One of the effective methods for overcoming students' difficulties is the use of analogies to visualize abstract concepts to promote conceptual understanding. According to Iding, analogies are consistent with the tenets of constructivist learning…
Optical analogy. Synthesis report
1965-01-01
The authors report a study of the conditions under which light attenuation (reflection, diffusion, absorption) and the attenuation of some radiations (notably thermal neutrons) can be described with analog computations. The analogy sought is not between the physical properties of light and neutrons, but between their attenuation characteristics. After discussing this possible analogy, the authors propose a mathematical formulation of the neutron and optical phenomena that could theoretically justify the optical analogy. The second part reports a more practical study of optics problems, such as simple optical materials and illumination measurements; more precisely, the study of angular distributions of optical reflections, the determination of such angular distributions, and an experimental determination of the albedo.
Performance of MSGC with analog pipeline readout
Gomez, F.; Adeva, B.; Gracia, G.; Lopez, M.A.; Nunez, T.; Pazos, A.; Plo, M.; Rodriguez, A.; Santamarina, C.; Vazquez, P.
1997-01-01
We analyse some of the performance characteristics of a chromium MSGC operated with Ar-DME 50%-50% in a test beam at CERN. An excellent signal-to-noise ratio and efficiency have been achieved with this gas mixture using cathode analog pipeline readout. We also determine optimal parameters for the sampling algorithm in order to work in a random-trigger experiment (fixed target). (orig.)
Malav, O P; Talukder, S; Gokulakrishnan, P; Chand, S
2015-01-01
Health-conscious consumers are in search of nutritious and convenient food items suited to their busy lives. Vegetarianism drives the search for foods that resemble meat in nutrition and sensory character but are not of animal origin, containing instead vegetables or their modified forms; this is where the meat analog evolved and took shape. Consumers get full satisfaction from meat analogs because of the typical meaty texture, appearance, and flavor imparted during skilled production. Protein supplementation of a vegetarian diet through meat-like food can be achieved by incorporating protein-rich vegetable food-grade materials into the meat analog and by adopting technological processes that promote proper fabrication with acceptable meat-like texture, appearance, flavor, etc. The vegetables, cereals, and pulses easily available in India have great advantages and prospects for use in food products, and they can improve the nutritional and functional characters of food items. The various forms and functional characters of food items available worldwide attract meat technologists and food processors to bring innovation to the meat analog, its presentation, and its marketability, so that consumer acceptance of the meat analog can grow.
Troubleshooting analog circuits
Pease, Robert A
1991-01-01
Troubleshooting Analog Circuits is a guidebook for solving product or process related problems in analog circuits. The book also provides advice in selecting equipment, preventing problems, and general tips. The coverage of the book includes the philosophy of troubleshooting; the modes of failure of various components; and preventive measures. The text also deals with the active components of analog circuits, including diodes and rectifiers, optically coupled devices, solar cells, and batteries. The book will be of great use to both students and practitioners of electronics engineering.
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
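The small-parsimony subproblem underlying the MP criterion can be illustrated, on a fixed binary tree, by Fitch's classic algorithm. The sketch below is a minimal Python illustration with an invented 4-leaf tree and character states; it does not reproduce the abstract's Steiner-tree approximation:

```python
# Fitch's small-parsimony algorithm on a fixed rooted binary tree:
# a bottom-up pass assigns each internal node the intersection of its
# children's state sets, or their union (costing one change) if empty.

def fitch(tree, leaf_states):
    """Return the minimum number of state changes on the given tree."""
    def post(node):
        if node in leaf_states:                 # leaf node
            return {leaf_states[node]}, 0
        left, right = tree[node]
        sl, cl = post(left)
        sr, cr = post(right)
        inter = sl & sr
        if inter:
            return inter, cl + cr
        return sl | sr, cl + cr + 1             # a mutation is required here

    _, score = post("root")
    return score

# Hypothetical tree ((A,B),(C,D)) with one binary character per leaf.
tree = {"root": ("ab", "cd"), "ab": ("A", "B"), "cd": ("C", "D")}
leaves = {"A": "0", "B": "0", "C": "1", "D": "1"}
print(fitch(tree, leaves))  # a single change suffices on this tree
```

For a full alignment, the per-site scores are summed; the NP-hardness cited above arises only when the tree itself must also be searched.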
Hickman, Ian
2013-01-01
Analog Circuits Cookbook presents articles about advanced circuit techniques, components and concepts, useful IC for analog signal processing in the audio range, direct digital synthesis, and ingenious video op-amp. The book also includes articles about amplitude measurements on RF signals, linear optical imager, power supplies and devices, and RF circuits and techniques. Professionals and students of electrical engineering will find the book informative and useful.
Zamora, Paul O [Gaithersburg, MD; Pena, Louis A [Poquott, NY; Lin, Xinhua [Plainview, NY; Takahashi, Kazuyuki [Germantown, MD
2012-07-24
The present invention provides a fibroblast growth factor heparin-binding analog of the formula: ##STR00001## where R.sub.1, R.sub.2, R.sub.3, R.sub.4, R.sub.5, X, Y and Z are as defined, pharmaceutical compositions, coating compositions and medical devices including the fibroblast growth factor heparin-binding analog of the foregoing formula, and methods and uses thereof.
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
Electrical Circuits and Water Analogies
Smith, Frederick A.; Wilson, Jerry D.
1974-01-01
Briefly describes water analogies for electrical circuits and presents plans for the construction of apparatus to demonstrate these analogies. Demonstrations include series circuits, parallel circuits, and capacitors. (GS)
Allen, Phillip E
1987-01-01
This text presents the principles and techniques for designing analog circuits to be implemented in a CMOS technology. The level is appropriate for seniors and graduate students familiar with basic electronics, including biasing, modeling, circuit analysis, and some familiarity with frequency response. Students learn the methodology of analog integrated circuit design through a hierarchically-oriented approach to the subject that provides thorough background and practical guidance for designing CMOS analog circuits, including modeling, simulation, and testing. The authors' vast industrial experience and knowledge is reflected in the circuits, techniques, and principles presented. They even identify the many common pitfalls that lie in the path of the beginning designer--expert advice from veteran designers. The text mixes the academic and practical viewpoints in a treatment that is neither superficial nor overly detailed, providing the perfect balance.
Analogical Reasoning in Geometry Education
Magdas, Ioana
2015-01-01
Analogical reasoning is used not only in mathematics but also in everyday life. In this article we approach analogical reasoning in Geometry Education. The novelty of this article is a classification of geometrical analogies by reasoning type and their exemplification. Our classification includes: analogies for understanding and setting a…
Digital and analog communication systems
Shanmugam, K. S.
1979-01-01
The book presents an introductory treatment of digital and analog communication systems with emphasis on digital systems. Attention is given to the following topics: systems and signal analysis, random signal theory, information and channel capacity, baseband data transmission, analog signal transmission, noise in analog communication systems, digital carrier modulation schemes, error control coding, and the digital transmission of analog signals.
Density estimation by maximum quantum entropy
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Analogs for transuranic elements
Weimer, W.C.; Laul, J.C.; Kutt, J.C.
1981-01-01
A combined theoretical and experimental approach is being used to estimate the long-term environmental and biogeochemical behaviors of selected transuranic elements. The objective of this research is to estimate the effect that long-term (hundreds of years) environmental weathering has on the behavior of the transuranic elements americium and curium. This is achieved by investigating the actual behavior of naturally occurring rare earth elements, especially neodymium, that serve as transuranic analogs. Determination of the analog element behavior provides data that can be used to estimate the ultimate availability to man of transuranic materials released into the environment
Farr, T. G.; Arcone, S.; Arvidson, R. W.; Baker, V.; Barlow, N. G.; Beaty, D.; Bell, M. S.; Blankenship, D. D.; Bridges, N.; Briggs, G.; Bulmer, M.; Carsey, F.; Clifford, S. M.; Craddock, R. A.; Dickerson, P. W.; Duxbury, N.; Galford, G. L.; Garvin, J.; Grant, J.; Green, J. R.; Gregg, T. K. P.; Guinness, E.; Hansen, V. L.; Hecht, M. H.; Holt, J.; Howard, A.; Keszthelyi, L. P.; Lee, P.; Lanagan, P. D.; Lentz, R. C. F.; Leverington, D. W.; Marinangeli, L.; Moersch, J. E.; Morris-Smith, P. A.; Mouginis-Mark, P.; Olhoeft, G. R.; Ori, G. G.; Paillou, P.; Reilly, J. F., II; Rice, J. W., Jr.; Robinson, C. A.; Sheridan, M.; Snook, K.; Thomson, B. J.; Watson, K.; Williams, K.; Yoshikawa, K.
2002-08-01
It is well recognized that interpretations of Mars must begin with the Earth as a reference. The most successful comparisons have focused on understanding geologic processes on the Earth well enough to extrapolate to Mars' environment. Several facets of terrestrial analog studies have been pursued and are continuing. These studies include field workshops, characterization of terrestrial analog sites, instrument tests, laboratory measurements (including analysis of Martian meteorites), and computer and laboratory modeling. The combination of all these activities allows scientists to constrain the processes operating in specific terrestrial environments and extrapolate how similar processes could affect Mars. The Terrestrial Analogs for Mars Community Panel has considered the following two key questions: (1) How do terrestrial analog studies tie in to the Mars Exploration Payload Assessment Group science questions about life, past climate, and geologic evolution of Mars, and (2) How can future instrumentation be used to address these questions. The panel has considered the issues of data collection, value of field workshops, data archiving, laboratory measurements and modeling, human exploration issues, association with other areas of solar system exploration, and education and public outreach activities.
Reasoning through Instructional Analogies
Kapon, Shulamit; diSessa, Andrea A.
2012-01-01
This article aims to account for students' assessments of the plausibility and applicability of analogical explanations, and individual differences in these assessments, by analyzing properties of students' underlying knowledge systems. We developed a model of explanation and change in explanation focusing on knowledge elements that provide a…
David Botting
2012-03-01
I will show that there is a type of analogical reasoning that instantiates a pattern of reasoning in confirmation theory that is considered at best paradoxical and at worst fatal to the entire syntactical approach to confirmation and explanation. However, I hope to elaborate conditions under which this is a sound (although not necessarily strong) method of reasoning.
Analogy, explanation, and proof
Hummel, John E.; Licato, John; Bringsjord, Selmer
2014-01-01
People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge to the task of marshaling our understanding of analogical reasoning to understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence. PMID:25414655
Hofstadter, Doug
2004-01-01
Many new ideas in theoretical physics come from analogies to older ideas in physics. For instance, the abstract notion of 'isospin' (or isotopic spin) originated in the prior concept of 'spin' (quantized angular momentum); likewise, the concept of 'phonon' (quantum of sound, or quantized collective excitation of a crystal) was based on the prior concept of 'photon' (quantum of light, or quantized element of the electromagnetic field). But these two examples, far from being exceptions, in fact represent the bread and butter of inventive thinking in physics. In a nutshell, intraphysics analogy-making -- borrowing by analogy with something already known in another area of physics -- is central to the progress of physics. The aim of this talk is to reveal the pervasiveness -- indeed, the indispensability -- of this kind of semi-irrational, wholly intuitive type of thinking (as opposed to more deductive mathematical inference) in the mental activity known as 'doing physics'. Speculations as to why wild analogical leaps are so crucial to the act of discovery in physics (as opposed to other disciplines) will be offered.
Zak, M.
1998-01-01
Quantum analog computing is based upon the similarity between the mathematical formalism of quantum mechanics and the phenomena to be computed. It exploits a dynamical convergence of several competing phenomena to an attractor which can represent an extremum of a function, an image, a solution to a system of ODEs, or a stochastic process.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the accelerometer output rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and costs less, avoiding the recording and analysis of data in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.
Christiansen, D.W.
1982-01-01
This is a reusable system for fixing a nuclear reactor fuel rod to a support. An interlock cap is fixed to the fuel rod and an interlock strip is fixed to the support. The interlock cap has two opposed fingers, which are shaped so that a base is formed with a body part. The interlock strip has an extension, which is shaped so that this is rigidly fixed to the body part of the base. The fingers of the interlock cap are elastic in bending. To fix it, the interlock cap is pushed longitudinally on to the interlock strip, which causes the extension to bend the fingers open in order to engage with the body part of the base. To remove it, the procedure is reversed. (orig.)
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Measures of Noncircularity and Fixed Points of Contractive Multifunctions
Marrero Isabel
2010-01-01
In analogy to the Eisenfeld-Lakshmikantham measure of nonconvexity and the Hausdorff measure of noncompactness, we introduce two mutually equivalent measures of noncircularity for Banach spaces satisfying a Cantor type property, and apply them to establish a fixed point theorem of Darbo type for multifunctions. Namely, we prove that every multifunction with closed values, defined on a closed set and contractive with respect to any one of these measures, has the origin as a fixed point.
Terrestrial Spaceflight Analogs: Antarctica
Crucian, Brian
2013-01-01
Alterations in immune cell distribution and function, circadian misalignment, stress, and latent viral reactivation appear to persist during Antarctic winterover at Concordia Station. Some of these changes are similar to those observed in astronauts, either during or immediately following spaceflight. Others are unique to the Concordia analog. Based on some initial immune data and environmental conditions, Concordia winterover may be an appropriate analog for some flight-associated immune system changes and mission stress effects. An ongoing smaller control study at Neumayer III will address the influence of the hypoxic variable. Changes were observed in the peripheral blood leukocyte distribution consistent with immune mobilization, similar to those observed during spaceflight. Alterations in cytokine production profiles were observed during winterover that are distinct from those observed during spaceflight, but potentially consistent with those observed during persistent hypobaric hypoxia. The reactivation of latent herpesviruses was observed during overwinter/isolation, which is consistently associated with dysregulation in immune function.
Analog storage integrated circuit
Walker, J.T.; Larsen, R.S.; Shapiro, S.L.
1989-03-07
A high speed data storage array is defined utilizing a unique cell design for high speed sampling of a rapidly changing signal. Each cell of the array includes two input gates between the signal input and a storage capacitor. The gates are controlled by a high speed row clock and low speed column clock so that the instantaneous analog value of the signal is only sampled and stored by each cell on coincidence of the two clocks. 6 figs.
Analogy, Explanation, and Proof
John eHummel
2014-11-01
People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the (inductive) inference that the milk is sour have in common with the (deductive) proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This small-seeming difference poses a challenge to the task of marshaling our understanding of analogical reasoning in the service of understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.
Component Processes in Analogical Reasoning
Sternberg, Robert J.
1977-01-01
Describes alternative theoretical positions regarding (a) the component information processes used in analogical reasoning and (b) strategies for combining these processes. Also presents results from three experiments on analogical reasoning. (Author/RK)
Inductive, Analogical, and Communicative Generalization
Adri Smaling
2003-03-01
Three forms of inductive generalization - statistical generalization, variation-based generalization, and theory-carried generalization - are insufficient for case-to-case generalization, which is a form of analogical generalization. The quality of case-to-case generalization needs to be reinforced by setting up explicit analogical argumentation. Six criteria for evaluating analogical argumentation are discussed. Good analogical reasoning is an indispensable support to forms of communicative generalization - receptive and responsive (participative) generalization - as well as to exemplary generalization.
Analogical Reasoning and Computer Programming.
Clement, Catherine A.; And Others
1986-01-01
A study of correlations between analogical reasoning and Logo programming mastery among female high school students related the results of pretests of analogical reasoning to posttests of programming mastery. A significant correlation was found between analogical reasoning and the ability to write subprocedures for use in several different…
Fixed automated spray technology.
2011-04-19
This research project evaluated the construction and performance of Boschung's Fixed Automated Spray Technology (FAST) system. The FAST system automatically sprays de-icing material on the bridge when icing conditions are about to occur. The FA...
Associative memory in an analog iterated-map neural network
Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.
1990-03-01
The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large "recall" phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
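The Hebb-rule recall dynamics described above can be sketched in a few lines of numpy: store random patterns in the weight matrix, then iterate x ← tanh(βWx) from a noisy start. The network size, pattern count, gain, and noise level below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, beta = 200, 5, 4.0            # neurons, stored patterns, analog gain

# Hebb rule: W = (1/N) sum_mu xi_mu xi_mu^T, with zero self-coupling
xi = rng.choice([-1.0, 1.0], size=(P, N))
W = xi.T @ xi / N
np.fill_diagonal(W, 0.0)

# Parallel analog dynamics: x_{t+1} = tanh(beta * W @ x_t)
x = xi[0] + 0.3 * rng.standard_normal(N)    # noisy version of pattern 0
for _ in range(50):
    x = np.tanh(beta * (W @ x))

overlap = np.sign(x) @ xi[0] / N            # recall quality in [-1, 1]
print(overlap)                              # close to 1 on successful recall
```

At this low storage ratio (α = P/N = 0.025) the dynamics settle into the retrieval fixed point; raising α or lowering β pushes the network out of the recall phase the abstract maps.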
Fixed mobile convergence handbook
Ahson, Syed A
2010-01-01
From basic concepts to future directions, this handbook provides technical information on all aspects of fixed-mobile convergence (FMC). The book examines such topics as integrated management architecture, business trends and strategic implications for service providers, personal area networks, mobile controlled handover methods, SIP-based session mobility, and supervisory and notification aggregator service. Case studies are used to illustrate technical and systematic implementation of unified and rationalized internet access by fixed-mobile network convergence. The text examines the technolo
Analogical scaffolding: Making meaning in physics through representation and analogy
Podolefsky, Noah Solomon
This work reviews the literature on analogy, introduces a new model of analogy, and presents a series of experiments that test and confirm the utility of this model to describe and predict student learning in physics with analogy. Pilot studies demonstrate that representations (e.g., diagrams) can play a key role in students' use of analogy. A new model of analogy, Analogical Scaffolding, is developed to explain these initial empirical results. This model will be described in detail, and then applied to describe and predict the outcomes of further experiments. Two large-scale (N>100) studies will demonstrate that: (1) students taught with analogies, according to the Analogical Scaffolding model, outperform students taught without analogies on pre-post assessments focused on electromagnetic waves; (2) the representational forms used to teach with analogy can play a significant role in student learning, with students in one treatment group outperforming students in other treatment groups by factors of two or three. It will be demonstrated that Analogical Scaffolding can be used to predict these results, as well as finer-grained results such as the types of distracters students choose in different treatment groups, and to describe and analyze student reasoning in interviews. Abstraction in physics is reconsidered using Analogical Scaffolding. An operational definition of abstraction is developed within the Analogical Scaffolding framework and employed to explain (a) why physicists consider some ideas more abstract than others in physics, and (b) how students' conceptions of these ideas can be modeled. This new approach to abstraction suggests novel approaches to curriculum design in physics using Analogical Scaffolding.
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
Izadi, F A; Bagirov, G
2009-01-01
With its origins stretching back several centuries, discrete calculus is now an increasingly central methodology for many problems related to discrete systems and algorithms. The topics covered here usually arise in many branches of science and technology, especially in discrete mathematics, numerical analysis, statistics and probability theory as well as in electrical engineering, but our viewpoint here is that these topics belong to a much more general realm of mathematics; namely calculus and differential equations because of the remarkable analogy of the subject to this branch of mathemati
ESD analog circuits and design
Voldman, Steven H
2014-01-01
A comprehensive and in-depth review of analog circuit layout, schematic architecture, device, power network and ESD design This book will provide a balanced overview of analog circuit design layout, analog circuit schematic development, architecture of chips, and ESD design. It will start at an introductory level and will bring the reader right up to the state-of-the-art. Two critical design aspects for analog and power integrated circuits are combined. The first design aspect covers analog circuit design techniques to achieve the desired circuit performance. The second and main aspect pres
Albert Einstein, Analogizer Extraordinaire
CERN. Geneva
2007-01-01
Where does deep insight in physics come from? It is tempting to think that it comes from the purest and most precise of reasoning, following ironclad laws of thought that compel the clear mind completely rigidly. And yet the truth is quite otherwise. One finds, when one looks closely at any major discovery, that the greatest of physicists are, in some sense, the most crazily daring and irrational of all physicists. Albert Einstein exemplifies this thesis in spades. In this talk I will describe the key role, throughout Albert Einstein's fabulously creative life, played by wild guesses made by analogy lacking any basis whatsoever in pure reasoning. In particular, in this year of 2007, the centenary of 1907, I will describe how over the course of two years (1905 through 1907) of pondering, Einstein slowly came, via analogy, to understand the full, radical consequences of the equation that he had first discovered and published in 1905, arguably the most famous equation of all time: E = mc2.
12 CFR 619.9170 - Fixed interest rate.
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Fixed interest rate. 619.9170 Section 619.9170 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM DEFINITIONS § 619.9170 Fixed interest rate. The rate of interest specified in the note or loan document which will prevail as the maximum...
Analog fault diagnosis by inverse problem technique
Ahmed, Rania F.
2011-12-01
A novel algorithm for detecting soft faults in linear analog circuits, based on the inverse problem concept, is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and so to detect and diagnose a single fault in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show that the detection efficiency was 100% and the maximum error in estimating the parameter values is 0.7%. This technique can be applied to any other linear circuit, and it can also be extended to non-linear circuits. © 2011 IEEE.
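As a hedged illustration of the inverse-problem idea (not the paper's Sallen-Key setup or its sensitivity analysis), one can estimate a single drifted component value by minimizing the squared error between a forward circuit model and measured responses. Here a one-parameter RC low-pass with invented component values stands in for the tested circuit:

```python
import math

# Forward model: magnitude response of an RC low-pass filter
# (a stand-in for the filter in the paper; values are invented).
def gain(f, R, C=100e-9):
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f * R * C) ** 2)

freqs = [100.0 * 2 ** j for j in range(8)]      # test frequencies, Hz
R_true = 12.7e3                                 # the unknown drifted value
measured = [gain(f, R_true) for f in freqs]     # simulated measurements

# Inverse problem: find R minimizing the squared response error
def cost(R):
    return sum((gain(f, R) - m) ** 2 for f, m in zip(freqs, measured))

# Simple golden-section search over a plausible parameter range;
# the cost is unimodal in R because the gain is monotone in R.
lo, hi = 1e3, 100e3
phi = (math.sqrt(5) - 1) / 2
for _ in range(60):
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if cost(a) < cost(b):
        hi = b
    else:
        lo = a
R_est = 0.5 * (lo + hi)
print(R_est)    # recovers the drifted parameter value
```

A multi-parameter version would replace the scalar search with a general-purpose optimizer, which is the role optimization techniques play in the abstract.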
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
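The closing estimate can be checked numerically. The sketch below plugs commonly quoted rough values (assumptions for illustration, not the paper's derivation) into v_h ~ T_BBN^2 / (M_pl y_e^5):

```python
import math

# Back-of-envelope check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All numbers are rough, commonly quoted values in GeV units.
m_e   = 0.511e-3      # electron mass
v_obs = 246.0         # observed Higgs expectation value
T_BBN = 1e-3          # ~1 MeV, onset of Big Bang nucleosynthesis
M_pl  = 1.22e19       # Planck mass

y_e = math.sqrt(2.0) * m_e / v_obs          # electron Yukawa coupling
v_h = T_BBN ** 2 / (M_pl * y_e ** 5)
print(v_h)   # a few hundred GeV, consistent with the quoted O(300 GeV)
```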
Quantum entanglement and fixed-point bifurcations
Hines, Andrew P.; McKenzie, Ross H.; Milburn, G.J.
2005-01-01
How does the classical phase-space structure for a composite system relate to the entanglement characteristics of the corresponding quantum system? We demonstrate how the entanglement in nonlinear bipartite systems can be associated with a fixed-point bifurcation in the classical dynamics. Using the example of coupled giant spins we show that when a fixed point undergoes a supercritical pitchfork bifurcation, the corresponding quantum state--the ground state--achieves its maximum amount of entanglement near the critical point. We conjecture that this will be a generic feature of systems whose classical limit exhibits such a bifurcation
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost is achievable for Remote Area Power Supply systems of relatively small rating. The advantages at larger temperature variations and larger power ratings are much higher. Other advantages include optimal sizing and system monitoring and control.
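The hill-climbing idea can be sketched as a textbook perturb-and-observe loop against a toy power-voltage curve. The curve shape, step size, and iteration count below are invented for illustration and are not the paper's optimized algorithm:

```python
# Perturb-and-observe (hill-climbing) maximum power point tracking
# against a toy photovoltaic P-V curve with a single maximum near 16 V.

def pv_power(v):
    """Toy P-V curve: current is nearly constant, then collapses near open circuit."""
    return max(0.0, v * (3.0 - 3.0 * (v / 21.0) ** 8))   # watts

def mppt(v=10.0, step=0.2, iters=200):
    """Keep perturbing the operating voltage in the direction that raised power."""
    p_prev = pv_power(v)
    direction = 1
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_op = mppt()
print(v_op, pv_power(v_op))     # settles, oscillating slightly, near the MPP
```

A hardware regulator would perturb a converter duty cycle and maximize battery charging current instead of evaluating a model, but the control logic is the same.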
Biomedical sensor design using analog compressed sensing
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
The main drawback of current healthcare systems is their location-specific nature, due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps, reducing storage usage, transmission times, and power consumption, in order to expand current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate a compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathic surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
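The compress-then-reconstruct idea behind CS can be illustrated with a toy digital sketch. Everything below is invented for illustration (the paper's front end is analog and its reconstruction algorithm is more elaborate): M measurements of an N-sample, 1-sparse "bio-signal", recovered by a single matching-pursuit step. A tiny deterministic +/-1 dictionary stands in for the random sensing matrix used in practice.

```python
# Toy numerical sketch of compressed sensing: y = Phi @ x takes M compressed
# samples instead of N signal samples; a 1-sparse x is then recovered by one
# matching-pursuit step. Dictionary, sizes, and spike location are assumptions.

N, M = 16, 5                       # signal length, number of compressed samples

# Column n is the +/-1 pattern of the M-bit representation of n + 1, so all
# columns are distinct and none is the negation of another in this range.
Phi = [[2 * ((n + 1 >> m) & 1) - 1 for n in range(N)] for m in range(M)]

x = [0.0] * N                      # 1-sparse signal: one spike at index 5
x[5] = 3.0

# Compression step: M measurements y = Phi @ x
y = [sum(Phi[m][n] * x[n] for n in range(N)) for m in range(M)]

# Reconstruction: pick the dictionary column most correlated with y, then
# project onto it to estimate the coefficient (one matching-pursuit step,
# which is exact for a 1-sparse signal with this dictionary).
corr = lambda n: sum(y[m] * Phi[m][n] for m in range(M))
best = max(range(N), key=lambda n: abs(corr(n)))
coef = corr(best) / sum(Phi[m][best] ** 2 for m in range(M))
```

The recovered index and amplitude match the original spike, showing how far fewer than N samples can suffice when the signal is sparse.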
Detecting analogical resemblance without retrieving the source analogy.
Kostic, Bogdan; Cleary, Anne M; Severin, Kaye; Miller, Samuel W
2010-06-01
We examined whether people can detect analogical resemblance to an earlier experimental episode without being able to recall the experimental source of the analogical resemblance. We used four-word analogies (e.g., robin-nest/beaver-dam), in a variation of the recognition-without-cued-recall method (Cleary, 2004). Participants studied word pairs (e.g., robin-nest) and were shown new word pairs at test, half of which analogically related to studied word pairs (e.g., beaver-dam) and half of which did not. For each test pair, participants first attempted to recall an analogically similar pair from the study list. Then, regardless of whether successful recall occurred, participants were prompted to rate the familiarity of the test pair, which was said to indicate the likelihood that a pair that was analogically similar to the test pair had been studied. Across three experiments, participants demonstrated an ability to detect analogical resemblance without recalling the source analogy. Findings are discussed in terms of their potential relevance to the study of analogical reasoning and insight, as well as to the study of familiarity and recognition memory.
Ochoa, Agustin
2016-01-01
This book describes a consistent and direct methodology for the analysis and design of analog circuits, with particular application to circuits containing feedback. The analysis and design of circuits containing feedback is generally presented either by following a series of examples where each circuit is simplified through the use of insight or experience (someone else's), or by a complete nodal-matrix analysis that generates lots of algebra. Neither approach makes it easy to gain insight into the design process. The author develops a systematic approach to circuit analysis, the Driving Point Impedance and Signal Flow Graphs (DPI/SFG) method, that does not require a priori insight into the circuit being considered and results in a factored analysis supporting the design function. This approach enables designers to account fully for loading and the bi-directional nature of elements both in the feedback path and in the amplifier itself, properties many times assumed negligible and ignored. Feedback circuits a...
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Beginning analog electronics through projects
Singmin, Andrew
2001-01-01
Analog electronics is the simplest way to start a fun, informative learning program. Beginning Analog Electronics Through Projects, Second Edition was written with the needs of beginning hobbyists and students in mind. This revision of Andrew Singmin's popular Beginning Electronics Through Projects provides practical exercises, building techniques, and ideas for useful electronics projects. Additionally, it features new material on analog and digital electronics, and new projects for troubleshooting test equipment. Published in the tradition of Beginning Electronics Through Projects an...
J.W. de Bakker (Jaco)
1975-01-01
Parameter mechanisms for recursive procedures are investigated. Contrary to the view of Manna et al., it is argued that both call-by-value and call-by-name mechanisms yield the least fixed points of the functionals determined by the bodies of the procedures concerned. These functionals
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 ^14C, ^10Be, and ^3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Sanjo Zlobec
2017-04-01
A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful ones in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of (what became known as) "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "A Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient for an arbitrary point x⋆ to be a fixed point, i.e., to satisfy f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of calculus. However, characterizing fixed points x⋆, i.e., providing both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of a zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several feedbacks on the preliminary report and on parts of the project
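The Intermediate Value Theorem argument mentioned in the abstract can be sketched numerically: if a continuous f maps [a, b] into itself, then g(x) = f(x) - x changes sign on [a, b], so bisection locates a fixed point. The choice f = cos on [0, 1] is only an illustrative example, not taken from the paper.

```python
import math

# Existence of a fixed point via the Intermediate Value Theorem: when f maps
# [a, b] into itself, g(a) = f(a) - a >= 0 and g(b) = f(b) - b <= 0, so g
# has a zero in [a, b], which bisection locates to any desired tolerance.

def fixed_point_bisect(f, a, b, tol=1e-12):
    g = lambda x: f(x) - x
    assert g(a) >= 0 >= g(b)          # sign change guaranteed by f([a,b]) in [a,b]
    while b - a > tol:
        mid = (a + b) / 2
        if g(mid) >= 0:
            a = mid
        else:
            b = mid
    return (a + b) / 2

x_star = fixed_point_bisect(math.cos, 0.0, 1.0)
```

The computed value is the unique fixed point of cosine (the so-called Dottie number, approximately 0.739085); note that bisection proves existence constructively but says nothing about the necessary-and-sufficient characterization the paper pursues.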
Children's Development of Analogical Reasoning: Insights from Scene Analogy Problems
Richland, Lindsey E.; Morrison, Robert G.; Holyoak, Keith J.
2006-01-01
We explored how relational complexity and featural distraction, as varied in scene analogy problems, affect children's analogical reasoning performance. Results with 3- and 4-year-olds, 6- and 7-year-olds, 9- to 11-year-olds, and 13- and 14-year-olds indicate that when children can identify the critical structural relations in a scene analogy…
The maximum number of minimal codewords in long codes
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
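A minimal numerical sketch of the Mean Energy Model mentioned in the abstract: over states with energies E_i, the maximum-entropy distribution subject to a mean-energy constraint sum(p_i E_i) = U takes the Gibbs form p_i proportional to exp(-lam * E_i). The four-level system and target mean below are toy assumptions; lam is found by bisection, using that the mean energy is decreasing in lam.

```python
import math

# Mean Energy Model: maximize entropy subject to sum(p_i * E_i) = U.
# The solution is the Gibbs distribution p_i ~ exp(-lam * E_i); we solve
# for the Lagrange multiplier lam by bisection on the (monotone) mean.

E = [0.0, 1.0, 2.0, 3.0]                  # toy energy levels (assumed)
U = 1.0                                   # target mean energy (assumed)

def gibbs(lam):
    w = [math.exp(-lam * e) for e in E]
    z = sum(w)                            # partition function
    return [v / z for v in w]

def mean_energy(lam):
    return sum(p * e for p, e in zip(gibbs(lam), E))

lo, hi = -50.0, 50.0                      # mean(lo) ~ max(E), mean(hi) ~ min(E)
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > U:              # mean too high: increase lam
        lo = mid
    else:
        hi = mid

p = gibbs((lo + hi) / 2)
entropy = -sum(q * math.log(q) for q in p)
```

Since U differs from the uniform mean of 1.5, the constrained maximum entropy is strictly below log 4, the unconstrained maximum for four states.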
Probable maximum flood control
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Introduction to maximum entropy
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
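The robustness of a correntropy objective can be illustrated with a toy location-estimation sketch. This is not the paper's classifier; the data, kernel bandwidth, and fixed-point update below are illustrative assumptions showing why maximizing correntropy downweights outliers where a squared loss does not.

```python
import math

# Toy illustration of the Maximum Correntropy Criterion: estimate a location
# parameter t by maximizing sum_i exp(-(x_i - t)^2 / (2 s^2)). The standard
# half-quadratic fixed-point update is a weighted mean whose Gaussian weights
# decay for outlying points, so a single gross outlier is effectively ignored.

data = [0.9, 1.1, 1.0, 0.95, 1.05, 10.0]   # five inliers near 1.0, one outlier
s = 0.5                                     # kernel bandwidth (assumed)

t = sorted(data)[len(data) // 2]            # median as a sensible start
for _ in range(50):
    w = [math.exp(-(x - t) ** 2 / (2 * s * s)) for x in data]
    t = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)

plain_mean = sum(data) / len(data)          # dragged toward the outlier
```

The correntropy estimate settles near 1.0, while the ordinary mean is pulled to about 2.5 by the single outlier, which mirrors the paper's motivation for replacing "transitional" losses with MCC under label noise.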
Comparing fixed effects and covariance structure estimators for panel data
Ejrnæs, Mette; Holm, Anders
2006-01-01
In this article, the authors compare the traditional econometric fixed effect estimator with the maximum likelihood estimator implied by covariance structure models for panel data. Their finding is that the maximum likelihood estimator is remarkably robust to certain types of misspecifications...
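The traditional fixed-effect ("within") estimator that the article takes as its baseline can be sketched on synthetic data: demeaning y and x within each panel unit removes the unit effect alpha_i, after which plain OLS recovers the slope. The panel values below are invented for illustration.

```python
# Within estimator for the panel model y_it = alpha_i + beta * x_it + e_it:
# demean within each unit i to eliminate alpha_i, then run OLS on the
# demeaned data. With noiseless synthetic data the slope is recovered exactly.

beta = 2.0
panels = {                                  # unit -> (alpha_i, x observations)
    "a": (5.0, [1.0, 2.0, 3.0]),
    "b": (-3.0, [0.0, 1.0, 4.0]),
}

num = den = 0.0
for alpha, xs in panels.values():
    ys = [alpha + beta * x for x in xs]     # noiseless outcomes per unit
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    num += sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den += sum((x - xbar) ** 2 for x in xs)

beta_hat = num / den
```

Note how the very different intercepts (5.0 and -3.0) drop out entirely; a covariance-structure (maximum likelihood) approach would instead model the full error covariance, which is the comparison the article investigates.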
Optical analog transmission device
Ikawa, Shinji.
1994-01-01
The present invention concerns a device comprising electro-optical conversion elements, optoelectric conversion elements, and an optical transmission channel, which does not suffer deleterious effects on the conversion and transmission efficiency due to temperature or aging. That is, a sine-wave superposing means superposes, on a detector signal to be transmitted, a sine-wave signal having a predetermined amplitude and a frequency lower than that of the detector signal. An electro-optical conversion means converts the electric output signal of the sine-wave superposing means into an optical signal and outputs it to the optical transmission channel. An optoelectric conversion means converts the transmitted optical signal back into an electric signal. A discriminating means separates the electric signal into the detector signal and the sine-wave signal. A calculating means calculates the optical transmission efficiency of the transmission channel based on the amplitude of the separated sine-wave signal. A processing means compensates the amplitude value of the detector signal discriminated by the discriminating means based on the optical transmission efficiency. As a result, an optical analog transmission device can be attained which conducts optical transmission at high accuracy without being adversely affected by variations in the optical transmission efficiency. (I.S.)
Project for a codable central unit for analog data acquisition
Bouras, F.; Da Costa Vieira, D.; Sohier, B.
1974-07-01
The instrumentation for a 256-channel codable central processor intended for operation in connection with a computer is presented. The computer indicates the addresses of the channels to be measured, orders the conversion, and acquires the results of the measurements. The acquisition and computer coupling unit is located in a standard CAMAC rack (6U, 19 in., 25 positions); an example configuration is given. The measurement speed depends on the converter speed and the dead time of the analog circuits; for an ADC 1103 converter the total dead time is 6.5 µs minimum. The analog circuits are intended for a ±10 V range; the accuracy is 1/2^n (n being the number of bits). The result is acquired in words of 12 bits maximum. The information transfer and the analog commutation (through integrated analog gates) are discussed.
Conjecturing via Reconceived Classical Analogy
Lee, Kyeong-Hwa; Sriraman, Bharath
2011-01-01
Analogical reasoning is believed to be an efficient means of problem solving and construction of knowledge during the search for and the analysis of new mathematical objects. However, there is growing concern that despite everyday usage, learners are unable to transfer analogical reasoning to learning situations. This study aims at facilitating…
Bonde, Lars Ole
2014-01-01
Contains the subchapters: 2.5.1 Music as analogy; 2.5.2 Music as metaphor; 2.5.3 The psychological functions of music - a taxonomy, and metaphorical listening to four Baroque movements.
Drawing Analogies in Environmental Education
Affifi, Ramsey
2014-01-01
Reconsidering the origin, process, and outcomes of analogy-making suggests practices for environmental educators who strive to disengage humans from the isolating illusions of dichotomizing frameworks. We can view analogies as outcomes of developmental processes within which human subjectivity is but an element, threading our sense of self back…
Jensen, Thessa; Westberg, Lysa
..."and tween website", 'Teen' managed to outrage fans. It took days and hundreds of comments, tweets, and mails to the publishers before the article was taken down. Vilification in scholarly works and the media may have significantly lessened in recent years. Still, misunderstandings, applied exoticism, and imbalances of power between scholars and journalists on one side, and fans on the other, are not rare occurrences. An analysis of a number of recent news articles, scholarly works, and websites shows how the attempt of fixing fandom still prevails. Like Said's view on how the Orient is treated, fandom is similarly exoticised, incorporated, and fixed. Scholars explain how to become better fans, attempting authority over fandom by applying rules to a culture which already has its own. This, the notion of the 'better fan', devalues the existing discourses, rules, and traditions within fandom. The expert...
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Pelgrom, Marcel J M
2010-01-01
The design of an analog-to-digital converter or digital-to-analog converter is one of the most fascinating tasks in micro-electronics. In a converter the analog world with all its intricacies meets the realm of the formal digital abstraction. Both disciplines must be understood for an optimum conversion solution. In a converter also system challenges meet technology opportunities. Modern systems rely on analog-to-digital converters as an essential part of the complex chain to access the physical world. And processors need the ultimate performance of digital-to-analog converters to present the results of their complex algorithms. The same progress in CMOS technology that enables these VLSI digital systems creates new challenges for analog-to-digital converters: lower signal swings, less power and variability issues. Last but not least, the analog-to-digital converter must follow the cost reduction trend. These changing boundary conditions require micro-electronics engineers to consider their design choices for...
Analog fourier transform channelizer and OFDM receiver
2007-01-01
An OFDM receiver having an analog-multiplier-based I-Q channelizing filter samples and holds consecutive analog I-Q samples of an I-Q baseband, the I-Q baseband containing OFDM sub-channels. A lattice of analog I-Q multipliers and analog I-Q summers concurrently receives the held analog I-Q samples and performs analog I-Q multiplications and analog I-Q additions to concurrently generate a plurality of analog I-Q output signals, representing an N-point discrete Fourier transform of the held analog ...
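The lattice of multipliers and summers described above computes an N-point discrete Fourier transform directly, one multiply-accumulate per (sample, bin) pair. The following is a digital sketch of that same arithmetic; the 4-point example and pure-tone input are illustrative, not taken from the patent.

```python
import cmath

# Direct N-point DFT as a grid of complex (I-Q) multiplications and
# additions: each output bin k accumulates samples[t] * e^(-2*pi*i*k*t/N),
# mirroring the analog multiplier/summer lattice in the abstract.

def dft(samples):
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure complex tone at sub-channel 1 of a 4-point transform lands in bin 1.
x = [cmath.exp(2j * cmath.pi * t / 4) for t in range(4)]
bins = dft(x)
```

For OFDM this orthogonality is the point: each sub-carrier concentrates in exactly one output bin, so the lattice simultaneously channelizes all sub-channels.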
Molecular modeling of fentanyl analogs
LJILJANA DOSEN-MICOVIC
2004-11-01
Fentanyl is a highly potent and clinically widely used narcotic analgesic. A large number of its analogs have been synthesized, some of which (sufentanil and alfentanil) are also in clinical use. Theoretical studies in recent years have afforded a better understanding of the structure-activity relationships of this class of opiates and allowed insight into the molecular mechanism of the interactions of fentanyl analogs with their receptors. An overview of the current computational techniques for modeling fentanyl analogs, their receptors, and ligand-receptor interactions is presented in this paper.
Pazos, Gonzalo; Rivadulla, Marcos L; Pérez-García, Xenxo; Gandara, Zoila; Pérez, Manuel
2014-01-01
The Gemini analogs are the latest significant contribution to the family of vitamin D derivatives in medicine, for the treatment of cancer. The first Gemini analog was characterized by two symmetric side chains at C-20. Following numerous modifications, the most active analog bears a C-23 triple bond and C-26,27-hexafluoro substituents on one side chain, and a terminal trideuteromethylhydroxy group on the other side chain. This progression was possible due to improvements in the synthetic methods for the preparation of these derivatives, which allowed for increasing molecular complexity and complete diastereoselective control at C-20 and the substituted side chains.
Kain, V; Cettour-Cave, S; Cornelis, K; Fraser, M A; Gatignon, L; Goddard, B; Velotti, F
2017-01-01
The CERN SPS (Super Proton Synchrotron) serves as LHC injector and provides beam for the North Area fixed-target experiments. At low energy, the vertical acceptance becomes critical with high intensity, large emittance fixed-target beams. Optimizing the vertically available aperture is a key ingredient to optimize transmission and reduce activation around the ring. During the 2016 run a tool was developed to provide an automated local aperture scan around the entire ring. The flux of particles slow extracted with the 1/3 integer resonance from the Super Proton Synchrotron at CERN should ideally be constant over the length of the extraction plateau, for optimum use of the beam by the fixed-target experiments in the North Area. The extracted intensity is controlled in feed-forward correction of the horizontal tune via the main SPS quadrupoles. The mains power supply noise at 50 Hz and harmonics is also corrected in feed-forward by small amplitude tune modulation at the respective frequencies with a dedicated additional quad...
Analog filters in nanometer CMOS
Uhrmann, Heimo; Zimmermann, Horst
2014-01-01
Starting from the basics of analog filters and the poor transistor characteristics in nanometer CMOS, 10 high-performance analog filters developed by the authors in 120 nm and 65 nm CMOS are described extensively. Among them are gm-C filters, current-mode filters, and active filters for system-on-chip realization for Bluetooth, WCDMA, UWB, DVB-H, and LTE applications. For the active filters several operational amplifier designs are described. The book furthermore contains a review of the newest state of research on low-voltage, low-power analog filters. To cover the topic of the book comprehensively, linearization issues and measurement methods for the characterization of advanced analog filters are introduced in addition. Numerous elaborate illustrations promote an easy comprehension. This book will be of value to engineers and researchers in industry as well as scientists and Ph.D. students at universities. The book is also recommended to graduate students specializing in nanoelectronics, microelectronics ...
Analog elements for transuranic chemistries
Weimer, W.C.
1982-01-01
The analytical technique for measuring trace concentrations of the analog rare earth elements has been refined for optimal detection. The technique has been used to determine the rare earth concentrations in a series of geological and biological materials, including samples harvested from controlled lysimeter investigations. These studies have demonstrated that any of the trivalent rare earth elements may be used as analog elements for the trivalent transuranics, americium and curium
CMOS Analog IC Design: Fundamentals
Bruun, Erik
2018-01-01
This book is intended for use as the main textbook for an introductory course in CMOS analog integrated circuit design. It is aimed at electronics engineering students who have followed basic courses in mathematics, physics, circuit theory, electronics and signal processing. It takes the students directly from a basic level to a level where they can start working on simple analog IC design projects or continue their studies using more advanced textbooks in the field. A distinct feature of thi...
Analogical proportions: another logical view
Prade, Henri; Richard, Gilles
This paper investigates the logical formalization of a restricted form of analogical reasoning based on analogical proportions, i.e. statements of the form a is to b as c is to d. Starting from a naive set-theoretic interpretation, we highlight the existence of two noticeable companion proportions: one states that a is to b the converse of what c is to d (reverse analogy), while the other, called paralogical proportion, expresses that what a and b have in common, c and d have it also. We identify the characteristic postulates of the three types of proportions and examine their consequences from an abstract viewpoint. We further study the properties of the set-theoretic interpretation and of the Boolean logic interpretation, and we shed further light on the role of permutations in the modeling of the three types of proportions. Finally, we address the use of these proportions as a basis for inference in a propositional setting, and relate it to more general schemes of analogical reasoning. The differences between analogy, reverse analogy, and paralogy are further emphasized in a three-valued setting, which is also briefly presented.
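The Boolean interpretation of analogical proportion mentioned above can be checked by enumeration. The encoding below is one common formalization (assumed here, not quoted from the paper): "a is to b as c is to d" holds when the difference a-and-not-b coincides with c-and-not-d, and likewise in the other direction; exactly six of the sixteen Boolean 4-tuples satisfy it.

```python
from itertools import product

# Boolean analogical proportion a : b :: c : d, encoded as: the way a
# differs from b must be exactly the way c differs from d, in both
# directions. Enumerating all 4-tuples over {0, 1} shows which patterns
# (e.g. 0101 and 1010, but not 0110) count as valid proportions.

def analogy(a, b, c, d):
    return (a and not b) == (c and not d) and (not a and b) == (not c and d)

valid = [t for t in product((0, 1), repeat=4) if analogy(*t)]
```

The six valid patterns are 0000, 1111, 0011, 1100, 0101, and 1010; reverse analogy and paralogy are obtained by permuting arguments in this same test, which is the role of permutations the paper examines.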
Durant, B.W.; Schonberner, M.J.
1999-01-01
A series of brief notes was included with this presentation, highlighting certain aspects of contract management. Several petroleum companies have realized the benefits of using contract personnel to control fixed G&A costs, manage the impacts on their organization, contain costs, manage termination costs, and fill gaps in lean personnel rosters. An independent contractor was described as someone who is self-employed, often with a variety of work experiences. The tax benefits and flexibility of contractor personnel were also described, and some liability aspects of hiring an independent contractor were reviewed. The courts have developed the following 4 tests to help determine whether an individual is an employee or an independent contractor: (1) the control test, (2) the business integration test, (3) the specific result test, and (4) the economic reality test.
Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio
2015-12-01
Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor: achieving significant geographic coverage requires substantial capital. Hence many companies are forming partnerships and joint ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bilateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.
Producing and Recognizing Analogical Relations
Lipkens, Regina; Hayes, Steven C
2009-01-01
Analogical reasoning is an important component of intelligent behavior, and a key test of any approach to human language and cognition. Only a limited amount of empirical work has been conducted from a behavior analytic point of view, most of that within Relational Frame Theory (RFT), which views analogy as a matter of deriving relations among relations. The present series of four studies expands previous work by exploring the applicability of this model of analogy to topography-based rather than merely selection-based responses and by extending the work into additional relations, including nonsymmetrical ones. In each of the four studies participants pretrained in contextual control over nonarbitrary stimulus relations of sameness and opposition, or of sameness, smaller than, and larger than, learned arbitrary stimulus relations in the presence of these relational cues and derived analogies involving directly trained relations and derived relations of mutual and combinatorial entailment, measured using a variety of productive and selection-based measures. In Experiment 1 participants successfully recognized analogies among stimulus networks containing same and opposite relations; in Experiment 2 analogy was successfully used to extend derived relations to pairs of novel stimuli; in Experiment 3 the procedure used in Experiment 1 was extended to nonsymmetrical comparative relations; in Experiment 4 the procedure used in Experiment 2 was extended to nonsymmetrical comparative relations. Although not every participant showed the effects predicted, overall the procedures occasioned relational responses consistent with an RFT account that have not yet been demonstrated in a behavior-analytic laboratory setting, including productive responding on the basis of analogies. PMID:19230515
Fast multichannel analog storage system
Freytag, D.R.
1982-11-01
A Multichannel Analog Storage System based on a commercial 32-channel parallel in/serial out (PISO) analog shift register is described. The basic unit is a single width CAMAC module containing 512 analog cells and the associated logic for data storage and subsequent readout. At sampling rates of up to 30 MHz the signals are strobed directly into the PISO. At higher rates signals are strobed into a fast presampling stage and subsequently transferred in block form into an array of PISO's. Sampling rates of 300 MHz have been achieved with the present device and 1000 MHz are possible with improved signal drivers. The system is well suited for simultaneous handling of many signal channels with moderate numbers of samples in each channel. RMS noise over full scale signal has been measured as 1:3000 (approx. = 11 bit). However, nonlinearities in the response and differences in sensitivity of the analog cells require an elaborate calibration system in order to realize 11 bit accuracy for the analog information
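The elaborate calibration system the authors mention is not specified in the abstract; one plausible scheme, a per-cell linear gain/offset correction fitted from known reference levels, can be sketched as follows (all numbers are invented for illustration):

```python
def fit_cell_calibration(ref_inputs, measured):
    """Least-squares fit of measured = gain * input + offset for one analog cell."""
    n = len(ref_inputs)
    mx = sum(ref_inputs) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in ref_inputs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref_inputs, measured))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

def correct(sample, gain, offset):
    """Invert the fitted cell response to recover the true input."""
    return (sample - offset) / gain

# A cell with 2% gain error and a small offset, calibrated on three references:
gain, offset = fit_cell_calibration([0.0, 0.5, 1.0], [0.01, 0.52, 1.03])
assert abs(correct(0.52, gain, offset) - 0.5) < 1e-9
```

Repeating the fit for each of the 512 cells would remove the per-cell sensitivity differences that otherwise limit accuracy below 11 bits.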
Test signal generation for analog circuits
B. Burdiek
2003-01-01
In this paper a new test signal generation approach for general analog circuits, based on the calculus of variations and modern control theory, is presented. The computed transient test signals, also called test stimuli, are optimal with respect to the detection of a given fault set by means of a predefined merit functional representing a fault detection criterion. The problem of finding optimal test stimuli that detect all faults from the fault set is formulated as an optimal control problem. The solution of this optimal control problem, representing the test stimuli, is computed using an optimization procedure based on the necessary conditions for optimality, such as the maximum principle of Pontryagin and the adjoint circuit equations.
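The paper's variational machinery is beyond a short sketch, but the underlying idea, searching for a stimulus that maximizes a fault-detection merit functional, can be caricatured with a grid search over single-tone stimuli. The RC fault model and all values below are our own invention, not the paper's:

```python
import math

# Fault model: an RC low-pass whose time constant doubles. Values invented.
tau_ok, tau_bad = 1e-3, 2e-3

def gain(tau, f):
    """First-order low-pass magnitude response at frequency f (Hz)."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * f * tau) ** 2)

def detectability(f):
    """Merit functional: output amplitude difference between good and faulty circuit."""
    return abs(gain(tau_ok, f) - gain(tau_bad, f))

freqs = [10 * i for i in range(1, 200)]   # 10 Hz .. ~2 kHz grid
best = max(freqs, key=detectability)
# The best test tone sits between the two corner frequencies (~80 and ~160 Hz),
# where the nominal and faulty responses diverge most.
assert 50 <= best <= 300
```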
Analog electronics for radiation detection
2016-01-01
Analog Electronics for Radiation Detection showcases the latest advances in readout electronics for particle, or radiation, detectors. Featuring chapters written by international experts in their respective fields, this authoritative text: Defines the main design parameters of front-end circuitry developed in microelectronics technologies Explains the basis for the use of complementary metal oxide semiconductor (CMOS) image sensors for the detection of charged particles and other non-consumer applications Delivers an in-depth review of analog-to-digital converters (ADCs), evaluating the pros and cons of ADCs integrated at the pixel, column, and per-chip levels Describes incremental sigma delta ADCs, time-to-digital converter (TDC) architectures, and digital pulse-processing techniques complementary to analog processing Examines the fundamental parameters and front-end types associated with silicon photomultipliers used for single visible-light photon detection Discusses pixel sensors ...
Natural analogs for Yucca Mountain
Murphy, W.M.
1995-01-01
High-level radioactive waste in the US, spent fuels from commercial reactors and nuclear materials generated by defense activities, will remain potentially hazardous for thousands of years. Demonstrable long-term stability of certain geologic and geochemical systems motivates and sustains the concept that high-level waste can be safely isolated in geologic repositories for requisite periods of time. Each geologic repository is unique in its properties and performance with regard to isolation of nuclear wastes. Studies of processes analogous to waste-form alteration and radioelement transport in environments analogous to Yucca Mountain are being conducted at two sites, described in this article to illustrate uses of natural analog data: the Nopal I uranium deposit in the Sierra Pena Blanca, Mexico, and the Akrotiri archaeological site on the island of Santorini, Greece
Toward Wireless Health Monitoring via an Analog Signal Compression-Based Biosensing Platform.
Zhao, Xueyuan; Sadhu, Vidyasagar; Le, Tuan; Pompili, Dario; Javanmard, Mehdi
2018-06-01
Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this paper. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by analog joint source-channel coding (AJSCC). Two circuit designs are discussed: the stacked voltage-controlled voltage source (VCVS) design with a fixed number of levels, and an improved design that supports a flexible number of AJSCC levels. Experimental results are presented on the wireless biosensor prototype, composed of printed circuit board realizations of the stacked-VCVS design. Furthermore, circuit simulation and wireless link simulation results are presented on the improved design. Results indicate that the proposed wireless biosensor is well suited for sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power and low-cost wireless continuous health monitoring applications.
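The abstract does not spell out the AJSCC mapping itself. A common textbook form of it is a rectangular "stacked" mapping in which one signal is quantized into levels while the other stays continuous within a level; the level count and snake-scan convention below are assumptions for illustration:

```python
L = 8  # number of AJSCC levels (the improved design makes this flexible)

def ajscc_encode(x, y):
    """Map two unit-range signals onto one analog value by a stacked scan:
    y selects one of L quantized levels, x stays continuous within the level.
    The scan direction alternates per level so the mapping stays continuous."""
    i = min(int(y * L), L - 1)           # quantized level index from y
    along = x if i % 2 == 0 else 1 - x   # snake scan keeps adjacent levels contiguous
    return (i + along) / L               # single analog value in [0, 1)

def ajscc_decode(s):
    i = min(int(s * L), L - 1)
    along = s * L - i
    x = along if i % 2 == 0 else 1 - along
    y = (i + 0.5) / L                    # y is recovered only to level resolution
    return x, y

x, y = ajscc_decode(ajscc_encode(0.25, 0.70))
assert abs(x - 0.25) < 1e-9              # x survives exactly (noiseless channel)
assert abs(y - 0.70) <= 0.5 / L + 1e-9   # y within half a level
```

One signal is thus carried at full analog resolution while the other is traded down to the level granularity, which is the compression the biosensor exploits.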
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
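As a deliberately simplified illustration of maximizing LOD scores over genetic parameter values (here only a recombination fraction under a fully informative model, far simpler than the paper's penetrance/phenocopy models):

```python
import math

def lod(theta, recombinants, meioses):
    """LOD score for recombination fraction theta given r recombinants in n
    fully informative meioses: log10 of L(theta) / L(0.5)."""
    r, n = recombinants, meioses
    like = lambda t: (t ** r) * ((1 - t) ** (n - r))
    return math.log10(like(theta) / like(0.5))

def max_lod(recombinants, meioses, grid):
    """Maximize the LOD over a grid of parameter values, as is done when the
    true transmission model is unknown."""
    return max(lod(t, recombinants, meioses) for t in grid)

coarse = [0.05, 0.2, 0.35]                 # a fixed set of parameter values
fine = [i / 1000 for i in range(1, 500)]   # ~the entire range (0, 0.5)
# Maximizing over the whole range can only raise the score, which is why it
# needs a higher critical value than a fixed-set maximum.
assert max_lod(2, 20, fine) >= max_lod(2, 20, coarse)
```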
Synthetic Analogs of Phospholipid Metabolites as Antimalarials.
1979-07-01
phosphatidic acid analogs containing ether and phosphonate groups; completely non-hydrolyzable lecithin analogs containing phosphinate and ether groups... This substance is a completely non-hydrolyzable analog of lecithin containing ether and phosphonate moieties instead of the normally labile carboxylic and... and also anti-phospholipase C (clostridial enzyme) activity.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy model, which is computed sequentially. ...
Analogy between gambling and measurement-based work extraction
Vinkler, Dror A.; Permuter, Haim H.; Merhav, Neri
2016-04-01
In information theory, one area of interest is gambling, where mutual information characterizes the maximal gain in wealth growth rate due to knowledge of side information; the betting strategy that achieves this maximum is named the Kelly strategy. In the field of physics, it was recently shown that mutual information can characterize the maximal amount of work that can be extracted from a single heat bath using measurement-based control protocols, i.e. using ‘information engines’. However, to the best of our knowledge, no relation between gambling and information engines has been presented before. In this paper, we briefly review the two concepts and then demonstrate an analogy between gambling, where bits are converted into wealth, and information engines, where bits representing measurements are converted into energy. From this analogy follows an extension of gambling to the continuous-valued case, which is shown to be useful for investments in currency exchange rates or in the stock market using options. Moreover, the analogy enables us to use well-known methods and results from one field to solve problems in the other. We present three such cases: maximum work extraction when the probability distributions governing the system and measurements are unknown, work extraction when some energy is lost in each cycle, e.g. due to friction, and an analysis of systems with memory. In all three cases, the analogy enables us to use known results in order to obtain new ones.
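The Kelly result the authors build on can be checked numerically in a toy two-horse race with fair odds (the distribution below is invented): betting the conditional probabilities given side information raises the log-wealth growth rate by exactly the mutual information, which for perfect information equals H(X):

```python
import math

def growth_rate(p, b, odds):
    """Expected log2 wealth growth: sum_x p(x) * log2(b(x) * odds(x))."""
    return sum(px * math.log2(bx * ox) for px, bx, ox in zip(p, b, odds))

# Two-horse race with fair odds 1/p(x); Kelly says bet the distribution itself.
p = [0.7, 0.3]
odds = [1 / 0.7, 1 / 0.3]

no_info = growth_rate(p, p, odds)   # Kelly without side information
assert abs(no_info) < 1e-12         # fair odds: zero growth

# Perfect side information Y = winner: bet everything on the revealed horse.
with_info = sum(px * math.log2(1.0 * ox) for px, ox in zip(p, odds))
entropy = -sum(px * math.log2(px) for px in p)
assert abs(with_info - entropy) < 1e-12   # gain = I(X;Y) = H(X) here
```

In the measurement-based work-extraction analogy, the same mutual information bounds the extractable work per cycle.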
Transistor analogs of emergent iono-neuronal dynamics.
Rachmuth, Guy; Poon, Chi-Sang
2008-06-01
Neuromorphic analog metal-oxide-silicon (MOS) transistor circuits promise compact, low-power, and high-speed emulations of iono-neuronal dynamics orders-of-magnitude faster than digital simulation. However, their inherently limited input voltage dynamic range versus power consumption and silicon die area tradeoffs make them highly sensitive to transistor mismatch caused by fabrication inaccuracy, device noise, and other nonidealities. This limitation precludes robust analog very-large-scale-integration (aVLSI) implementation of emergent iono-neuronal computations beyond simple spiking with limited ion channel dynamics. Here we present versatile neuromorphic analog building-block circuits that afford near-maximum voltage dynamic range operating within the low-power MOS transistor weak-inversion regime, which is ideal for aVLSI implementation or implantable biomimetic device applications. The fabricated microchip allowed robust realization of dynamic iono-neuronal computations such as coincidence detection of presynaptic spikes or pre- and postsynaptic activities. As a critical performance benchmark, the high-speed and highly interactive iono-neuronal simulation capability on-chip enabled our prompt discovery of a minimal model of chaotic pacemaker bursting, an emergent iono-neuronal behavior of fundamental biological significance which has hitherto defied experimental testing or computational exploration via conventional digital or analog simulations. These compact and power-efficient transistor analogs of emergent iono-neuronal dynamics open new avenues for next-generation neuromorphic, neuroprosthetic, and brain-machine interface applications.
Multichannel analog temperature sensing system
Gribble, R.
1985-08-01
A multichannel system that protects the numerous and costly water-cooled magnet coils on the translation section of the FRX-C/T magnetic fusion experiment is described. The system comprises a thermistor for each coil, a constant current circuit for each thermistor, and a multichannel analog-to-digital converter interfaced to the computer
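The abstract omits how a thermistor voltage becomes a temperature. Under constant-current excitation the standard beta-model inversion looks like this (the component values are typical, not taken from the FRX-C/T system):

```python
import math

# Constant current I through the thermistor gives V = I * R(T); invert the
# beta model R(T) = R0 * exp(B * (1/T - 1/T0)) to recover T.
R0 = 10_000.0   # ohms at reference temperature T0 (assumed value)
T0 = 298.15     # 25 degC in kelvin
B = 3950.0      # beta constant in kelvin (assumed value)
I = 100e-6      # constant excitation current in amps (assumed value)

def temperature_from_voltage(v):
    r = v / I                                 # Ohm's law recovers the resistance
    inv_t = 1.0 / T0 + math.log(r / R0) / B   # invert the beta equation
    return 1.0 / inv_t - 273.15               # back to degC

# At 25 degC the thermistor reads R0, so V = I * R0 = 1.0 V:
assert abs(temperature_from_voltage(I * R0) - 25.0) < 1e-9
```

The digitized channel voltages can then be converted in software and compared against a coil over-temperature trip threshold.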
49205 Analoge og Digitale Filtre (Analog and Digital Filters)
Gaunholt, Hans
1997-01-01
These lecture notes treat the fundamental theory and the most commonly used design methods for passive, active, and digital filters, with special emphasis on microelectronic realizations. The lecture notes cover 75% of the material taught in the course 49205 Analog and Digital Filters...
Drawing Analogies to Deepen Learning
Fava, Michelle
2017-01-01
This article offers examples of how drawing can facilitate thinking skills that promote analogical reasoning to enable deeper learning. The instructional design applies cognitive principles, briefly described here. The workshops were developed iteratively, through feedback from student and teacher participants. Elements of the UK National…
Varieties of noise: analogical reasoning in synthetic biology.
Knuuttila, Tarja; Loettgers, Andrea
2014-12-01
The picture of synthetic biology as a kind of engineering science has largely created the public understanding of this novel field, covering both its promises and risks. In this paper, we will argue that the actual situation is more nuanced and complex. Synthetic biology is a highly interdisciplinary field of research located at the interface of physics, chemistry, biology, and computational science. All of these fields provide concepts, metaphors, mathematical tools, and models, which are typically utilized by synthetic biologists by drawing analogies between the different fields of inquiry. We will study analogical reasoning in synthetic biology through the emergence of the functional meaning of noise, which marks an important shift in how engineering concepts are employed in this field. The notion of noise serves also to highlight the differences between the two branches of synthetic biology: the basic science-oriented branch and the engineering-oriented branch, which differ from each other in the way they draw analogies to various other fields of study. Moreover, we show that fixing the mapping between a source domain and the target domain seems not to be the goal of analogical reasoning in actual scientific practice.
Bayesian analogy with relational transformations.
Lu, Hongjing; Chen, Dawn; Holyoak, Keith J
2012-07-01
How can humans acquire relational representations that enable analogical inference and other forms of high-level reasoning? Using comparative relations as a model domain, we explore the possibility that bottom-up learning mechanisms applied to objects coded as feature vectors can yield representations of relations sufficient to solve analogy problems. We introduce Bayesian analogy with relational transformations (BART) and apply the model to the task of learning first-order comparative relations (e.g., larger, smaller, fiercer, meeker) from a set of animal pairs. Inputs are coded by vectors of continuous-valued features, based either on human magnitude ratings, normed feature ratings (De Deyne et al., 2008), or outputs of the topics model (Griffiths, Steyvers, & Tenenbaum, 2007). Bootstrapping from empirical priors, the model is able to induce first-order relations represented as probabilistic weight distributions, even when given positive examples only. These learned representations allow classification of novel instantiations of the relations and yield a symbolic distance effect of the sort obtained with both humans and other primates. BART then transforms its learned weight distributions by importance-guided mapping, thereby placing distinct dimensions into correspondence. These transformed representations allow BART to reliably solve 4-term analogies (e.g., larger:smaller::fiercer:meeker), a type of reasoning that is arguably specific to humans. Our results provide a proof-of-concept that structured analogies can be solved with representations induced from unstructured feature vectors by mechanisms that operate in a largely bottom-up fashion. We discuss potential implications for algorithmic and neural models of relational thinking, as well as for the evolution of abstract thought. Copyright 2012 APA, all rights reserved.
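BART itself learns probabilistic weight distributions from feature vectors; as a far cruder caricature of solving 4-term analogies by matching relational displacement between vectors (all magnitudes below are invented, not from the paper's norms):

```python
# Toy illustration (not BART): solve a:b::c:? over feature vectors by
# choosing d whose change from c best mirrors the change from a to b.
animals = {
    "elephant": [9.0, 4.0],   # invented [size, fierceness] magnitudes
    "dog":      [3.0, 5.0],
    "tiger":    [6.0, 9.0],
    "rabbit":   [1.0, 1.0],
}

def solve_analogy(a, b, c, candidates):
    """Pick d minimizing the mismatch between (d - c) and (b - a)."""
    va, vb, vc = animals[a], animals[b], animals[c]
    target = [x - y for x, y in zip(vb, va)]   # relational displacement a -> b
    def mismatch(d):
        delta = [x - y for x, y in zip(animals[d], vc)]
        return sum((t - u) ** 2 for t, u in zip(target, delta))
    return min(candidates, key=mismatch)

# Roughly larger:smaller :: fiercer:meeker, i.e. elephant:dog :: tiger:?
assert solve_analogy("elephant", "dog", "tiger", ["rabbit", "elephant"]) == "rabbit"
```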
Crows spontaneously exhibit analogical reasoning.
Smirnova, Anna; Zorina, Zoya; Obozova, Tanya; Wasserman, Edward
2015-01-19
Analogical reasoning is vital to advanced cognition and behavioral adaptation. Many theorists deem analogical thinking to be uniquely human and to be foundational to categorization, creative problem solving, and scientific discovery. Comparative psychologists have long been interested in the species generality of analogical reasoning, but they initially found it difficult to obtain empirical support for such thinking in nonhuman animals (for pioneering efforts, see [2, 3]). Researchers have since mustered considerable evidence and argument that relational matching-to-sample (RMTS) effectively captures the essence of analogy, in which the relevant logical arguments are presented visually. In RMTS, choice of test pair BB would be correct if the sample pair were AA, whereas choice of test pair EF would be correct if the sample pair were CD. Critically, no items in the correct test pair physically match items in the sample pair, thus demanding that only relational sameness or differentness is available to support accurate choice responding. Initial evidence suggested that only humans and apes can successfully learn RMTS with pairs of sample and test items; however, monkeys have subsequently done so. Here, we report that crows too exhibit relational matching behavior. Even more importantly, crows spontaneously display relational responding without ever having been trained on RMTS; they had only been trained on identity matching-to-sample (IMTS). Such robust and uninstructed relational matching behavior represents the most convincing evidence yet of analogical reasoning in a nonprimate species, as apes alone have spontaneously exhibited RMTS behavior after only IMTS training. Copyright © 2015 Elsevier Ltd. All rights reserved.
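The RMTS task structure described above is easy to state in code (this is a schematic of the task logic only, not a model of crow cognition):

```python
# Relational matching-to-sample (RMTS): choose the test pair whose *relation*
# (same vs. different) matches the sample pair, even though no item in the
# correct pair physically matches any item in the sample.
def relation(pair):
    return "same" if pair[0] == pair[1] else "different"

def rmts_choice(sample, test_pairs):
    return next(p for p in test_pairs if relation(p) == relation(sample))

# Sample AA -> pick BB; sample CD -> pick EF, as in the abstract:
assert rmts_choice(("A", "A"), [("E", "F"), ("B", "B")]) == ("B", "B")
assert rmts_choice(("C", "D"), [("E", "F"), ("B", "B")]) == ("E", "F")
```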
A subjective supply–demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W; Sładkowski, Jan
2009-01-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
Fixed points of quantum gravity
Litim, D F
2003-01-01
Euclidean quantum gravity is studied with renormalisation group methods. Analytical results for a non-trivial ultraviolet fixed point are found for arbitrary dimensions and gauge fixing parameter in the Einstein-Hilbert truncation. Implications for quantum gravity in four dimensions are discussed.
Bi-directional Reflectance of Icy Surface Analogs: A Dual Approach
Quinones, Juan Manuel; Vides, Christina; Nelson, Robert M.; Boryta, Mark; Mannat, Ken s.
2018-01-01
Bi-directional reflectance measurements of analogs for planetary regolith have provided insight into the surface properties of planetary satellites and small bodies. Because Aluminum Oxide (Al2O3) and water ice share a similar hexagonal crystalline structure, the former has been used in laboratory experiments to simulate the regolith of both icy and dusty planetary bodies. By measuring various well-sorted size fractions of Al2O3, the reflectance phase curve and porosity of a planetary regolith can be determined. We have designed an experiment to test the laboratory measurements produced by Nelson et al. (2000). Additionally, we made reflectance measurements for other alkali-halide compounds that could be used for applications beyond astronomy and planetary science. In order to provide an independent check on the Nelson et al. data, we designed an instrument with a different configuration. While both instruments take bidirectional reflectance measurements, our instrument, the Rigid Photometric Goniometer (RPG), is fixed at a phase angle of 5° and detects the scattered light with a photomultiplier tube (PMT). The PMT current is then measured with an electrometer. Following the example of Nelson et al., we measured the bidirectional reflectance of well-sorted Al2O3 particulate size fractions and determined the size that provided optimal, or maximum, reflectance for each compound. Our conclusions bring confirmation and clarity to photometric sciences.
Anon.
1993-03-15
Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. Delicate neutrino oscillation effects could become visible for the first
Analog circuit design art, science and personalities
Williams, Jim
1991-01-01
This book is far more than just another tutorial or reference guide - it's a tour through the world of analog design, combining theory and applications with the philosophies behind the design process. Readers will learn how leading analog circuit designers approach problems and how they think about solutions to those problems. They'll also learn about the "analog way" - a broad, flexible method of thinking about analog design tasks. A comprehensive and useful guide to analog theory and applications. Covers visualizing the operation of analog circuits. Looks at how to rap
Pelgrom, Marcel
2017-01-01
This textbook is appropriate for use in graduate-level curricula in analog-to-digital conversion, as well as for practicing engineers in need of a state-of-the-art reference on data converters. It discusses various analog-to-digital conversion principles, including sampling, quantization, reference generation, nyquist architectures and sigma-delta modulation. This book presents an overview of the state of the art in this field and focuses on issues of optimizing accuracy and speed, while reducing the power level. This new, third edition emphasizes novel calibration concepts, the specific requirements of new systems, the consequences of 22-nm technology and the need for a more statistical approach to accuracy. Pedagogical enhancements to this edition include additional, new exercises, solved examples to introduce all key, new concepts and warnings, remarks and hints, from a practitioner’s perspective, wherever appropriate. Considerable background information and practical tips, from designing a PCB, to lay-o...
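One staple of such data-converter texts is the ideal-quantizer signal-to-quantization-noise rule, SQNR ≈ 6.02·N + 1.76 dB for a full-scale sine, which a short simulation can confirm (a generic sketch, not taken from the book):

```python
import math

def sqnr_db(bits, samples=100_000):
    """Numerically estimate the SQNR of an ideal uniform mid-tread quantizer
    driven by a full-scale sine on [-1, 1]."""
    step = 2.0 / (2 ** bits)
    sig_pow = noise_pow = 0.0
    for k in range(samples):
        x = math.sin(2 * math.pi * 0.123456 * k)   # non-coherent test tone
        q = round(x / step) * step                  # ideal quantization
        sig_pow += x * x
        noise_pow += (q - x) ** 2
    return 10 * math.log10(sig_pow / noise_pow)

est = sqnr_db(10)
assert abs(est - (6.02 * 10 + 1.76)) < 1.0   # within 1 dB of the rule of thumb
```

Every extra bit of resolution thus buys roughly 6 dB of SQNR, which is why accuracy, speed, and power trade off so sharply in converter design.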
Analogies between antiferromagnets and antiferroelectrics
Enz, C.P.; Matthias, B.T.
1980-01-01
Ferro- and antiferromagnetism in the Laves phase TiBe2-xCux occurs for 0.1 … The antiferroelectric NH4H2PO4 and its solid solutions with TlH2PO4 and with the ferroelectric KH2PO4 are discussed as a function of deuteration and of pressure. Another analogy as a function of pressure is established with the antiferroelectric perovskite PbZrO3. (author)
Novel phosphanucleoside analogs of dideoxynucleosides
Páv, Ondřej; Buděšínský, Miloš; Rosenberg, Ivan
2017-01-01
Roč. 73, č. 34 (2017), s. 5220-5228 ISSN 0040-4020 R&D Projects: GA ČR(CZ) GA17-12703S; GA ČR GA13-26526S; GA MZd NV15-31604A Institutional support: RVO:61388963 Keywords : phosphanucleoside * nucleoside analog * ring-closing metathesis * stereoselective hydroboration * chiral resolution Subject RIV: CC - Organic Chemistry OBOR OECD: Organic chemistry Impact factor: 2.651, year: 2016
Anon.
1993-01-01
Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring, continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. Delicate neutrino oscillation effects could become visible for the first time…
Electrostatic analogy for symmetron gravity
Ogden, Lillie; Brown, Katherine; Mathur, Harsh; Rovelli, Kevin
2017-12-01
The symmetron model is a scalar-tensor theory of gravity with a screening mechanism that suppresses the effect of the symmetron field at high densities characteristic of the Solar System and laboratory scales but allows it to act with gravitational strength at low density on the cosmological scale. We elucidate the screening mechanism by showing that in the quasistatic Newtonian limit there are precise analogies between symmetron gravity and electrostatics for both strong and weak screening. For strong screening we find that large dense bodies behave in a manner analogous to perfect conductors in electrostatics. Based on this analogy we find that the symmetron field exhibits a lightning rod effect wherein the field gradients are enhanced near the ends of pointed or elongated objects. An ellipsoid placed in a uniform symmetron gradient is shown to experience a torque. By symmetry there is no gravitational torque in this case. Hence this effect unmasks the symmetron and might serve as the basis for future laboratory experiments. The symmetron force between a point mass and a large dense body includes a component corresponding to the interaction of the point mass with its image in the larger body. None of these effects have counterparts in the Newtonian limit of Einstein gravity. We discuss the similarities between symmetron gravity and the chameleon model as well as the differences between the two.
The Development of Analogical Reasoning Processes.
Sternberg, Robert J.; Rifkin, Bathsheva
1979-01-01
Two experiments were conducted to test the generalizability to children of a theory of analogical reasoning processes, originally proposed for adults, and to examine the development of analogical reasoning processes in terms of five proposed sources of cognitive development. (MP)
16-channel analog store and multiplexer unit
Brossard, M; Kulka, Z [Clermont-Ferrand-2 Univ., 63 - Aubiere (France). Lab. de Physique Corpusculaire
1979-03-15
A 16-channel analog store and multiplexer unit is described. The unit enables storage and selection of analog information, which is then digitally encoded by a single ADC. This solution is economically attractive, particularly in multidetector pulse-height analysis systems.
National Radiological Fixed Lab Data
U.S. Environmental Protection Agency — The National Radiological Fixed Laboratory Data Asset includes data produced in support of various clients such as other EPA offices, EPA Regional programs, DOE,...
Can mushrooms fix atmospheric nitrogen?
Unknown
Introduction. Rhizobium is a genus of symbiotic N2-fixing soil bacteria that induce … To produce biofilm cultures, a 2 × 2 cm yeast mannitol agar (YMA) slab was … determination of antibiotic susceptibilities of bacterial biofilms; J. Clin. Microbiol.
Elevated Fixed Platform Test Facility
Federal Laboratory Consortium — The Elevated Fixed Platform (EFP) is a helicopter recovery test facility located at Lakehurst, NJ. It consists of a 60 by 85 foot steel and concrete deck built atop...
Atheism and Analogy: Aquinas Against the Atheists
Linford, Daniel J.
2014-01-01
In the 13th century, Thomas Aquinas developed two models for how humans may speak of God - either by the analogy of proportion or by the analogy of proportionality. Aquinas's doctrines initiated a theological debate concerning analogy that spanned several centuries. In the 18th century, there appeared two closely related arguments for atheism which both utilized analogy for their own purposes. In this thesis, I show that one argument, articulated by the French materialist Paul-Henri Thiry Bar...
Enhancing programming logic thinking using analogy mapping
Sukamto, R. A.; Megasari, R.
2018-05-01
Programming logic thinking is the most important competence for computer science students. However, programming is one of the most difficult subjects in computer science programs. This paper reports our work on enhancing students' programming logic thinking using Analogy Mapping for a basic programming subject. Analogy Mapping is a computer application which converts source code into analogy images. This research used time series evaluation, and the result showed that Analogy Mapping can enhance students' programming logic thinking.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
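As a minimal illustration of the principle itself (a toy sketch, not an example from the article): the maximum-entropy distribution on a finite support subject to a fixed mean is exponential in form, p_i ∝ exp(λ·i), and the multiplier λ can be found by bisection on the mean constraint. The support and target mean below are invented numbers.

```python
import numpy as np

# Maximum-entropy distribution on {0,...,5} with a prescribed mean.
# Made-up support and target; purely illustrative.
support = np.arange(6)
target_mean = 1.5

def maxent_dist(lam):
    """Exponential-family form of the maxent solution for a mean constraint."""
    w = np.exp(lam * support)
    return w / w.sum()

# The mean of maxent_dist(lam) increases monotonically with lam,
# so bisection recovers the multiplier.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if maxent_dist(mid) @ support < target_mean:
        lo = mid
    else:
        hi = mid
p = maxent_dist(0.5 * (lo + hi))
```

Among all distributions on this support with the given mean, `p` has the largest entropy; any other constraint set would simply add more multipliers.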
Orgill, Mary Kay; Thomas, Megan
2007-01-01
Science classes are full of abstract or challenging concepts that are easier to understand if an analogy is used to illustrate the points. Effective analogies motivate students, clarify students' thinking, help students overcome misconceptions, and give students ways to visualize abstract concepts. When they are used appropriately, analogies can…
Science Teachers' Analogical Reasoning
Mozzer, Nilmara Braga; Justi, Rosária
2013-01-01
Analogies can play a relevant role in students' learning. However, for the effective use of analogies, teachers should not only have a well-prepared repertoire of validated analogies, which could serve as bridges between the students' prior knowledge and the scientific knowledge they desire them to understand, but also know how to…
The Micro-Category Account of Analogy
Green, Adam E.; Fugelsang, Jonathan A.; Kraemer, David J. M.; Dunbar, Kevin N.
2008-01-01
Here, we investigate how activation of mental representations of categories during analogical reasoning influences subsequent cognitive processing. Specifically, we present and test the central predictions of the "Micro-Category" account of analogy. This account emphasizes the role of categories in aligning terms for analogical mapping. In a…
Further evaluation of the tropane analogs of haloperidol.
Sampson, Dinithia; Bricker, Barbara; Zhu, Xue Y; Peprah, Kwakye; Lamango, Nazarius S; Setola, Vincent; Roth, Bryan L; Ablordeppey, Seth Y
2014-09-01
Previous work from our labs has indicated that a tropane analog of haloperidol with potent D2 binding but designed to avoid the formation of MPP(+)-like metabolites, such as 4-(4-chlorophenyl)-1-(4-(4-fluorophenyl)-4-oxobutyl)pyridin-1-ium (BCPP(+)) still produced catalepsy, suggesting a strong role for the D2 receptor in the production of catalepsy in rats, and hence EPS in humans. This study tested the hypothesis that further modifications of the tropane analog to produce compounds with less potent binding to the D2 receptor than haloperidol, would produce less catalepsy. These tests have now revealed that while haloperidol produced maximum catalepsy, these compounds produced moderate to low levels of catalepsy. Compound 9, with the least binding affinity to the D2R, produced the least catalepsy and highest Minimum Adverse Effective Dose (MAED) of the analogs tested regardless of their affinities at other receptors including the 5-HT1AR. These observations support the hypothesis that moderation of the D2 binding of the tropane analogs could reduce catalepsy potential in rats and consequently EPS in man. Published by Elsevier Ltd.
Chrysikou, Evangelia G; Thompson-Schill, Sharon L
2010-06-01
The proposed theory can account for analogies based on learned relationships between elements in the source and target domains. However, its explanatory power regarding the discovery of new relationships during analogical reasoning is limited. We offer an alternative perspective on the role of PFC in analogical thought that may better address different types of analogical mappings.
Practical analog electronics for technicians
Kimber, W A
2013-01-01
'Practical Analog Electronics for Technicians' not only provides an accessible introduction to electronics, but also supplies all the problems and practical activities needed to gain hands-on knowledge and experience. This emphasis on practice is surprisingly unusual in electronics texts, and has already gained Will Kimber popularity through the companion volume, 'Practical Digital Electronics for Technicians'. Written to cover the Advanced GNVQ optional unit in electronics, this book is also ideal for BTEC National, A-level electronics and City & Guilds courses. Together with 'Practical Digit
Resistive RAMs as analog trimming elements
Aziza, H.; Perez, A.; Portal, J. M.
2018-04-01
This work investigates the use of Resistive Random Access Memory (RRAM) as an analog trimming device. The analog storage feature of the RRAM cell is evaluated, and the ability of the RRAM to hold several resistance states is exploited to propose analog trim elements. To modulate the memory cell resistance, a series of short programming pulses is applied across the RRAM cell, allowing fine calibration of the RRAM resistance. The non-volatility of the RRAM means the analog device powers up already calibrated for the system in which the trimmed structure is embedded. To validate the concept, a test structure consisting of a voltage reference is evaluated.
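The pulse-by-pulse calibration idea can be sketched as a simple closed loop (a toy model, not the authors' circuit; the step size, tolerance, and resistance values are invented for illustration):

```python
# Toy trimming loop: apply short programming pulses until a simulated
# cell resistance lands in a target window. All numbers are invented.
def trim(r_initial, r_target, step=50.0, tol=100.0, max_pulses=1000):
    """Pulse the cell toward the target resistance; return (r, pulse count)."""
    r = r_initial
    pulses = 0
    while abs(r - r_target) > tol and pulses < max_pulses:
        # Each pulse nudges the resistance by one step toward the target.
        r += step if r < r_target else -step
        pulses += 1
    return r, pulses

r, n = trim(r_initial=12000.0, r_target=8000.0)
```

Because the state is non-volatile, the loop runs once at calibration time; at every later power-up the cell simply presents the trimmed resistance.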
Analog and mixed-signal electronics
Stephan, Karl
2015-01-01
A practical guide to analog and mixed-signal electronics, with an emphasis on design problems and applications This book provides an in-depth coverage of essential analog and mixed-signal topics such as power amplifiers, active filters, noise and dynamic range, analog-to-digital and digital-to-analog conversion techniques, phase-locked loops, and switching power supplies. Readers will learn the basics of linear systems, types of nonlinearities and their effects, op-amp circuits, the high-gain analog filter-amplifier, and signal generation. The author uses system design examples to motivate
Analog circuit design art, science, and personalities
Williams, Jim
1991-01-01
Analog Circuit Design: Art, Science, and Personalities discusses the many approaches and styles in the practice of analog circuit design. The book is written in an informal yet informative manner, making it easily understandable to those new in the field. The selection covers the definition, history, current practice, and future direction of analog design; the practice proper; and the styles in analog circuit design. The book also includes the problems usually encountered in analog circuit design; approach to feedback loop design; and other different techniques and applications. The text is
Statistics of the first passage time of Brownian motion conditioned by maximum value or area
Kearney, Michael J; Majumdar, Satya N
2014-01-01
We derive the moments of the first passage time for Brownian motion conditioned by either the maximum value or the area swept out by the motion. These quantities are the natural counterparts to the moments of the maximum value and area of Brownian excursions of fixed duration, which we also derive for completeness within the same mathematical framework. Various applications are indicated. (paper)
Maximum stellar iron core mass
Vol. 60, No. 3, March 2003, pp. 415–422. F W GIACOBBE, Chicago Research Center/American Air Liquide … iron core compression due to the weight of non-ferrous matter overlying the iron cores within large … thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
Neutron spectra unfolding with maximum entropy and maximum likelihood
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
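A standard way to realize Poisson maximum-likelihood unfolding with guaranteed positivity, in the same spirit as the abstract's claim, is the EM (Richardson-Lucy) iteration. The sketch below is illustrative: the response matrix is made up, and this is not necessarily the paper's exact algorithm.

```python
import numpy as np

# R[i, j] = probability that an event in true bin j lands in measured
# bin i (invented numbers; columns sum to 1 here, i.e. full efficiency).
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
true_spectrum = np.array([100.0, 50.0, 80.0])
measured = R @ true_spectrum          # noise-free counts for the demo

def unfold(R, m, iters=2000):
    """EM / Richardson-Lucy iteration for Poisson-distributed counts."""
    x = np.full(R.shape[1], m.sum() / R.shape[1])   # positive initial guess
    eff = R.sum(axis=0)                             # per-bin efficiency
    for _ in range(iters):
        pred = R @ x
        # Multiplicative update: a positive start stays positive forever.
        x *= (R.T @ (m / pred)) / eff
    return x

x = unfold(R, measured)
```

The multiplicative form of the update is what enforces positivity at every step, mirroring the "always brings a positive solution" property claimed above.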
Analogical reasoning in schizophrenic delusions.
Simpson, Jane; Done, D John
2004-09-01
Reasoning ability has often been argued to be impaired in people with schizophrenic delusions, although evidence for this is far from convincing. This experiment examined the analogical reasoning abilities of several groups of patients, including non-deluded and deluded schizophrenics, to test the hypothesis that performance by the deluded schizophrenic group would be impaired. Eleven deluded schizophrenics, 10 depressed subjects, seven non-deluded schizophrenics and 16 matched non-psychiatric controls, who were matched on a number of key variables, were asked to solve an analogical reasoning task. Performance by the deluded schizophrenic group was certainly impaired when compared with the depressed and non-psychiatric control groups though less convincingly so when compared with the non-deluded schizophrenic group. The impairment shown by the deluded schizophrenic group seemed to occur at the initial stage of the reasoning task. The particular type of impairment shown by the deluded subjects was assessed in relation to other cognitive problems already researched and the implications of these problems on reasoning tasks and theories of delusions was discussed.
Reliability of analog quantum simulation
Sarovar, Mohan [Sandia National Laboratories, Digital and Quantum Information Systems, Livermore, CA (United States); Zhang, Jun; Zeng, Lishan [Shanghai Jiao Tong University, Joint Institute of UMich-SJTU, Key Laboratory of System Control and Information Processing (MOE), Shanghai (China)
2017-12-15
Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. Specifically, how do we know whether an analog simulation of a quantum model will produce predictions that agree with the ideal model in the presence of inevitable imperfections? At the same time there is a widely held expectation that certain quantum simulation questions will be robust to errors and perturbations in the underlying hardware. Resolving these two points of view is a critical step in making the most of this promising technology. In this work we formalize the notion of AQS reliability by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach naturally reveals the importance of model symmetries in dictating the robust properties. To demonstrate the approach, we characterize the robust features of a variety of quantum many-body models. (orig.)
Pelgrom, Marcel J. M
2013-01-01
This textbook is appropriate for use in graduate-level curricula in analog to digital conversion, as well as for practicing engineers in need of a state-of-the-art reference on data converters. It discusses various analog-to-digital conversion principles, including sampling, quantization, reference generation, nyquist architectures and sigma-delta modulation. This book presents an overview of the state-of-the-art in this field and focuses on issues of optimizing accuracy and speed, while reducing the power level. This new, second edition emphasizes novel calibration concepts, the specific requirements of new systems, the consequences of 45-nm technology and the need for a more statistical approach to accuracy. Pedagogical enhancements to this edition include more than twice the exercises available in the first edition, solved examples to introduce all key, new concepts and warnings, remarks and hints, from a practitioner’s perspective, wherever appropriate. Considerable background information and pr...
Frozen orbit realization using LQR analogy
Nagarajan, N.; Rayan, H. Reno
In the case of remote sensing orbits, the Frozen Orbit concept minimizes altitude variations over a given region by passive means. This is achieved by establishing the mean eccentricity vector at the orbital poles, i.e., by fixing the mean argument of perigee at 90 deg with an appropriate eccentricity to balance the perturbations due to the zonal harmonics J2 and J3 of the Earth's potential. The eccentricity vector is a vector whose magnitude is the eccentricity and whose direction is the argument of perigee. Launcher dispersions result in an eccentricity vector away from the frozen orbit values. The objective is then to formulate an orbit maneuver strategy that minimizes the fuel required to achieve the frozen orbit in the presence of visibility and impulse constraints. It is shown that the motion of the eccentricity vector around the frozen perigee can be approximated as a circle. Combining this circular motion with the maneuver equation yields the discrete equation X(k+1) = AX(k) + Bu(k), where X is the state (the eccentricity vector components), A the state transition matrix, u the scalar control force (dV in this case), and B the control matrix that transforms dV into an eccentricity vector change. On this basis, it is shown that the fuel optimization problem can be treated as a Linear Quadratic Regulator (LQR) problem, and the maneuver can be solved using control system design tools such as MATLAB through an analogous LQR design.
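The discrete model and LQR analogy above can be sketched numerically. In this minimal Python sketch (all numbers are invented, not taken from the paper), A rotates the two-component eccentricity-vector error about the frozen point, B maps a scalar dV into an eccentricity change, and the gain is obtained by iterating the discrete Riccati recursion:

```python
import numpy as np

# Toy 2-state model: x = eccentricity-vector error (ex, ey) about the
# frozen point; u = scalar dV per maneuver opportunity. Rotation angle,
# control influence, and weights are assumed values.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # circular e-vector motion
B = np.array([[0.0], [0.1]])                       # dV -> eccentricity change
Q = np.eye(2)                                      # penalty on e-vector error
R = np.array([[1.0]])                              # penalty on fuel (dV)

def dlqr_gain(A, B, Q, R, iters=1000):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati map."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr_gain(A, B, Q, R)

# Closed loop: maneuvers u = -K x drive the eccentricity vector
# toward the frozen-orbit target (the origin of the error coordinates).
x = np.array([[0.01], [0.02]])    # launcher-dispersion error (assumed)
for _ in range(500):
    x = (A - B @ K) @ x
```

This is the same computation MATLAB's LQR design tools perform; the quadratic control penalty R is what stands in for fuel cost in the analogy.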
Fixed points of quantum operations
Arias, A.; Gheondea, A.; Gudder, S.
2002-01-01
Quantum operations frequently occur in quantum measurement theory, quantum probability, quantum computation, and quantum information theory. If an operator A is invariant under a quantum operation φ, we call A a φ-fixed point. Physically, the φ-fixed points are the operators that are not disturbed by the action of φ. Our main purpose is to answer the following question. If A is a φ-fixed point, is A compatible with the operation elements of φ? We shall show in general that the answer is no and we shall give some sufficient conditions under which the answer is yes. Our results will follow from some general theorems concerning completely positive maps and injectivity of operator systems and von Neumann algebras
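As a concrete illustration of the φ-fixed-point notion (a toy example, not drawn from the paper): for a phase-damping channel whose Kraus operators are proportional to the identity and the Pauli Z matrix, any operator commuting with every Kraus element (such as Z itself) is a fixed point, while the Pauli X matrix is not. The mixing weight p = 0.25 is an arbitrary choice.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z
X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
p = 0.25
kraus = [np.sqrt(p) * I2, np.sqrt(1 - p) * Z]    # phase-damping channel

def phi(A):
    """Quantum operation in Kraus form: phi(A) = sum_i E_i A E_i^dagger."""
    return sum(E @ A @ E.conj().T for E in kraus)
```

Here Z commutes with both operation elements, so phi(Z) = Z, whereas phi(X) = (2p - 1)X, so X is disturbed by the channel; this is exactly the compatibility question the abstract raises.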
Maximum Mass of Hybrid Stars in the Quark Bag Model
Alaverdyan, G. B.; Vartanyan, Yu. L.
2017-12-01
The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
On Maximum Entropy and Inference
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Padgett, Wayne T
2009-01-01
This book is intended to fill the gap between the ""ideal precision"" digital signal processing (DSP) that is widely taught, and the limited precision implementation skills that are commonly required in fixed-point processors and field programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory
Apparatus for fixing radioactive waste
Murphy, J.D.; Pirro, J. Jr.; Lawrence, M.; Wisla, S.F.
1975-01-01
An apparatus for fixing radioactive waste is disclosed, in which the waste is collected as a slurry in aqueous media in a metering tank located within the nuclear facility. Collection of waste continues from time to time until a quantity sufficient to make up a full shipment to a burial ground has been accumulated. The slurry is then cast into shipping containers for shipment to a burial ground or the like, by metering it through a mixer into which fixing materials are simultaneously metered at a rate that yields the desired proportions of materials. (U.S.)
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
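For sudden valve closure, the classical upper-bound estimate of the surge is the Joukowsky relation, Δp = ρ·a·Δv. A sketch with assumed pipe parameters (not values from the study):

```python
# Joukowsky estimate of the maximum water-hammer surge for instantaneous
# valve closure: dp = rho * a * dv. All numbers are assumed.
rho = 1000.0    # water density, kg/m^3
a = 1200.0      # pressure-wave speed in the pipe, m/s
dv = 2.0        # arrested flow velocity, m/s

dp = rho * a * dv      # surge pressure rise, Pa
dp_bar = dp / 1e5      # same value expressed in bar
```

Slower closures and pipe friction reduce the surge below this bound, which is why transient simulation tools are used for the detailed design.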
Yunfeng Shan
2008-01-01
Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
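The "maximum gene-support tree" selection step reduces, in essence, to picking the most frequent topology among the single-gene trees. A toy sketch (the topology strings are invented placeholders; real work would first canonicalize Newick trees so that equivalent topologies compare equal):

```python
from collections import Counter

# One inferred topology per orthologous gene, as canonical strings.
# Invented placeholder data for three species A, B, C.
gene_trees = [
    "((A,B),C)", "((A,B),C)", "((A,C),B)",
    "((A,B),C)", "((B,C),A)", "((A,B),C)",
]

# The MGS tree is simply the modal topology; its count is the support.
mgs_tree, support = Counter(gene_trees).most_common(1)[0]
```

Weighting by sequence length (the WMGS variant) would replace the plain count with a weighted tally over the same topologies.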
Using analogical problem solving with different scaffolding supports to learn about friction
Lin, Shih-Yin; Singh, Chandralekha
2012-02-01
Prior research suggests that many students believe that the magnitude of the static frictional force is always equal to its maximum value. Here, we examine introductory students' ability to learn from analogical reasoning (with different scaffolding supports provided) between two problems that are similar in terms of the physics principle involved, but one of which involves static friction, which often triggers the misleading notion. To help students process the analogy deeply and contemplate whether the static frictional force was at its maximum value, students in different recitation classrooms received different scaffolding support. We discuss students' performance in the different groups.
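The misconception targeted above is easy to illustrate numerically: for a block at rest on an incline, static friction supplies only the force needed for equilibrium, which is generally below its maximum value μs·N. The mass, angle, and friction coefficient below are made-up values.

```python
import math

# Block resting on an incline: compare the friction actually required
# for equilibrium with the maximum static friction available.
m, g = 2.0, 9.8            # mass (kg) and gravitational acceleration (m/s^2)
theta = math.radians(20)   # incline angle
mu_s = 0.6                 # coefficient of static friction

N = m * g * math.cos(theta)           # normal force
f_required = m * g * math.sin(theta)  # friction needed to hold the block
f_max = mu_s * N                      # maximum available static friction
```

Since f_required < f_max here, the block stays put with friction well below its maximum, which is precisely the point the misleading "always maximum" notion misses.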
LCLS Maximum Credible Beam Power
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
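The bookkeeping behind such an estimate is simple: average beam power is charge per pulse times repetition rate times beam energy. A sketch with hypothetical numbers (not the official LCLS safety-envelope values):

```python
# Average beam power: P [W] = Q [C] * f [1/s] * E [eV, numerically in volts].
# All parameter values below are assumed for illustration only.
charge_nC = 1.0        # charge per rf pulse, nC
rep_rate_hz = 120.0    # pulses per second
energy_gev = 14.0      # beam energy, GeV

power_w = (charge_nC * 1e-9) * rep_rate_hz * (energy_gev * 1e9)
```

The maximum credible value would then use the largest credible charge, rate, and energy simultaneously, which is why the gun charge limit in Section 3 matters for the final number.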
Automatic activation of categorical and abstract analogical relations in analogical reasoning.
Green, Adam E; Fugelsang, Jonathan A; Dunbar, Kevin N
2006-10-01
We examined activation of concepts during analogical reasoning. Subjects made either analogical judgments or categorical judgments about four-word sets. After each four-word set, they named the ink color of a single word in a modified Stroop task. Words that referred to category relations were primed (as indicated by longer response times on Stroop color naming) subsequent to analogical judgments and categorical judgments. This finding suggests that activation of category concepts plays a fundamental role in analogical thinking. When colored words referred to analogical relations, priming occurred subsequent to analogical judgments, but not to categorical judgments, even though identical four-word stimuli were used for both types of judgments. This finding lends empirical support to the hypothesis that, when people comprehend the analogy between two items, they activate an abstract analogical relation that is distinct from the specific content items that compose the analogy.
The Young Solar Analogs Project
Gray, Richard O.; Saken, J. M.; Corbally, C. J.; Fuller, V.; Kahvaz, Y.; Lambert, R.; Newsome, I.; Seeds, M.
2013-01-01
We are carrying out a long-term project of measuring chromospheric activity and brightness variations in 31 young solar analogs (YSAs) using facilities at the Dark Sky Observatory (DSO - Appalachian State University) and the Vatican Advanced Technology Telescope (VATT). These YSAs are solar-type (spectral types F8 - K2) stars with ages ranging from 0.3 - 1.5 Gyr. The goal of this project is to gain better understanding of the magnetic activity of the early Sun, and especially how that activity may have impacted the development of life on the Earth. This project will also yield insights into the space environments experienced by young Earth analogs. We are currently in the 6th year of spectroscopic measurements of these stars: these data include Ca II H & K chromospheric flux measurements, and narrow-band measurements in the photospheric G-band, both obtained with the G/M spectrograph on the DSO 32-inch telescope. We will present evidence of activity cycles in a number of our stars, as well as periods determined from rotational modulation of the spectroscopic indices. The relationship between the Ca II activity index and the G-band index will be explored. NSF support for our project has provided funds for the construction of a robotic photometric telescope to monitor the program stars in a 5-passband system (Strömgren-v, Johnson-Cousins B, V, and R, and a 3-nm wide Hα filter). The robotic telescope has been functional since April 2012 and observes the program stars on every clear night; combined with the Piggy-back telescope attached to the DSO 32-inch, we now have photometric observations on over 130 nights stretching over nearly 2 years. We will examine the relationships between variations in the Ca II H & K index, the G-band index and the photometric bands. This project is supported by the National Science Foundation, grant AST-1109158.
The Young Solar Analogs Project
Gray, Richard O.; Saken, J. M.; Corbally, C. J.; Seeds, M. F.; Morrison, S. S.
2012-01-01
We are carrying out a long-term project of measuring chromospheric activity and brightness variations in 31 young solar analogs (YSAs) using the Dark Sky Observatory (DSO -- Appalachian State University) 32-inch telescope and the G/M spectrograph. These YSAs are solar-type (spectral types F8 - K2) stars with ages ranging from 0.3 - 1.5 Gyr. The goal of this project is to gain better understanding of the magnetic activity of the early Sun, and especially how that activity may have impacted the development of life on the Earth. This project will also yield insights into the space environments experienced by young Earth analogs. We are currently in our 5th year of obtaining Ca II K & H chromospheric flux measurements, and are beginning to see signs of long-term activity cycles in a number of our stars. In addition, rotational modulation of the chromospheric fluxes is detectable in our data, and we have determined rotational periods for many of our stars. Short timescale increases in the K & H fluxes have been observed in a number of our stars; these events may be related to stellar flares. VATTSpec, a new moderate-resolution spectrograph on the 1.8-m Vatican Telescope in Arizona, has recently become involved with the project. This spectrograph will increase our ability to detect short-term changes in stellar activity on timescales of hours to minutes. We have been monitoring the program stars for one year in a multi-band photometric system consisting of Stromgren-v, and Johnson B, V, and R filters. We will soon add a narrow-band H-alpha filter to the system. Photometry is being carried out with a small piggy-back telescope on the 32-inch, but a robotic photometric telescope is currently being installed at DSO for this purpose. This project is supported by the National Science Foundation.
Flat Coalgebraic Fixed Point Logics
Schröder, Lutz; Venema, Yde
Fixed point logics are widely used in computer science, in particular in artificial intelligence and concurrency. The most expressive logics of this type are the μ-calculus and its relatives. However, popular fixed point logics tend to trade expressivity for simplicity and readability, and in fact often live within the single variable fragment of the μ-calculus. The family of such flat fixed point logics includes, e.g., CTL, the *-nesting-free fragment of PDL, and the logic of common knowledge. Here, we extend this notion to the generic semantic framework of coalgebraic logic, thus covering a wide range of logics beyond the standard μ-calculus including, e.g., flat fragments of the graded μ-calculus and the alternating-time μ-calculus (such as ATL), as well as probabilistic and monotone fixed point logics. Our main results are completeness of the Kozen-Park axiomatization and a timed-out tableaux method that matches ExpTime upper bounds inherited from the coalgebraic μ-calculus but avoids using automata.
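The single-variable fixed point logics discussed above can be evaluated by straightforward Kleene iteration. As a minimal sketch (the Kripke structure and state names are made up for illustration), the CTL formula EF p, i.e. the least fixed point μX. p ∨ ◇X, can be computed as follows:

```python
# Least-fixed-point evaluation of the CTL formula EF p over a small
# Kripke structure, illustrating the single-variable mu-calculus
# fragment discussed above. States and transitions are illustrative.

def ef(states, succ, p):
    """Compute [[EF p]] = mu X. (p or <>X) by Kleene iteration."""
    x = set(p)                     # start from the states satisfying p
    while True:
        # add every state that has a successor already in X
        new = x | {s for s in states if succ.get(s, set()) & x}
        if new == x:               # fixed point reached
            return x
        x = new

states = {"a", "b", "c", "d"}
succ = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"d"}}
print(sorted(ef(states, succ, {"c"})))  # → ['a', 'b', 'c']
```

The iteration terminates because the state set is finite and each step is monotone, which is exactly the property the Kozen-Park axiomatization captures syntactically.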
Enumerating matroids of fixed rank
Pendavingh, R.; van der Pol, J.
2017-01-01
It has been conjectured that asymptotically almost all matroids are sparse paving, i.e. that s(n) ∼ m(n), where m(n) denotes the number of matroids on a fixed groundset of size n, and s(n) the number of sparse paving matroids. In an earlier paper, we showed that
Fixed Costs and Hours Constraints
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
Adhesives for fixed orthodontic bands.
Millett, Declan T; Glenny, Anne-Marie; Mattick, Rye Cr; Hickman, Joy; Mandall, Nicky A
2016-10-25
Orthodontic treatment involves using fixed or removable appliances (dental braces) to correct the positions of teeth. It has been shown that the quality of treatment result obtained with fixed appliances is much better than with removable appliances. Fixed appliances are, therefore, favoured by most orthodontists for treatment. The success of a fixed orthodontic appliance depends on the metal attachments (brackets and bands) being attached securely to the teeth so that they do not become loose during treatment. Brackets are usually attached to the front and side teeth, whereas bands (metal rings that go round the teeth) are more commonly used on the back teeth (molars). A number of adhesives are available to attach bands to teeth and it is important to understand which group of adhesives bond most reliably, as well as reducing or preventing dental decay during the treatment period. To evaluate the effectiveness of the adhesives used to attach bands to teeth during fixed appliance treatment, in terms of: (1) how often the bands come off during treatment; and (2) whether they protect the banded teeth against decay during fixed appliance treatment. The following electronic databases were searched: Cochrane Oral Health's Trials Register (searched 2 June 2016), Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 5) in the Cochrane Library (searched 2 June 2016), MEDLINE Ovid (1946 to 2 June 2016) and EMBASE Ovid (1980 to 2 June 2016). We searched ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. Randomised and controlled clinical trials (RCTs and CCTs) (including split-mouth studies) of adhesives used to attach orthodontic bands to molar teeth were selected. Patients with full arch fixed orthodontic appliance(s) who had bands attached to molars were included. All review authors
Priming analogical reasoning with false memories.
Howe, Mark L; Garner, Sarah R; Threadgold, Emma; Ball, Linden J
2015-08-01
Like true memories, false memories are capable of priming answers to insight-based problems. Recent research has attempted to extend this paradigm to more advanced problem-solving tasks, including those involving verbal analogical reasoning. However, these experiments are constrained inasmuch as problem solutions could be generated via spreading activation mechanisms (much like false memories themselves) rather than using complex reasoning processes. In three experiments we examined false memory priming of complex analogical reasoning tasks in the absence of simple semantic associations. In Experiment 1, we demonstrated the robustness of false memory priming in analogical reasoning when backward associative strength among the problem terms was eliminated. In Experiments 2a and 2b, we extended these findings by demonstrating priming on newly created homonym analogies that can only be solved by inhibiting semantic associations within the analogy. Overall, the findings of the present experiments provide evidence that the efficacy of false memory priming extends to complex analogical reasoning problems.
Neural correlates of creativity in analogical reasoning.
Green, Adam E; Kraemer, David J M; Fugelsang, Jonathan A; Gray, Jeremy R; Dunbar, Kevin N
2012-03-01
Brain-based evidence has implicated the frontal pole of the brain as important for analogical mapping. Separately, cognitive research has identified semantic distance as a key determinant of the creativity of analogical mapping (i.e., more distant analogies are generally more creative). Here, we used functional magnetic resonance imaging to assess brain activity during an analogy generation task in which we varied the semantic distance of analogical mapping (as derived quantitatively from a latent semantic analysis). Data indicated that activity within an a priori region of interest in left frontopolar cortex covaried parametrically with increasing semantic distance, even after removing effects of task difficulty. Results implicate increased recruitment of frontopolar cortex as a mechanism for integrating semantically distant information to generate solutions in creative analogical reasoning. 2012 APA, all rights reserved
An emergent approach to analogical inference
Thibodeau, Paul H.; Flusberg, Stephen J.; Glick, Jeremy J.; Sternberg, Daniel A.
2013-03-01
In recent years, a growing number of researchers have proposed that analogy is a core component of human cognition. According to the dominant theoretical viewpoint, analogical reasoning requires a specific suite of cognitive machinery, including explicitly coded symbolic representations and a mapping or binding mechanism that operates over these representations. Here we offer an alternative approach: we find that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery. The results also parallel findings from the developmental literature on analogy, demonstrating a shift from an initial reliance on surface feature similarity to the use of relational similarity later in training. Variants of the model allow us to consider and rule out alternative accounts of its performance. We conclude by discussing how these findings can potentially refine our understanding of the processes that are required to perform analogical inference.
Fixed Point Learning Based Intelligent Traffic Control System
Zongyao, Wang; Cong, Sui; Cheng, Shao
2017-10-01
Fixed point learning has become an important tool for analysing large-scale distributed systems such as urban traffic networks. This paper presents a fixed point learning based intelligent traffic network control system. The system applies the convergence property of the fixed point theorem to optimize traffic flow density. The intelligent traffic control system achieves maximum road resource usage by averaging traffic flow density across the traffic network. The intelligent traffic network control system is built on a decentralized structure and intelligent cooperation; no central controller is needed to manage the system. The proposed system is simple, effective and feasible for practical use. The performance of the system is tested via theoretical proof and simulations. The results demonstrate that the system can effectively solve the traffic congestion problem and increase the vehicles' average speed. They also show that the system is flexible, reliable and feasible for practical use.
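The density-averaging idea can be sketched as a contraction mapping: each road repeatedly moves its density toward the mean of its neighbours' densities until a fixed point (uniform density) is reached. A minimal decentralized sketch; the road network, update rule, and initial densities are illustrative, not the paper's algorithm:

```python
# Decentralized density averaging: each road updates toward the mean
# of its neighbours' densities; on a connected symmetric network the
# iteration converges to a fixed point with uniform density.
# Topology and initial densities below are made up for illustration.

def balance(density, neighbors, alpha=0.5, tol=1e-9, max_iter=10000):
    d = dict(density)
    for _ in range(max_iter):
        new = {}
        for road, rho in d.items():
            nbr_mean = sum(d[n] for n in neighbors[road]) / len(neighbors[road])
            new[road] = rho + alpha * (nbr_mean - rho)   # move toward neighbours
        if max(abs(new[r] - d[r]) for r in d) < tol:      # fixed point reached
            return new
        d = new
    return d

neighbors = {"r1": ["r2", "r3"], "r2": ["r1", "r3"], "r3": ["r1", "r2"]}
result = balance({"r1": 0.9, "r2": 0.1, "r3": 0.2}, neighbors)
print({r: round(v, 3) for r, v in result.items()})  # densities equalize at 0.4
```

Because each update is local, no central controller is needed, mirroring the decentralized structure described in the abstract.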
Analog techniques in CEBAF's RF control system
Hovater, C.; Fugitt, J.
1989-01-01
Recent developments in high-speed analog technology have progressed into the areas of traditional RF technology. Diode related devices are being replaced by analog IC's in the CEBAF RF control system. Complex phase modulators and attenuators have been successfully tested at 70 MHz. They have three advantages over existing technology: lower cost, less temperature sensitivity, and more linearity. RF signal conditioning components and how to implement the new analog IC's will be covered in this paper. 4 refs., 5 figs
Design and Analysis of Reconfigurable Analog System
2011-02-01
(Report body garbled in extraction; recoverable citations: Analog Devices, Analog Dialogue 39-06, June 2005; [15] D. A. Johns, K. Martin, "Analog Integrated…")
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images.
Fermilab accelerator control system: Analog monitoring facilities
Seino, K.; Anderson, L.; Smedinghoff, J.
1987-10-01
Thousands of analog signals are monitored in different areas of the Fermilab accelerator complex. For general purposes, analog signals are sent over coaxial or twinaxial cables with varying lengths, collected at fan-in boxes and digitized with 12 bit multiplexed ADCs. For higher resolution requirements, analog signals are digitized at sources and are serially sent to the control system. This paper surveys ADC subsystems that are used with the accelerator control systems and discusses practical problems and solutions, and it describes how analog data are presented on the console system
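For context on the 12-bit multiplexed ADCs mentioned above, the voltage resolution follows directly from the code count: one least-significant bit spans the full-scale range divided by 2^12 = 4096. A quick sketch; the ±10 V full-scale range is an assumption for illustration, not a figure from the paper:

```python
# Resolution of a 12-bit ADC: one least-significant bit (LSB)
# corresponds to full-scale range / 2^12 codes. The +/-10 V
# full-scale range here is assumed for illustration.
BITS = 12
FULL_SCALE = 20.0            # volts, -10 V .. +10 V (assumed)
codes = 2 ** BITS            # 4096 distinct output codes
lsb = FULL_SCALE / codes     # volts per code
print(f"{codes} codes, 1 LSB = {lsb * 1000:.3f} mV")  # → 4096 codes, 1 LSB = 4.883 mV
```

This millivolt-scale step size explains why higher-resolution requirements call for digitizing at the source rather than after long coaxial or twinaxial runs.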
Relations as transformations: implications for analogical reasoning.
Leech, Robert; Mareschal, Denis; Cooper, Richard P
2007-07-01
We present two experiments assessing whether the size of a transformation instantiating a relation between two states of the world (e.g., shrinks) is a performance factor affecting analogical reasoning. The first experiment finds evidence of transformation size as a significant factor in adolescent analogical problem solving while the second experiment finds a similar effect on adult analogical reasoning using a markedly different analogical completion paradigm. The results are interpreted as providing evidence for the more general framework that cognitive representations of relations are best understood as mental transformations.
Epistemology of analogy: Knowledge, society and expression
Mauricio Beuchot
2017-06-01
In this article we expose the bases of analogical epistemology. This theory of knowledge lies between an extreme subjectivism and an extreme objectivism. Analogical hermeneutics is a realistic hermeneutics: it seeks the truth, but incorporates meaning and emotion. We have separated reason from experience, theory from praxis, and the mind or soul from the body. We have to bring them back together, lest we lose ourselves in the rational (which says little of the human being) or in the emotional (which lacks logical consistency). Analogical hermeneutic realism is able, thanks to analogy itself, to mediate this union.
Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie
2009-01-01
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
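The 90°-100°C figure can be roughly reproduced from a stripped-down balance in which the absorbed shortwave flux is entirely re-radiated as blackbody emission (sensible, latent and soil heat fluxes neglected; this is a back-of-the-envelope sketch, not the paper's full calculation):

```python
# Upper-bound surface temperature from a stripped-down energy balance:
# absorbed shortwave flux balanced by blackbody emission alone
# (sensible, latent and soil heat fluxes neglected in this sketch).
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed = 1000.0      # absorbed shortwave flux, W m^-2 (from the abstract)

t_kelvin = (absorbed / SIGMA) ** 0.25
print(f"T = {t_kelvin:.1f} K = {t_kelvin - 273.15:.1f} C")  # → T = 364.4 K = 91.3 C
```

The omitted heat fluxes all carry energy away from the surface, so the real maximum sits somewhat below this bound, consistent with the 90°-100°C range quoted above.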
Expert analogy use in a naturalistic setting
Kretz, Donald R.; Krawczyk, Daniel C.
2014-01-01
The use of analogy is an important component of human cognition. The type of analogy we produce and communicate depends heavily on a number of factors, such as the setting, the level of domain expertise present, and the speaker's goal or intent. In this observational study, we recorded economics experts during scientific discussion and examined the categorical distance and structural depth of the analogies they produced. We also sought to characterize the purpose of the analogies that were generated. Our results supported previous conclusions about the infrequency of superficial similarity in subject-generated analogs, but also showed that distance and depth characteristics were more evenly balanced than in previous observational studies. This finding was likely due to the nature of the goals of the participants, as well as the broader nature of their expertise. An analysis of analogical purpose indicated that the generation of concrete source examples of more general target concepts was most prevalent. We also noted frequent instances of analogies intended to form visual images of source concepts. Other common purposes for analogies were the addition of colorful speech, inclusion (i.e., subsumption) of a target into a source concept, or differentiation between source and target concepts. We found no association between depth and either of the other two characteristics, but our findings suggest a relationship between purpose and distance; i.e., that visual imagery typically entailed an outside-domain source whereas exemplification was most frequently accomplished using within-domain analogies. Overall, we observed a rich and diverse set of spontaneously produced analogical comparisons. The high degree of expertise within the observed group along with the richly comparative nature of the economics discipline likely contributed to this analogical abundance. PMID:25505437
Fixed target flammable gas upgrades
Schmitt, R.; Squires, B.; Gasteyer, T.; Richardson, R.
1996-12-01
In the past, fixed target flammable gas systems were not supported in an organized fashion. The Research Division, Mechanical Support Department began to support these gas systems for the 1995 run. This technical memo describes the new approach being used to supply chamber gasses to fixed target experiments at Fermilab. It describes the engineering design features, system safety, system documentation and performance results. Gas mixtures provide the medium for electron detection in proportional and drift chambers. Usually a mixture of a noble gas and a polyatomic quenching gas is used. Sometimes a small amount of electronegative gas is added as well. The mixture required is a function of the specific chamber design, including working voltage, gain requirements, high rate capability, aging and others. For the 1995 fixed target run all the experiments requested once through gas systems. We obtained a summary of problems from the 1990 fixed target run and made a summary of the operations logbook entries from the 1991 run. These summaries primarily include problems involving flammable gas alarms, but also include incidents where Operations was involved or informed. Usually contamination issues were dealt with by the experimenters. The summaries are attached. We discussed past operational issues with the experimenters involved. There were numerous incidents of drift chamber failure where contaminated gas was suspect. However analyses of the gas at the time usually did not show any particular problems. This could have been because the analysis did not look for the troublesome component, the contaminant was concentrated in the gas over the liquid and vented before the sample was taken, or that contaminants were drawn into the chambers directly through leaks or sub-atmospheric pressures. After some study we were unable to determine specific causes of past contamination problems, although in argon-ethane systems the problems were due to the ethane only
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
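In SI units the Planck luminosity c^5/G evaluates to roughly 3.6 × 10^52 W, a quick numerical check:

```python
# Numerical value of the Planck luminosity L_P = c^5 / G,
# the conjectured upper bound on luminosity discussed above.
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2

L_P = c ** 5 / G
print(f"L_P = {L_P:.2e} W")  # → L_P = 3.63e+52 W
```

Notably, L_P contains no factor of ħ: it is a purely classical general-relativistic scale, which is why it can emerge from nonlinear simulations of Einstein's equations.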
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
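The quantity being maximized can be sketched with a plug-in estimate of the mutual information between discrete classifier responses and true labels, I(Y; Ŷ) = H(Y) + H(Ŷ) − H(Y, Ŷ) computed from their joint histogram. This is a simplified illustration of the regularizer's target, not the paper's entropy-estimation scheme, and the label vectors are made up:

```python
import math
from collections import Counter

# Plug-in estimate of mutual information I(Y; Yhat) between true
# labels and classification responses, the quantity the proposed
# regularizer maximizes. Label vectors below are illustrative.

def entropy(counts, n):
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(y_true, y_pred):
    n = len(y_true)
    h_y = entropy(Counter(y_true), n)
    h_p = entropy(Counter(y_pred), n)
    h_joint = entropy(Counter(zip(y_true, y_pred)), n)
    return h_y + h_p - h_joint   # I(Y; Yhat) = H(Y) + H(Yhat) - H(Y, Yhat)

y_true = [0, 0, 1, 1, 0, 1, 0, 1]
perfect = list(y_true)                 # responses identical to the labels
noisy = [0, 1, 1, 0, 0, 1, 0, 1]       # two responses flipped
print(mutual_information(y_true, perfect))       # → 1.0 (labels fully determined)
print(mutual_information(y_true, noisy) < 1.0)   # → True
```

A classifier whose responses fully determine the labels attains the maximum I(Y; Ŷ) = H(Y); flipped responses leave residual uncertainty and lower the estimate, which is the behaviour the regularizer penalizes.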
Scintillation counter, maximum gamma aspect
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Shapiro, Joel H
2016-01-01
This text provides an introduction to some of the best-known fixed-point theorems, with an emphasis on their interactions with topics in analysis. The level of exposition increases gradually throughout the book, building from a basic requirement of undergraduate proficiency to graduate-level sophistication. Appendices provide an introduction to (or refresher on) some of the prerequisite material and exercises are integrated into the text, contributing to the volume’s ability to be used as a self-contained text. Readers will find the presentation especially useful for independent study or as a supplement to a graduate course in fixed-point theory. The material is split into four parts: the first introduces the Banach Contraction-Mapping Principle and the Brouwer Fixed-Point Theorem, along with a selection of interesting applications; the second focuses on Brouwer’s theorem and its application to John Nash’s work; the third applies Brouwer’s theorem to spaces of infinite dimension; and the fourth rests ...
Young Children's Analogical Reasoning in Science Domains
Haglund, Jesper; Jeppsson, Fredrik; Andersson, Johanna
2012-01-01
This exploratory study in a classroom setting investigates first graders' (age 7-8 years, N = 25) ability to perform analogical reasoning and create their own analogies for two irreversible natural phenomena: mixing and heat transfer. We found that the children who contributed actively to a full-class discussion were consistently successful at…
Children's Use of Analogy during Collaborative Reasoning
Lin, Tzu-Jung; Anderson, Richard C.; Hummel, John E.; Jadallah, May; Miller, Brian W.; Nguyen-Jahiel, Kim; Morris, Joshua A.; Kuo, Li-Jen; Kim, Il-Hee; Wu, Xiaoying; Dong, Ting
2012-01-01
This microgenetic study examined social influences on children's development of analogical reasoning during peer-led small-group discussions of stories about controversial issues. A total of 277 analogies were identified among 7,215 child turns for speaking during 54 discussions from 18 discussion groups in 6 fourth-grade classrooms (N = 120; age…
Patterns of Analogical Reasoning among Beginning Readers
Farrington-Flint, Lee; Wood, Clare; Canobi, Katherine H.; Faulkner, Dorothy
2004-01-01
Despite compelling evidence that analogy skills are available to beginning readers, few studies have actually explored the possibility of identifying individual differences in young children's analogy skills in early reading. The present study examined individual differences in children's use of orthographic and phonological relations between…
ANALOGICAL REASONING USING TRANSFORMATIONS OF RULES
Haraguchi, Makoto; 原口, 誠
1986-01-01
A formalism of analogical reasoning is presented. Analogical reasoning can be considered a deduction augmented with a function that transforms logical rules. From this viewpoint, the reasoning is defined in terms of deduction and can therefore be realized in a logic programming system. The reasoning system is described as an extension of a Prolog interpreter.
Analogies in high school Brazilian chemistry textbooks
Rosária Justi
2000-05-01
This paper presents and discusses an analysis of the analogies found in Brazilian secondary-level chemistry textbooks. The main aim of the analysis is to discuss whether such analogies can be considered good teaching models. From the results, some aspects concerning the teachers' role are discussed. Finally, some new research questions are emphasised.
Spectrometric analog-to-digital converter
Ormandzhiev, S.I.; Jordanov, V.T.
1988-01-01
A digit-by-digit counterbalancing (successive-approximation) converter with a sliding scale is presented, in which the number of channels equals the total number of states of the main digital-to-analog converter of the digit-by-digit counterbalancing system. An algorithm for computer-aided selection of suitable digital-to-analog converters is suggested.
An Analog Computer for Electronic Engineering Education
Fitch, A. L.; Iu, H. H. C.; Lu, D. D. C.
2011-01-01
This paper describes a compact analog computer and proposes its use in electronic engineering teaching laboratories to develop student understanding of applications in analog electronics, electronic components, engineering mathematics, control engineering, safe laboratory and workshop practices, circuit construction, testing, and maintenance. The…
MAPPING ANALOGIES ONTO ABSTRACT PHYSICS CONCEPTS
Nyoto Suseno
2014-11-01
This research found that most students have common difficulties with abstract physics concepts, and observation showed that lecturers have problems teaching abstract concepts in physics learning. The objective of this research is to find out how to overcome this problem. The research took place in physics education programs and a senior high school; data were collected by questionnaire, observation, and interview. The lecturers' way of dealing with this difficulty is to use analogy to make abstract concepts concrete. This approach is sound, because analogies are dynamic tools that facilitate understanding, rather than representations of correct, static explanations. Using analogies not only promoted profound understanding of abstract concepts, but also helped students overcome their misconceptions. However, the analogies used in teaching were not planned carefully; they were used spontaneously, with less-than-optimal results. By planning and selecting the right analogy, the role of analogy can achieve optimal results. It is therefore important to map analogies onto abstract concepts in physics learning.
Maximum attainable power density and wall load in tokamaks underlying reactor relevant constraints
Borrass, K.; Buende, R.
1979-09-01
The characteristic data of tokamaks optimized with respect to their power density or wall load are determined. Reactor relevant constraints are imposed, such as a fixed plant net power output, a fixed blanket thickness and the dependence of the maximum toroidal field on the geometry and conductor material. The impact of finite burn times is considered. Various scaling laws of the toroidal beta with the aspect ratio are discussed. (orig.)
Computational approaches to analogical reasoning current trends
Richard, Gilles
2014-01-01
Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...
Structure problems in the analog computation
Braffort, P.L.
1957-01-01
Recent mathematical developments have shown the importance of elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are put in evidence and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters and function generators, are also studied. The analog method gives only functions of the variable time as results of its computations. But the course of computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structure as analog computation. The structure analysis permits fruitful comparisons between the several domains of applied mathematics and suggests important new domains of application for the analog method. (M.P.)
Indigenous Fixed Nitrogen on Mars: Implications for Habitability
Stern, J. C.; Sutter, B.; Navarro-Gonzalez, R.; McKay, C. P.; Freissinet, C.; Archer, D., Jr.; Eigenbrode, J. L.; Mahaffy, P. R.; Conrad, P. G.
2015-12-01
Nitrate has been detected in Mars surface sediments and aeolian deposits by the Sample Analysis at Mars (SAM) instrument on the Mars Science Laboratory Curiosity rover (Stern et al., 2015). This detection is significant because fixed nitrogen is necessary for life, a requirement that drove the evolution of N-fixing metabolism in life on Earth. The question remains as to the extent to which a primitive N cycle ever developed on Mars, and whether N is currently being deposited on the martian surface at a non-negligible rate. It is also necessary to consider processes that could recycle oxidized N back into the atmosphere, and how these processes may have changed the soil inventory of N over time. The abundance of fixed nitrogen detected as NO from thermal decomposition of nitrate is consistent with both delivery of nitrate via impact generated thermal shock early in martian history and dry deposition from photochemistry of thermospheric NO, occurring in the present. Processes that could recycle N back into the atmosphere may include nitrate reduction by Fe(II) in aqueous environments on early Mars, impact decomposition, and/or UV photolysis. In order to better understand the history of nitrogen fixation on Mars, we look to cycling of N in Mars analog environments on Earth such as the Atacama Desert and the Dry Valleys of Antarctica. In particular, we examine the ratio of nitrate to perchlorate (NO3-/ClO4-) in these areas compared to those calculated from data acquired on Mars.
Generating explanations via analogical comparison.
Hoyos, Christian; Gentner, Dedre
2017-10-01
Generating explanations can be highly effective in promoting learning in both adults and children. Our interest is in the mechanisms that underlie this effect and in whether and how they operate in early learning. In adult reasoning, explanation may call on many subprocesses-including comparison, counterfactual reasoning, and reasoning by exclusion; but it is unlikely that all these processes are available to young children. We propose that one process that may serve both children and adults is comparison. In this study, we asked whether children would use the results of a comparison experience when asked to explain why a model skyscraper was stable. We focused on a challenging principle-that diagonal cross-bracing lends stability to physical structures (Gentner et al., Cognitive Science, 40, 224-240, 2016). Six-year-olds either received no training or interacted with model skyscrapers in one of three different conditions, designed to vary in their potential to invite and support comparison. In the Single Model condition, children interacted with a single braced model. In the comparison conditions (Low Alignability and High Alignability), children compared braced and unbraced models. Following experience with the models, children were asked to explain why the braced model was stable. They then received two transfer tasks. We found that children who received highly alignable pairs were most likely to (a) produce brace-based explanations and (b) transfer the brace principle to a dissimilar context. This provides evidence that children can benefit from analogical comparison in generating explanations and also suggests limitations on this ability.
Maximum entropy and Bayesian methods
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
Adhesives for fixed orthodontic brackets.
Mandall, N A; Millett, D T; Mattick, C R; Hickman, J; Macfarlane, T V; Worthington, H V
2003-01-01
Bonding of orthodontic brackets to teeth is important to enable effective and efficient treatment with fixed appliances. The problem is bracket failure during treatment, which increases operator chairside time and lengthens treatment time. A prolonged treatment is likely to increase the oral health risks of orthodontic treatment with fixed appliances, one of which is irreversible enamel decalcification. To evaluate the effectiveness of different orthodontic adhesives for bonding. Electronic databases: the Cochrane Oral Health Group's Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE. Date of most recent searches: August 2002 (CENTRAL) (The Cochrane Library Issue 2, 2002). Trials were selected if they met the following criteria: randomised controlled trials (RCTs) and controlled clinical trials (CCTs) comparing two different adhesive groups. Participants were patients with fixed orthodontic appliances. The interventions were adhesives that bonded stainless steel brackets to all teeth except the molars. The primary outcome was debond or bracket failure. Data were recorded on decalcification as a secondary outcome, if present. Information regarding methods, participants, interventions, outcome measures and results was extracted in duplicate by pairs of reviewers (Nicky Mandall (NM) and Rye Mattick (CRM); Declan Millett (DTM) and Joy Hickman (JH2)). Since the data were not presented in a form that was amenable to meta-analysis, the results of the review are presented in narrative form only. Three trials satisfied the inclusion criteria. A chemical cured composite was compared with a light cure composite (one trial), a conventional glass ionomer cement (one trial) and a polyacid-modified resin composite (compomer) (one trial). The quality of the trial reports was generally poor. It is difficult to draw any conclusions from this review, however, suggestions are made for methods of improving future research involving
BRST gauge fixing and regularization
Damgaard, P.H.; Jonghe, F. de; Sollacher, R.
1995-05-01
In the presence of consistent regulators, the standard procedure of BRST gauge fixing (or moving from one gauge to another) can require non-trivial modifications. These modifications occur at the quantum level, and gauges exist which are only well-defined when quantum mechanical modifications are correctly taken into account. We illustrate how this phenomenon manifests itself in the solvable case of two-dimensional bosonization in the path-integral formalism. As a by-product, we show how to derive smooth bosonization in Batalin-Vilkovisky Lagrangian BRST quantization. (orig.)
Vestbo J
2012-09-01
Jørgen Vestbo, University of Manchester, Manchester, UK. I read with interest the paper entitled "Diagnosis of airway obstruction in the elderly: contribution of the SARA study" by Sorino et al in a recent issue of this journal.1 Being involved in the Global Initiative for Chronic Obstructive Lung Disease (GOLD), it is nice to see the interest sparked by the GOLD strategy document. However, in the paper by Sorino et al, there are a few misunderstandings around GOLD and the fixed ratio (forced expiratory volume in 1 second/forced vital capacity < 0.70) that need clarification. View original paper by Sorino and colleagues.
Frontopolar cortex mediates abstract integration in analogy.
Green, Adam E; Fugelsang, Jonathan A; Kraemer, David J M; Shamosh, Noah A; Dunbar, Kevin N
2006-06-22
Integration of abstractly similar relations during analogical reasoning was investigated using functional magnetic resonance imaging. Activation elicited by an analogical reasoning task that required both complex working memory and integration of abstractly similar relations was compared to activation elicited by a non-analogical task that required complex working memory in the absence of abstract relational integration. A left-sided region of the frontal pole of the brain (BA 9/10) was selectively active for the abstract relational integration component of analogical reasoning. Analogical reasoning also engaged a left-sided network of parieto-frontal regions. Activity in this network during analogical reasoning is hypothesized to reflect categorical alignment of individual component terms that make up analogies. This parieto-frontal network was also engaged by the complex control task, which involved explicit categorization, but not by a simpler control task, which did not involve categorization. We hypothesize that frontopolar cortex mediates abstract relational integration in complex reasoning while parieto-frontal regions mediate working memory processes, including manipulation of terms for the purpose of categorical alignment, that facilitate this integration.
Advances in Analog Circuit Design 2015
Baschirotto, Andrea; Harpe, Pieter
2016-01-01
This book is based on the 18 tutorials presented during the 24th workshop on Advances in Analog Circuit Design. Expert designers present readers with information about a variety of topics at the frontier of analog circuit design, including low-power and energy-efficient analog electronics, with specific contributions focusing on the design of efficient sensor interfaces and low-power RF systems. This book serves as a valuable reference to the state of the art, for anyone involved in analog circuit research and development. · Provides a state-of-the-art reference in analog circuit design, written by experts from industry and academia; · Presents material in a tutorial-based format; · Includes coverage of high-performance analog-to-digital and digital-to-analog converters, integrated circuit design in scaled technologies, and time-domain signal processing.
The force of dissimilar analogies in bioethics.
Mertes, Heidi; Pennings, Guido
2011-04-01
Although analogical reasoning has long been a popular method of reasoning in bioethics, current literature does not sufficiently grasp its variety. We assert that the main shortcoming is the fact that an analogy's value is often judged on the extent of similarity between the source situation and the target situation, while in (bio)ethics, analogies are often used because of certain dissimilarities rather than in spite of them. We make a clear distinction between dissimilarities that aim to reinforce a similar approach in the source situation and the target situation and dissimilarities that aim to undermine or denounce a similar approach. The former kind of dissimilarity offers the analogy more normative force than if there were no dissimilarities present; this is often overlooked by authors who regard all relevant dissimilarities as detrimental to the analogy's strength. Another observation is that an evaluation of the normative force of an analogy cannot be made independently of moral principles or theories. Without these, one cannot select which elements in an analogy are morally relevant nor determine how they should be interpreted.
Algorithms of maximum likelihood data clustering with applications
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
Maximum entropy principle for transportation
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
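For context, the standard constrained maximum-entropy trip-distribution model (the classical doubly-constrained gravity form, not this paper's dependence-coefficient formulation) can be solved by iterative proportional fitting of the balancing factors. A sketch with hypothetical origin/destination totals and costs:

```python
import numpy as np

def max_entropy_trips(O, D, cost, beta, iters=100):
    """Doubly-constrained maximum-entropy trip distribution:
        T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij),
    with balancing factors A, B found by iterative proportional fitting."""
    f = np.exp(-beta * cost)          # deterrence matrix from travel costs
    A = np.ones(len(O))
    for _ in range(iters):
        B = 1.0 / (f.T @ (A * O))     # rescale to match destination totals
        A = 1.0 / (f @ (B * D))       # rescale to match origin totals
    return (A * O)[:, None] * (B * D)[None, :] * f

O = np.array([100.0, 200.0])          # trips produced at each origin (illustrative)
D = np.array([150.0, 150.0])          # trips attracted to each destination
c = np.array([[1.0, 2.0],
              [2.0, 1.0]])            # travel cost between zones
T = max_entropy_trips(O, D, c, beta=0.5)
# After balancing, row sums of T match O and column sums match D.
```

The dependence-coefficient formulation described in the abstract replaces these explicit constraints, but the entropy-maximizing trip matrix above is the baseline it is compared against.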
Sugianto, Agus; Indriani, Andi Marini
2017-11-01
The GTS (Gathering Testing Satellite) platform is an offshore construction of the fixed-pile (fixed platform) type, built to support petroleum exploitation. After fabrication, the platform is moved onto barges and then shipped to the installation site. The moving process is generally done by pulling or pushing, following the construction design determined during planning. When lifting equipment (cranes) is available in the work area, however, the move can be done by lifting, so that the work can be completed more quickly. This analysis considers moving the GTS platform by lifting, a method different from the one generally used for this platform type, and determines the structural reinforcement required so that the construction can be moved by lifting. The working stresses arising during the lift were analyzed and checked against the AISC code standard using the SAP2000 structural analysis program. The results showed that in its existing condition the platform cannot be moved by lifting, because the stress ratio exceeds the maximum allowable value of 0.950 (AISC-ASD89): members 295 and 324 are overstressed, with stress ratios of 0.97 and 0.95, so structural reinforcement is required. Applying box plates to both members reduces the stress ratios to 0.78 for member 295 and 0.77 for member 324. These results indicate that, with this reinforcement, the construction qualifies to be moved by lifting.
Can mushrooms fix atmospheric nitrogen?
Unknown
...culation was maintained as a control. At maximum mycelial colonization by the ... cant increase in nitrogen concentration were observed in the inoculated cultures compared to the controls. The mycelial weight reduction could be ... ing of Belgian Administration for Development Corporation (BADC) during that period were ...
Fixed Target Collisions at STAR
Meehan, Kathryn C.
2016-12-15
The RHIC Beam Energy Scan (BES) program was proposed to look for the turn-off of signatures of the quark gluon plasma (QGP), search for a possible QCD critical point, and study the nature of the phase transition between hadronic and partonic matter. Previous results have been used to claim that the onset of deconfinement occurs at a center-of-mass energy of 7 GeV. Data from lower energies are needed to test if this onset occurs. The goal of the STAR Fixed-Target Program is to extend the collision energy range in BES II to energies that are likely below the onset of deconfinement. Currently, STAR has inserted a gold target into the beam pipe and conducted test runs at center-of-mass energies of 3.9 and 4.5 GeV. Tests have been done with both Au and Al beams. First physics results from a Coulomb potential analysis of Au + Au fixed-target collisions are presented and are found to be consistent with results from previous experiments. Furthermore, the Coulomb potential, which is sensitive to the Z of the projectile and degree of baryonic stopping, will be compared to published results from the AGS.
Utilization of nitrogen fixing trees
Brewbaker, J.L.; Beldt, R. van den; MacDicken, K.; Budowski, G.; Kass, D.C.L.; Russo, R.O.; Escalante, G.; Herrera, R.; Aranguren, J.; Arkcoll, D.B.; Doebereinger, J. (cord.)
1983-01-01
Six papers from the symposium are noted. Brewbaker, J.L., Beldt, R. van den, MacDicken, K. Fuelwood uses and properties of nitrogen-fixing trees, pp 193-204, (Refs. 15). Includes a list of 35 nitrogen-fixing trees of high fuelwood value. Budowski, G.; Kass, D.C.L.; Russo, R.O. Leguminous trees for shade, pp 205-222, (Refs. 68). Escalante, G., Herrera, R., Aranguren, J.; Nitrogen fixation in shade trees (Erythrina poeppigiana) in cocoa plantations in northern Venezuela, pp 223-230, (Refs. 13). Arkcoll, D.B.; Some leguminous trees providing useful fruits in the North of Brazil, pp 235-239, (Refs. 13). This paper deals with Parkia platycephala, Pentaclethra macroloba, Swartzia sp., Cassia leiandra, Hymenaea courbaril, Dipteryx odorata, Inga edulis, I. macrophylla, and I. cinnamonea. Baggio, A.J.; Possibilities of the use of Gliricidia sepium in agroforestry systems in Brazil, pp 241-243; (Refs. 15). Seiffert, N.F.; Biological nitrogen and protein production of Leucaena cultivars grown to supplement the nutrition of ruminants, pp 245-249, (Refs. 14). Leucaena leucocephala cv. Peru, L. campina grande (L. leucocephala), and L. cunningham (L. leucocephalae) were promising for use as browse by beef cattle in central Brazil.
Fixed-Target Electron Accelerators
Brooks, William K.
2001-01-01
A tremendous amount of scientific insight has been garnered over the past half-century by using particle accelerators to study physical systems of sub-atomic dimensions. These giant instruments begin with particles at rest, then greatly increase their energy of motion, forming a narrow trajectory or beam of particles. In fixed-target accelerators, the particle beam impacts upon a stationary sample or target which contains or produces the sub-atomic system being studied. This is in distinction to colliders, where two beams are produced and are steered into each other so that their constituent particles can collide. The acceleration process always relies on the particle being accelerated having an electric charge; however, both the details of producing the beam and the classes of scientific investigations possible vary widely with the specific type of particle being accelerated. This article discusses fixed-target accelerators which produce beams of electrons, the lightest charged particle. As detailed in the report, the beam energy has a close connection with the size of the physical system studied. Here a useful unit of energy is a GeV, i.e., a giga electron-volt. (One GeV, the energy an electron would have if accelerated through a billion volts, is equal to 1.6 × 10⁻¹⁰ joules.) To study systems on a distance scale much smaller than an atomic nucleus requires beam energies ranging from a few GeV up to hundreds of GeV and more
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3 × 10⁻⁶ g/mL, which translates to salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
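On trees, the Sankoff dynamic program mentioned above computes the optimum parsimony score for an arbitrary substitution-cost matrix by propagating per-state costs from the leaves to the root. A minimal sketch (the tree representation and node names are illustrative, and the network extension with reticulate vertices is omitted):

```python
def sankoff(tree, root, leaf_state, states, cost):
    """Sankoff small-parsimony score on a rooted tree.
    tree: dict mapping each node to its list of children ([] for leaves)
    leaf_state: observed character state at each leaf
    cost[a][b]: substitution cost for a -> b along an edge (may be unequal)
    Returns the minimum total substitution cost over ancestral assignments."""
    INF = float("inf")

    def scores(node):
        # s[a] = cheapest explanation of node's subtree if node has state a
        if not tree[node]:                      # leaf: state is fixed
            return {a: (0 if leaf_state[node] == a else INF) for a in states}
        s = dict.fromkeys(states, 0)
        for child in tree[node]:
            cs = scores(child)
            for a in states:
                s[a] += min(cost[a][b] + cs[b] for b in states)
        return s

    return min(scores(root).values())

# Toy example: with unit costs, Sankoff reduces to Fitch parsimony.
tree = {"r": ["x", "y"], "x": ["A", "B"], "y": ["C", "D"],
        "A": [], "B": [], "C": [], "D": []}
leaf_state = {"A": 0, "B": 1, "C": 1, "D": 1}
states = [0, 1]
unit = {a: {b: 0 if a == b else 1 for b in states} for a in states}
print(sankoff(tree, "r", leaf_state, states, unit))  # → 1 (one change suffices)
```

The network parsimony score defined in the paper sums the same per-edge substitution costs, with the extra bookkeeping of forbidding conflicting assignments at reticulate vertices.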
Improved Landau gauge fixing and discretisation errors
Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.
2000-01-01
Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition
Anaerobic treatment of winery wastewater in fixed bed reactors.
Ganesh, Rangaraj; Rajinikanth, Rajagopal; Thanikal, Joseph V; Ramanujam, Ramamoorty Alwar; Torrijos, Michel
2010-06-01
The treatment of winery wastewater in three upflow anaerobic fixed-bed reactors (S9, S30 and S40) with low-density floating supports of varying size and specific surface area was investigated. A maximum OLR of 42 g/l day with 80 +/- 0.5% removal efficiency was attained in S9, which had the supports with the highest specific surface area. It was found that the efficiency of the reactors increased with decreasing size and increasing specific surface area of the support media. Total biomass accumulation in the reactors was also found to vary as a function of the specific surface area and size of the support medium. The Stover-Kincannon kinetic model satisfactorily predicted the performance of the reactors. The maximum removal rate constant (U(max)) was 161.3, 99.0 and 77.5 g/l day and the saturation value constant (K(B)) was 162.0, 99.5 and 78.0 g/l day for S9, S30 and S40, respectively. Due to their higher biomass retention potential, the supports used in this study offer great promise as media in anaerobic fixed-bed reactors. Anaerobic fixed-bed reactors with these supports can be applied as high-rate systems for the treatment of large volumes of wastewaters typically containing readily biodegradable organics, such as winery wastewater.
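The Stover-Kincannon model cited above predicts the substrate removal rate as a saturation function of the organic loading rate (OLR). A quick sketch using the reported S9 constants, which reproduces the roughly 80% removal efficiency at the maximum OLR (variable names are mine, not the paper's):

```python
def stover_kincannon_removal(olr, u_max, k_b):
    """Substrate removal rate (g/l day) predicted by the Stover-Kincannon model."""
    return u_max * olr / (k_b + olr)

# Reactor S9 from the study: U(max) = 161.3, K(B) = 162.0, OLR = 42 g/l day
removal = stover_kincannon_removal(42.0, 161.3, 162.0)
efficiency = removal / 42.0   # fraction of the applied load removed, ~0.79
```

The predicted efficiency of about 79% is consistent with the 80 +/- 0.5% measured for S9.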
Selective termination, fetal reduction and analogical reasoning.
Pennings, G
2013-06-01
Analogical reasoning is a basic method in bioethics. Its main purpose is to transfer the rule from an existing or known situation to a new and problematic situation. This commentary applies the lifeboat analogy to the context of selective termination and fetal reduction. It turns out that the analogy is only partially helpful as the main principle in the case of selective termination is the procreative beneficence principle. However, the wide person-affecting form of this principle doubly justifies selective termination: i.e. one prevents the harm caused by the birth of an affected child and one increases the life chances of the remaining fetuses. I conclude, however, that all analogies are basically flawed since they assume that fetuses as such have interests. I argue that fetuses only have interests to the extent that they are potential future persons. Copyright © 2013 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
An Electrical Analog Computer for Poets
Bruels, Mark C.
1972-01-01
Nonphysics majors are presented with a direct current experiment beyond Ohm's law and the series and parallel laws. This involves the construction of an analog computer from common rheostats and student-assembled voltmeters. (Author/TS)
Pentagastrin analogs containing α-aminooxy acids
Balaspiri, L.; Kovacs, L.; Kovacs, K.; Varga, L.; Varro, V.; Schoen, I.; Kisfaludy, L.
1982-01-01
Two 14C-labelled pentagastrin analogs of different specific radioactivities, containing α-aminooxy acids, have been synthesised to study their biological effects in the gastro-intestinal tract. (U.K.)
Quantum States Transfer by Analogous Bell States
Mei Di; Li Chong; Yang Guohui; Song Heshan
2008-01-01
Transmitting quantum states through channels of analogous Bell states is studied in this paper. We analyze the transmission process, construct the probabilistic unitary operator, and obtain the largest probability of successfully transferring the quantum state.
Modelling information flow along the human connectome using maximum flow.
Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung
2018-01-01
The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
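The maximum-flow quantity proposed above can be computed with any standard algorithm once connection strengths are treated as edge capacities. A small self-contained Edmonds-Karp sketch on a hypothetical four-region network (region names and capacities are made up for illustration):

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
    flow = defaultdict(lambda: defaultdict(int))

    def shortest_augmenting_path():
        parent = {source: None}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in capacity[u]:
                if v not in parent and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None                      # no residual path left

    total = 0
    while (parent := shortest_augmenting_path()) is not None:
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:     # bottleneck residual capacity on the path
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while parent[v] is not None:     # augment forward and residual edges
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

def add_edge(capacity, u, v, c):
    capacity[u][v] += c
    capacity[v][u] += 0                  # make the residual edge reachable

# Hypothetical regions A..D with connection strengths as capacities
capacity = defaultdict(lambda: defaultdict(int))
for u, v, c in [("A", "B", 3), ("A", "C", 2), ("B", "D", 2),
                ("C", "D", 3), ("B", "C", 1)]:
    add_edge(capacity, u, v, c)
```

Unlike a shortest-path measure, the value returned for A to D uses all three paths through the toy network.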
Hu, Shun-Wei; Chen, Shushi
2017-01-01
The large-scale simultaneous extraction and concentration of aqueous solutions of triazine analogs, and aflatoxins, through a hydrocarbon-based membrane (e.g., polyethylene, polyethylene/polypropylene copolymer) under ambient temperature and atmospheric pressure is reported. The subsequent adsorption of analyte in the extraction chamber over the lignin-modified silica gel facilitates the process by reducing the operating time. The maximum adsorption capacity values for triazine analogs and af...
Adhesives for fixed orthodontic brackets.
Mandall, Nicky A; Hickman, Joy; Macfarlane, Tatiana V; Mattick, Rye Cr; Millett, Declan T; Worthington, Helen V
2018-04-09
Bonding of orthodontic brackets to teeth is important to enable effective and efficient treatment with fixed appliances. The problem is bracket failure during treatment, which increases operator chairside time and lengthens treatment time. A prolonged treatment is likely to increase the oral health risks of orthodontic treatment with fixed appliances, one of which is irreversible enamel decalcification. This is an update of the Cochrane Review first published in 2003. A new full search was conducted on 26 September 2017 but no new studies were identified. We have only updated the search methods section in this new version. The conclusions of this Cochrane Review remain the same. To evaluate the effects of different orthodontic adhesives for bonding. Cochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 26 September 2017), the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 8) in the Cochrane Library (searched 26 September 2017), MEDLINE Ovid (1946 to 26 September 2017), and Embase Ovid (1980 to 26 September 2017). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. Trials were selected if they met the following criteria: randomised controlled trials (RCTs) and controlled clinical trials (CCTs) comparing two different adhesive groups. Participants were patients with fixed orthodontic appliances. The interventions were adhesives that bonded stainless steel brackets to all teeth except the molars. The primary outcome was debond or bracket failure. Data were recorded on decalcification as a secondary outcome, if present. Information regarding methods, participants, interventions, outcome measures and results were extracted in
High-frequency analog integrated circuit design
1995-01-01
To learn more about designing analog integrated circuits (ICs) at microwave frequencies using GaAs materials, turn to this text and reference. It addresses GaAs MESFET-based IC processing, and describes the newfound ability to apply silicon analog design techniques to reliable GaAs materials and devices which, until now, was only available through technical papers scattered throughout hundreds of articles in dozens of professional journals.
Emergent Explorations: Analog and Digital Scripting
Worden, Alexander
2011-01-01
This book documents an exploration of emergent and linear modes of defining space, form, and structure. The thesis highlights a dialog between analog and digital modeling techniques in concept and project development. It identifies that analog modeling techniques, coupled with judgment, can be used to develop complex forms. The thesis project employs critical judgment and the textile technique of crochet as a vehicle to generate form. Crochet lends itself to this investigation because it ...
An analog integrated circuit design laboratory
Mondragon-Torres, A.F.; Mayhugh, Jr.; Pineda de Gyvez, J.; Silva-Martinez, J.; Sanchez-Sinencio, E.
2003-01-01
We present the structure of an analog integrated circuit design laboratory for instruction at both the senior undergraduate and entry graduate levels. The teaching material includes: a laboratory manual with analog circuit design theory, pre-laboratory exercises and circuit design specifications; a reference web page with step-by-step instructions and examples; the use of mathematical tools for automation and analysis; and state-of-the-art CAD design tools in use by industry. Upon completion of the ...
Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space
Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf
2012-01-01
(n log n) in the algebraic computation tree model and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ time algorithm to d...... time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L1 plane....
Children's analogical reasoning about natural phenomena.
Pauen, S; Wilkening, F
1997-10-01
This report investigates children's analogical reasoning in a physics task, using an analogy generated by the children rather than by the experimenter. A total of 127 elementary school children took part in three related studies. Children learned to predict the behavior of a balance scale. Later, they were asked to solve a force interaction problem. Two versions of the balance scale training were devised: version A suggested an incorrect solution to the target problem (negative analogy), and version B suggested a correct solution to the target problem (positive analogy). In Study 1, 9- to 10-year-olds showed spontaneous transfer in both training conditions. In Study 2, 7-year-olds did not show any transfer in the positive analogy condition. Study 3 revealed that the lack of transfer in younger children was not due to a failure either to notice the analogy or to perform the mapping. Instead, 7-year-olds transferred only selected aspects of the correct solution. Copyright 1997 Academic Press.
Working memory predicts children's analogical reasoning.
Simms, Nina K; Frausel, Rebecca R; Richland, Lindsey E
2018-02-01
Analogical reasoning is the cognitive skill of drawing relationships between representations, often between prior knowledge and new representations, that allows for bootstrapping cognitive and language development. Analogical reasoning proficiency develops substantially during childhood, although the mechanisms underlying this development have been debated, with developing cognitive resources as one proposed mechanism. We explored the role of executive function (EF) in supporting children's analogical reasoning development, with the goal of determining whether predicted aspects of EF were related to analogical development at the level of individual differences. We assessed 5- to 11-year-old children's working memory, inhibitory control, and cognitive flexibility using measures from the National Institutes of Health Toolbox Cognition battery. Individual differences in children's working memory best predicted performance on an analogical mapping task, even when controlling for age, suggesting a fundamental interrelationship between analogical reasoning and working memory development. These findings underscore the need to consider cognitive capacities in comprehensive theories of children's reasoning development. Copyright © 2017 Elsevier Inc. All rights reserved.
29 CFR 778.322 - Reducing the fixed workweek for which a salary is paid.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Reducing the fixed workweek for which a salary is paid. 778... workweek for which a salary is paid. If an employee whose maximum hours standard is 40 hours was hired at a salary of $200 for a fixed workweek of 40 hours, his regular rate at the time of hiring was $5 per hour...
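The arithmetic in this regulation (a $200 salary for a fixed 40-hour workweek yields a $5 per hour regular rate, with overtime due at one and one-half times that rate) can be sketched as follows. This is an illustration of the calculation only, not legal guidance, and the function names are mine:

```python
def regular_rate(weekly_salary, fixed_workweek_hours):
    """Regular hourly rate when a salary covers a fixed workweek."""
    return weekly_salary / fixed_workweek_hours

def weekly_pay(weekly_salary, fixed_workweek_hours, hours_worked):
    """Salary plus time-and-a-half for hours beyond the fixed workweek."""
    rate = regular_rate(weekly_salary, fixed_workweek_hours)
    overtime_hours = max(0, hours_worked - fixed_workweek_hours)
    return weekly_salary + 1.5 * rate * overtime_hours
```

For the employee in the example, a 44-hour week would add four overtime hours at $7.50 each.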
Two-dimensional maximum entropy image restoration
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures.
Maximum one-shot dissipated work from Rényi divergences
Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko
2018-05-01
Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
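The order-infinity Rényi divergence that appears in these bounds has a simple closed form, D_∞(p‖q) = log max_i (p_i/q_i), i.e. the log of the worst-case likelihood ratio. A minimal numerical sketch (distributions are made-up examples):

```python
import math

def renyi_divergence_infinity(p, q):
    """Order-infinity Renyi divergence: log of the largest ratio p_i / q_i."""
    return math.log(max(pi / qi for pi, qi in zip(p, q) if pi > 0))

d = renyi_divergence_infinity([0.5, 0.5], [0.25, 0.75])  # ratios are 2 and 2/3
```

Identical distributions give zero, and a single outcome that is much likelier under p than under q dominates the value, which is why D_∞ captures a worst-case (maximum) dissipated work rather than an average.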
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
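The Levinson algorithm referenced above solves the Toeplitz normal equations for the prediction-error filter in O(n²) operations. A compact Levinson-Durbin sketch operating directly on autocorrelation values (the seismological windowing and deconvolution details of the paper are omitted, and the variable names are mine):

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz system for a prediction-error filter.

    r: autocorrelation values r[0..order]. Returns (filter a, error power e)
    with a[0] = 1; the reflection coefficient k stays below 1 in magnitude,
    which is the stability property noted in the abstract.
    """
    a = [1.0] + [0.0] * order
    e = r[0]
    for m in range(1, order + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / e                        # reflection coefficient, |k| < 1
        a_next = a[:]
        for j in range(1, m + 1):
            a_next[j] = a[j] + k * a[m - j]
        a, e = a_next, e * (1.0 - k * k)
    return a, e

# An AR(1) process with coefficient 0.5 has autocorrelation proportional to 0.5**k,
# so the order-2 filter should recover a = [1, -0.5, 0].
a, e = levinson_durbin([1.0, 0.5, 0.25], 2)
```

The negated tail of `a` gives the linear-prediction coefficients used to extrapolate the data outside the window.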
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique for maximizing the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power; these quantities are found by maximizing the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
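The maximization described above (finding where the derivative of P = VI vanishes) can also be carried out numerically. A sketch using a hypothetical single-diode panel model with made-up parameters, not the project's actual data:

```python
import math

def panel_current(v, i_sc=5.0, i_0=1e-9, v_t=1.5):
    """Hypothetical single-diode model: photocurrent minus diode loss."""
    return i_sc - i_0 * (math.exp(v / v_t) - 1.0)

def maximum_power(v_max=30.0, steps=30000):
    """Scan the voltage range; return (voltage, current, power) at the peak of P = V*I."""
    best = (0.0, panel_current(0.0), 0.0)
    for n in range(1, steps + 1):
        v = v_max * n / steps
        i = panel_current(v)
        if v * i > best[2]:
            best = (v, i, v * i)
    return best

v_mp, i_mp, p_mp = maximum_power()
```

With these assumed parameters the power peaks at an interior voltage near the knee of the I-V curve, the same point the analytic differentiation locates.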
Anti-Plasmodium activity of ceramide analogs
Gatt Shimon
2004-12-01
Background Sphingolipids are key molecules regulating many essential functions in eukaryotic cells, and ceramide plays a central role in sphingolipid metabolism. Sphingolipid metabolism occurs in the intraerythrocytic stages of Plasmodium falciparum and is associated with essential biological processes. It constitutes an attractive and potential target for the development of new antimalarial drugs. Methods The anti-Plasmodium activity of a series of ceramide analogs containing different linkages (amide, methylene or thiourea) between the fatty acid part of ceramide and the sphingoid core was investigated in culture and compared to the sphingolipid analog PPMP (d,l-threo-1-phenyl-2-palmitoylamino-3-morpholino-1-propanol). This analog is known to inhibit the parasite sphingomyelin synthase activity and block parasite development by preventing the formation of the tubovesicular network that extends from the parasitophorous vacuole to the red cell membrane and delivers essential extracellular nutrients to the parasite. Results Analogs containing a methylene linkage showed considerably higher anti-Plasmodium activity (IC50 in the low nanomolar range) than PPMP and their counterparts with a natural amide linkage (IC50 in the micromolar range). The methylene analogs blocked P. falciparum development irreversibly, leading to parasite eradication, in contrast to PPMP, whose effect is cytostatic. A high sensitivity of action towards the parasite was observed when compared to their effect on human MRC-5 cell growth. The toxicity towards parasites did not correlate with the inhibition by methylene analogs of the parasite sphingomyelin synthase activity and the tubovesicular network formation, indicating that this enzyme is not their primary target. Conclusions Ceramide analogs were shown to be potent inhibitors of P. falciparum growth in culture. Interestingly, the nature of the linkage between the fatty acid part and the
The future of vitamin D analogs
Carlien eLeyssens
2014-04-01
The active form of vitamin D3, 1,25-dihydroxyvitamin D3, is a major regulator of bone and calcium homeostasis. In addition, this hormone also inhibits the proliferation and stimulates the differentiation of normal as well as malignant cells. Supraphysiological doses of 1,25-dihydroxyvitamin D3 are required to reduce cancer cell proliferation. However, these doses lead in vivo to calcemic side effects such as hypercalcemia and hypercalciuria. During the last 25 years, many structural analogs of 1,25-dihydroxyvitamin D3 have been synthesized by the introduction of chemical modifications in the A-ring, central CD-ring region or side chain of 1,25-dihydroxyvitamin D3, in the hope of finding molecules with a clear dissociation between the beneficial antiproliferative effects and the adverse calcemic side effects. One example of such an analog with a good dissociation ratio is calcipotriol (Daivonex®), which is clinically used to treat the hyperproliferative skin disease psoriasis. Other vitamin D analogs have been clinically approved for the treatment of osteoporosis or secondary hyperparathyroidism. No vitamin D analog is currently used in the clinic for the treatment of cancer, although several analogs have been shown to be potent drugs in animal models of cancer. Omics studies as well as in vitro cell biological experiments have unraveled basic mechanisms involved in the antineoplastic effects of vitamin D and its analogs. 1,25-dihydroxyvitamin D3 and its analogs act in a cell type- and tissue-specific manner. Moreover, a blockade of the transition from the G0/G1 towards the S phase of the cell cycle, induction of apoptosis, inhibition of migration and invasion of tumor cells, together with effects on angiogenesis and inflammation, have been implicated in the pleiotropic effects of 1,25-dihydroxyvitamin D3 and its analogs. In this review we give an overview of the action of vitamin D analogs in tumor cells and look forward to how these compounds could be introduced in the
In situ measurement of fixed charge evolution at silicon surfaces during atomic layer deposition
Ju, Ling; Watt, Morgan R.; Strandwitz, Nicholas C.
2015-01-01
Interfacial fixed charge or interfacial dipoles are present at many semiconductor-dielectric interfaces and have important effects upon device behavior, yet the chemical origins of these electrostatic phenomena are not fully understood. We report the measurement of changes in Si channel conduction in situ during atomic layer deposition (ALD) of aluminum oxide using trimethylaluminum and water to probe changes in surface electrostatics. Current-voltage data were acquired continually before, during, and after the self-limiting chemical reactions that result in film growth. Our measurements indicated an increase in conductance on p-type samples with p+ ohmic contacts and a decrease in conductance on analogous n-type samples. Further, p+-contacted samples with n-type channels exhibited an increase in measured current and n+-contacted p-type samples exhibited a decrease in current under applied voltage. Device physics simulations, where a fixed surface charge was parameterized on the channel surface, connect the surface charge to changes in current-voltage behavior. The simulations and analogous analytical relationships for near-surface conductance were used to explain the experimental results. Specifically, the changes in current-voltage behavior can be attributed to the formation of a fixed negative charge or the modification of a surface dipole upon chemisorption of trimethylaluminum. These measurements allow for the observation of fixed charge or dipole formation during ALD and provide further insight into the electrostatic behavior at semiconductor-dielectric interfaces during film nucleation.
Natural Analogs for the Unsaturated Zone
A. Simmons; A. Unger; M. Murrell
2000-03-08
The purpose of this Analysis/Model Report (AMR) is to document natural and anthropogenic (human-induced) analog sites and processes that are applicable to flow and transport processes expected to occur at the potential Yucca Mountain repository, in order to build increased confidence in modeling processes of Unsaturated Zone (UZ) flow and transport. This AMR was prepared in accordance with ''AMR Development Plan for U0135, Natural Analogs for the UZ'' (CRWMS 1999a). Knowledge from analog sites and processes is used as corroborating information to test and build confidence in flow and transport models of Yucca Mountain, Nevada. This AMR supports the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR) and the Yucca Mountain Site Description. The objectives of this AMR are to test and build confidence in the representation of UZ processes in numerical models utilized in the UZ Flow and Transport Model. This is accomplished by: (1) applying data from Box Canyon, Idaho in simulations of UZ flow using the same methodologies incorporated in the Yucca Mountain UZ Flow and Transport Model to assess the fracture-matrix interaction conceptual model; (2) providing a preliminary basis for analysis of radionuclide transport at Pena Blanca, Mexico as an analog of radionuclide transport at Yucca Mountain; and (3) synthesizing existing information from natural analog studies to provide corroborating evidence for representation of ambient and thermally coupled UZ flow and transport processes in the UZ Model.
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
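The move from Lorenz's single best analog to a kernel-weighted ensemble of analogs can be illustrated in a few lines. A toy sketch with a Gaussian similarity kernel on scalar states (the delay-coordinate and directional-dependence features of the paper are omitted, and the names are mine):

```python
import math

def kernel_analog_forecast(history, x0, lead, bandwidth):
    """Forecast `lead` steps ahead as a kernel-weighted average over all
    historical analogs of the initial state x0. Lorenz's single best analog
    is recovered in the bandwidth -> 0 limit."""
    num = den = 0.0
    for t in range(len(history) - lead):
        w = math.exp(-((history[t] - x0) ** 2) / (2.0 * bandwidth ** 2))
        num += w * history[t + lead]        # what happened `lead` steps later
        den += w
    return num / den

# Toy deterministic record: each state is double its predecessor, so the
# analog of x0 = 2.0 (at t = 1) should forecast a value near 4.0.
record = [1.0, 2.0, 4.0, 8.0, 16.0]
forecast = kernel_analog_forecast(record, 2.0, 1, 0.05)
```

With a narrow bandwidth the weight concentrates on the closest historical analog; widening it trades bias for variance, which is where the paper's dynamics-adapted kernels improve on a naive Euclidean distance.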
Analogy, higher order thinking, and education.
Richland, Lindsey Engle; Simms, Nina
2015-01-01
Analogical reasoning, the ability to understand phenomena as systems of structured relationships that can be aligned, compared, and mapped together, plays a fundamental role in the technology rich, increasingly globalized educational climate of the 21st century. Flexible, conceptual thinking is prioritized in this view of education, and schools are emphasizing 'higher order thinking', rather than memorization of a canon of key topics. The lack of a cognitively grounded definition for higher order thinking, however, has led to a field of research and practice with little coherence across domains or connection to the large body of cognitive science research on thinking. We review literature on analogy and disciplinary higher order thinking to propose that relational reasoning can be productively considered the cognitive underpinning of higher order thinking. We highlight the utility of this framework for developing insights into practice through a review of mathematics, science, and history educational contexts. In these disciplines, analogy is essential to developing expert-like disciplinary knowledge in which concepts are understood to be systems of relationships that can be connected and flexibly manipulated. At the same time, analogies in education require explicit support to ensure that learners notice the relevance of relational thinking, have adequate processing resources available to mentally hold and manipulate relations, and are able to recognize both the similarities and differences when drawing analogies between systems of relationships. © 2015 John Wiley & Sons, Ltd.
An analog silicon retina with multichip configuration.
Kameda, Seiji; Yagi, Tetsuya
2006-01-01
The neuromorphic silicon retina is a novel analog very large scale integrated circuit that emulates the structure and the function of the retinal neuronal circuit. We fabricated a neuromorphic silicon retina, in which sample/hold circuits were embedded to generate fluctuation-suppressed outputs, in a previous study [1]. The applications of this silicon retina, however, are limited because of a low spatial resolution and computational variability. In this paper, we have fabricated a multichip silicon retina in which the functional network circuits are divided into two chips: the photoreceptor network chip (P chip) and the horizontal cell network chip (H chip). The output images of the P chip are transferred to the H chip as analog voltages through the line-parallel transfer bus. The sample/hold circuits embedded in the P and H chips compensate for the pattern noise generated on the circuits, including the analog communication pathway. Using the multichip silicon retina together with an off-chip differential amplifier, spatial filtering of the image with odd- and even-symmetric orientation-selective receptive fields was carried out in real time. The analog data transfer method in the present multichip silicon retina is useful for designing analog neuromorphic multichip systems that mimic the hierarchical structure of neuronal networks in the visual system.
A fast multichannel analog storage system
Freytag, D.R.
1983-01-01
A Multichannel Analog Storage System based on a commercial 32-channel parallel in/serial out (PISO) analog shift register is described. The basic unit is a single width CAMAC module containing 512 analog cells and the associated logic for data storage and subsequent readout. At sampling rates of up to 30 MHz the signals are strobed directly into the PISO. At higher rates signals are strobed into a fast presampling stage and subsequently transferred in block form into an array of PISOs. Sampling rates of 300 MHz have been achieved with the present device and 1000 MHz are possible with improved signal drivers. The system is well suited for simultaneous handling of many signal channels with moderate numbers of samples in each channel. RMS noise over full scale signal has been measured as 1:3000 (approx. 11 bits). However, nonlinearities in the response and differences in sensitivity of the analog cells require an elaborate calibration system in order to realize 11 bit accuracy for the analog information.
Neurotoxic Alkaloids: Saxitoxin and Its Analogs
Troco K. Mihali
2010-07-01
Saxitoxin (STX) and its 57 analogs are a broad group of natural neurotoxic alkaloids, commonly known as the paralytic shellfish toxins (PSTs). PSTs are the causative agents of paralytic shellfish poisoning (PSP) and are mostly associated with marine dinoflagellates (eukaryotes) and freshwater cyanobacteria (prokaryotes), which form extensive blooms around the world. PST-producing dinoflagellates belong to the genera Alexandrium, Gymnodinium and Pyrodinium, whilst production has been identified in several cyanobacterial genera including Anabaena, Cylindrospermopsis, Aphanizomenon, Planktothrix and Lyngbya. STX and its analogs can be structurally classified into several classes, such as non-sulfated, mono-sulfated, di-sulfated, decarbamoylated and the recently discovered hydrophobic analogs, each with varying levels of toxicity. Biotransformation of the PSTs into other PST analogs has been identified within marine invertebrates, humans and bacteria. An improved understanding of PST transformation into less toxic analogs and degradation, both chemical and enzymatic, will be important for the development of methods for the detoxification of contaminated water supplies and of shellfish destined for consumption. Some PSTs also have demonstrated pharmaceutical potential as a long-term anesthetic in the treatment of anal fissures and for chronic tension-type headache. The recent elucidation of the saxitoxin biosynthetic gene cluster in cyanobacteria and the identification of new PST analogs will present opportunities to further explore the pharmaceutical potential of these intriguing alkaloids.
Fixed telephony evolution at CERN
CERN. Geneva
2015-01-01
The heart of CERN’s telephony infrastructure consists of the Alcatel IP-PBX that links CERN’s fixed line phones, Lync softphones and CERN’s GSM subscribers to low-cost local and international telephony services. The PABX infrastructure also supports the emergency “red telephones” in the LHC tunnel and provides vital services for the Fire and Rescue Service and the CERN Control Centre. Although still reliable, the Alcatel hardware is increasingly costly to maintain and looks outmoded in a market where open-source solutions are becoming dominant. After presenting an overview of the Alcatel PABX and the services it provides, including innovative solutions such as the Closed User Group for our mobile telephony services, we present a possible architecture for a software based system designed to meet tomorrow’s communication needs and describe how the introduction of open-source call routers based on the SIP protocol and Session Border Controllers (SBC) could foster the introduction...
Fixed type incore measuring device
Oda, Naotaka; Ito, Hitoshi; Maeda, Hiroyuki
1998-01-01
The present invention concerns a measuring device using gamma thermometers (GT) to be used in a BWR type reactor. An input switch is inserted near the detection signal input portion of a signal cable connecting the GT with the detection signal input portion of a fixed type incore measuring device, and a loop resistance measuring means is connected to the input switch on the GT side by way of a measurement switch. Upon measuring loop resistance, the GT measuring circuit is switched from the detection signal input portion to the loop resistance measuring means by operating the input switch and the measurement switch, making it possible to confirm the value of the loop resistance. In addition, the voltage drop across the loop resistance is compensated so that accurate measurement values can be confirmed, making it possible to measure GT detection signals accurately. A diagnosing means for diagnosing the state of the GT based on the results of the loop resistance measurement is provided, and the results are reported to an operator. (N.H.)
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Students' Pre- and Post-Teaching Analogical Reasoning when They Draw Their Analogies
Mozzer, Nilmara Braga; Justi, Rosaria
2012-01-01
Analogies are parts of human thought. From them, we can acquire new knowledge or change that which already exists in our cognitive structure. In this sense, understanding the analogical reasoning process becomes an essential condition to understand how we learn. Despite the importance of such an understanding, there is no general agreement in…
Tyurenkov, I N; Borodkina, L E; Bagmetova, V V; Berestovitskaya, V M; Vasil'eva, O S
2016-02-01
GABA analogs containing phenyl (phenibut) or para-chlorophenyl (baclofen) substituents demonstrated nootropic activity in a dose of 20 mg/kg: they improved passive avoidance conditioning, decelerated its natural extinction, and exerted an antiamnestic effect in models of amnesia provoked by scopolamine or electroshock. The tolyl-containing GABA analog (tolibut, 20 mg/kg) exhibited antiamnestic activity only in the model of electroshock-induced amnesia. Baclofen and, to a lesser extent, tolibut alleviated seizures provoked by electroshock, i.e. both agents exerted an anticonvulsant effect. All examined GABA aryl derivatives demonstrated neuroprotective properties in the maximum electroshock model: they shortened the duration of coma and the period of spontaneous motor activity recovery. In addition, these agents decreased the severity of passive avoidance amnesia and behavioral deficit in the open field test in rats exposed to electroshock. The greatest neuroprotective properties were exhibited by the phenyl-containing GABA analog phenibut.
The complete digital workflow in fixed prosthodontics: a systematic review.
Joda, Tim; Zarone, Fernando; Ferrari, Marco
2017-09-19
The continuous development in dental processing ensures new opportunities in the field of fixed prosthodontics in a complete virtual environment without any physical model situations. The aim was to compare fully digitalized workflows to conventional and/or mixed analog-digital workflows for the treatment with tooth-borne or implant-supported fixed reconstructions. A PICO strategy was executed using an electronic (MEDLINE, EMBASE, Google Scholar) plus manual search up to 2016-09-16 focusing on RCTs investigating complete digital workflows in fixed prosthodontics with regard to economics or esthetics or patient-centered outcomes with or without follow-up or survival/success rate analysis as well as complication assessment of at least 1 year under function. The search strategy was assembled from MeSH-Terms and unspecific free-text words: {(("Dental Prosthesis" [MeSH]) OR ("Crowns" [MeSH]) OR ("Dental Prosthesis, Implant-Supported" [MeSH])) OR ((crown) OR (fixed dental prosthesis) OR (fixed reconstruction) OR (dental bridge) OR (implant crown) OR (implant prosthesis) OR (implant restoration) OR (implant reconstruction))} AND {("Computer-Aided Design" [MeSH]) OR ((digital workflow) OR (digital technology) OR (computerized dentistry) OR (intraoral scan) OR (digital impression) OR (scanbody) OR (virtual design) OR (digital design) OR (cad/cam) OR (rapid prototyping) OR (monolithic) OR (full-contour))} AND {("Dental Technology" [MeSH]) OR ((conventional workflow) OR (lost-wax-technique) OR (porcelain-fused-to-metal) OR (PFM) OR (implant impression) OR (hand-layering) OR (veneering) OR (framework))} AND {(("Study, Feasibility" [MeSH]) OR ("Survival" [MeSH]) OR ("Success" [MeSH]) OR ("Economics" [MeSH]) OR ("Costs, Cost Analysis" [MeSH]) OR ("Esthetics, Dental" [MeSH]) OR ("Patient Satisfaction" [MeSH])) OR ((feasibility) OR (efficiency) OR (patient-centered outcome))}. Assessment of risk of bias in selected studies was done at a 'trial level' including random sequence
Common fixed points for weakly compatible maps
Springer Verlag Heidelberg
In 1976, Jungck [4] proved a common fixed point theorem for commuting maps generalizing Banach's fixed point theorem, which states that, 'let (X, d) be a complete metric space. If T satisfies d(Tx,Ty) ≤ kd(x,y) for each x,y ∈ X where 0 ≤ k < 1, then T has a unique fixed point in X'. This theorem has many applications, ...
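The contraction condition quoted above lends itself to a quick numerical illustration; the following is a minimal sketch (not from the paper) iterating a contraction mapping until the unique Banach fixed point is reached:

```python
import math

def banach_iterate(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n); for a contraction T on a complete
    metric space the sequence converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1] (|cos'(x)| = |sin x| <= sin 1 < 1),
# so the iteration converges to the unique solution of cos(x) = x.
fixed = banach_iterate(math.cos, 0.5)
print(round(fixed, 6))  # the Dottie number, ~0.739085
```

Starting from any point of the interval gives the same fixed point, which is exactly the uniqueness part of the theorem.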
Infra-red fixed points in supersymmetry
… and c stands for the color quadratic Casimir of the field. Fixed points arise when R* … or when R* … . The stability of the solutions may be tested by linearizing the system about the fixed points. For the non-trivial fixed points we need to consider the eigenvalues of the stability matrix whose ...
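The stability test described here, linearizing the system about its fixed points and inspecting the eigenvalues of the stability matrix, can be sketched numerically for a generic flow; the two-dimensional system below is a made-up example, not the renormalization-group equations of the paper:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Forward-difference numerical Jacobian of f: R^n -> R^n at x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.zeros((n, n))
    fx = np.asarray(f(x))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J

def is_stable_fixed_point(f, x_star):
    """A fixed point of dx/dt = f(x) is linearly stable when every
    eigenvalue of the Jacobian there has negative real part."""
    eigvals = np.linalg.eigvals(jacobian(f, x_star))
    return bool(np.all(eigvals.real < 0))

# Hypothetical flow f_i(x) = x_i * (x_i - 1) with fixed points (0,0), (1,1):
f = lambda x: np.array([x[0] * (x[0] - 1.0), x[1] * (x[1] - 1.0)])
print(is_stable_fixed_point(f, [0.0, 0.0]))  # True: eigenvalues near -1
print(is_stable_fixed_point(f, [1.0, 1.0]))  # False: eigenvalues near +1
```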
Changes in analogical reasoning in adulthood.
Clark, E; Gardner, M K; Brown, G; Howell, R J
1990-01-01
This study sought to investigate adult intellectual development through an analysis of a particular type of cognitive ability, verbal analogical reasoning. The performance of 60 individuals between the ages of 20 and 79 was compared on 100 verbal analogies. The subjects consisted of six groups of ten individuals each (five males and five females), matched as a group for education and gender. Solution times and error rates served as the dependent measures. Results showed that there was a significant trend for the older subjects (60- and 70-year-olds) to be slower than the young subjects (20-, 30-, 40-, and 50-year-olds), but not necessarily more error prone. These data suggest that verbal analogical reasoning changes with age. Supplemental data demonstrated a change in other abilities as well (i.e., decline in perceptual-motor speed and spatial skill).
Electronic devices for analog signal processing
Rybin, Yu K
2012-01-01
Electronic Devices for Analog Signal Processing is intended for engineers and postgraduates and considers electronic devices applied to process analog signals in instrument making, automation, measurements, and other branches of technology. They perform various transformations of electrical signals: scaling, integration, logarithmic conversion, etc. The need for their deeper study is caused, on the one hand, by the extension of the forms of the input signal and the increasing accuracy and performance of such devices, and on the other hand, by new devices that constantly emerge and are already widely used in practice but about which no information is written in books on electronics. The basic approach of presenting the material in Electronic Devices for Analog Signal Processing can be formulated as follows: study with help from self-education. The book is divided into seven chapters; each chapter contains theoretical material, examples of practical problems, questions and tests. The most difficult questions are marked by a diamon...
Electrical analog of a Josephson junction
Goldman, A.M.
1979-01-01
It is noted that a mathematical description of the phase-coupling of two oscillators synchronized by a phase-lock-loop under the influence of thermal white noise is analogous to that of the phase coupling of two superconductors in a Josephson junction also under the influence of noise. This analogy may be useful in studying threshold instabilities of the Josephson junction in regimes not restricted to the case of large damping. This is of interest because the behavior of the mean voltage near the threshold current can be characterized by critical exponents which resemble those exhibited by an order parameter of a continuous phase transition. As it is possible to couple a collection of oscillators together in a chain, the oscillator analogy may also be useful in exploring the dynamics and statistical mechanics of coupled junctions
On Lovelock analogs of the Riemann tensor
Camanho, Xián O.; Dadhich, Naresh
2016-03-01
It is possible to define an analog of the Riemann tensor for Nth order Lovelock gravity, its characterizing property being that the trace of its Bianchi derivative yields the corresponding analog of the Einstein tensor. Interestingly there exist two parallel but distinct such analogs and the main purpose of this note is to reconcile both formulations. In addition we will introduce a simple tensor identity and use it to show that any pure Lovelock vacuum in odd d=2N+1 dimensions is Lovelock flat, i.e. any vacuum solution of the theory has vanishing Lovelock-Riemann tensor. Further, in the presence of cosmological constant it is the Lovelock-Weyl tensor that vanishes.
Analogical reasoning for reliability analysis based on generic data
Kozin, Igor O
1996-10-01
The paper suggests using the systemic concept 'analogy' for the foundation of an approach to analyze system reliability on the basis of generic data, describing the method of structuring the set that defines analogical models, an approach of transition from the analogical model to a reliability model and a way of obtaining reliability intervals of analogous objects.
Analogy-Enhanced Instruction: Effects on Reasoning Skills in Science
Remigio, Krisette B.; Yangco, Rosanelia T.; Espinosa, Allen A.
2014-01-01
The study examined the reasoning skills of first year high school students after learning general science concepts through analogies. Two intact heterogeneous sections were randomly assigned to Analogy-Enhanced Instruction (AEI) group and Non Analogy-Enhanced (NAEI) group. Various analogies were incorporated in the lessons of the AEI group for…
Perceptions of Rebuttal Analogy: Politeness and Implications for Persuasion.
Whaley, Bryan B.
1997-01-01
States that recent theorizing about the role of analogy in persuasion suggests that "rebuttal" analogy addresses two communicative functions by serving as argument and a method of social attack. Examines message receivers' perceptions of rebuttal analogy and rebuttal analogy users. Finds that participants perceived the communicator using…
Fixed point theorems in CAT(0) spaces and R-trees
Kirk WA
2004-01-01
Full Text Available We show that if U is a bounded open set in a complete CAT(0) space X, and if f: Ū → X is nonexpansive, then f always has a fixed point if there exists p ∈ U such that x ∉ [p, f(x)) for all x ∈ ∂U. It is also shown that if K is a geodesically bounded closed convex subset of a complete R-tree with int K ≠ ∅, and if f: K → X is a continuous mapping for which x ∉ [p, f(x)) for some p ∈ int K and all x ∈ ∂K, then f has a fixed point. It is also noted that a geodesically bounded complete R-tree has the fixed point property for continuous mappings. These latter results are used to obtain variants of the classical fixed edge theorem in graph theory.
Words, Concepts, and the Geometry of Analogy
Stephen McGregor
2016-08-01
Full Text Available This paper presents a geometric approach to the problem of modelling the relationship between words and concepts, focusing in particular on analogical phenomena in language and cognition. Grounded in recent theories regarding geometric conceptual spaces, we begin with an analysis of existing static distributional semantic models and move on to an exploration of a dynamic approach to using high dimensional spaces of word meaning to project subspaces where analogies can potentially be solved in an online, contextualised way. The crucial element of this analysis is the positioning of statistics in a geometric environment replete with opportunities for interpretation.
Analogical reasoning abilities of recovering alcoholics.
Gardner, M K; Clark, E; Bowman, M A; Miller, P J
1989-08-01
This study investigated analogical reasoning abilities of alcoholics who had been abstinent from alcohol for at least 1 year. Their performance was compared to that of nonalcoholic controls matched as a group for education, age, and gender. Solution times and error rates were modeled using a regression model. Results showed a nonsignificant trend for alcoholics to be faster, but more error prone, than controls. The same componential model applied to both groups, and fit them equally well. Although differences have been found in analogical reasoning ability between controls and alcoholics immediately following detoxification, we find no evidence of differences after extended periods of sobriety.
Analogy in systems management: a theoretical inquiry
Silverman, B.G.
1983-11-01
This theoretical analysis of the intuitive and diffuse characteristics of analogical reasoning processes is the first step in a research effort intended to lead to: an understanding of common (and possibly costly) errors, pitfalls, travails, and problem-solving impediments; possible recommendations for improvements to organizational structures, control and coordination processes, and management information flows; and guidelines for a generalized analogical reasoning support framework (e.g., a handbook, a knowledge bank design, and/or even a software package/artificial intelligence program). 233 references.
Analysis of Recurrent Analog Neural Networks
Z. Raida
1998-06-01
Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from opamp neurons, is presented. The analysis, which starts from an approximate model of the operational amplifier, reveals causes of possible non-stable states and makes it possible to determine the convergence properties of the network. Results of the analysis are discussed in order to enable the development of original, robust and fast analog networks. In the analysis, special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.
Analogy Mapping Development for Learning Programming
Sukamto, R. A.; Prabawa, H. W.; Kurniawati, S.
2017-02-01
Programming skill is an important skill for computer science students, yet many computer science students in Indonesia currently lack programming skills and information technology knowledge. This runs contrary to the implementation of the ASEAN Economic Community (AEC) since the end of 2015, which demands qualified workers. This study made an effort to improve programming skills by mapping program code to visual analogies as learning media. The developed media was based on state machine and compiler principles and was implemented in the C programming language. The state of every basic condition in the programs was successfully determined as an analogy visualization.
Design of analog integrated circuits and systems
Laker, Kenneth R
1994-01-01
This text is designed for senior or graduate level courses in analog integrated circuits or design of analog integrated circuits. This book combines consideration of CMOS and bipolar circuits into a unified treatment. Also included are CMOS-bipolar circuits made possible by BiCMOS technology. The text progresses from MOS and bipolar device modelling to simple one and two transistor building block circuits. The final two chapters present a unified coverage of sample-data and continuous-time signal processing systems.
Factors influencing bonding fixed restorations
Medić Vesna
2008-01-01
Full Text Available INTRODUCTION Crown displacement often occurs because the features of tooth preparations do not counteract the forces directed against restorations. OBJECTIVE The purpose of this study was to evaluate the effect of preparation designs on retention and resistance of fixed restorations. METHOD The study was performed on 64 differently sized stainless steel dies. Also, caps used for evaluating retention were made of stainless steel for each die. After cementing the caps on experimental dies, the tensile forces necessary to separate the cemented caps from the dies were measured. Caps made of a silver-palladium alloy, with a slope of 60° to the longitudinal axis formed on the occlusal surface, were used for evaluating resistance. A sudden drop in load pressure recorded by the test machine indicated failure for that cap. RESULTS A significant difference was found between the tensile force required to remove the caps from the dies with different length (p<0.05) and different taper (p<0.01). The greatest retentive strengths (2579.2 N and 2989.8 N) were noticed in experimental dies with the greatest length and smallest taper. No statistically significant (p>0.05) differences were found between tensile loads for caps cemented on dies with different diameter. Although there was an apparent slight increase in resistance values for caps on dies with smaller tapers, the increase in resistance for those preparation designs was not statistically significant. There was a significant difference among the resistance values for caps on dies with different length (p<0.01) and diameter (p<0.05). CONCLUSION In the light of the results obtained, it could be reasonably concluded that retention and resistance of the restoration are in inverse proportion to the convergence angle of the prepared teeth. But, at a constant convergence angle, retention and resistance increase with rising length and diameter.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
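Given a maximum permissible voltage per unit length, the conductor length required for a given SFCL design voltage follows by simple division; a back-of-the-envelope sketch using the figures quoted above (the 1 kV design voltage is a hypothetical example, not from the paper):

```python
# Maximum permissible voltage per unit length at 100 ms quench
# duration, as reported for the three conductors (V/cm):
max_permissible_v_per_cm = {
    "SJTU CC": 0.72,
    "12 mm AMSC CC": 0.52,
    "4 mm AMSC CC": 1.2,
}

def required_length_m(design_voltage_v, v_per_cm):
    """Minimum coated-conductor length (meters) so that the voltage
    per unit length stays at or below the permissible value."""
    return design_voltage_v / v_per_cm / 100.0  # 100 cm per meter

# Hypothetical 1 kV limiting element:
for tape, v_cm in max_permissible_v_per_cm.items():
    print(f"{tape}: {required_length_m(1000.0, v_cm):.1f} m")
```

A higher permissible voltage per centimeter directly translates into a shorter (and cheaper) limiting element.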
High-speed and high-resolution analog-to-digital and digital-to-analog converters
van de Plassche, R.J.
1989-01-01
Analog-to-digital and digital-to-analog converters are important building blocks connecting the analog world of transducers with the digital world of computing, signal processing and data acquisition systems. In chapter two the converter as part of a system is described. Requirements of analog
Wintzer, K.
1977-01-01
Process for analog-to-digital and digital-to-analog conversion in telecommunication systems whose outstations each have an analog transmitter and an analog receiver. The invention illustrates a method of reducing the power demand of the converters at times when no conversion processes take place. (RW)
Methane. [biosynthesis from manure or analogous substance
Ducellier, G L.R.; Isman, M A
1949-04-19
CH/sub 4/ is produced by the fermentation of manure or analogous substances in a vat having a dome covering the vat, the lower edge of the dome being immersed in a liquid seal, and the dome being arranged to rise vertically in order to hold the CH/sub 4/ produced.
Analog voicing detector responds to pitch
Abel, R. S.; Watkins, H. E.
1967-01-01
Modified electronic voice encoder /Vocoder/ includes an independent analog mode of operation in addition to the conventional digital mode. The Vocoder is a bandwidth compression equipment that permits voice transmission over channels, having only a fraction of the bandwidth required for conventional telephone-quality speech transmission.
Invention through Form and Function Analogy
Rule, Audrey C.
2015-01-01
"Invention through Form and Function Analogy" is an invention book for teachers and other leaders working with youth who are involving students in the invention process. The book consists of an introduction and set of nine learning cycle formatted lessons for teaching the principles of invention through the science and engineering design…
Formal analogies in physics teacher education
Avelar Sotomaior Karam, Ricardo; Ricardo, Elio
2012-01-01
… the exact same appearance. Coulomb’s law’s similarity with Newton’s, Maxwell’s application of fluid theory to electromagnetism and Hamilton’s optical mechanical analogy are some among many other examples. These cases illustrate the power of mathematics in providing unifying structures for physics. Despite...
An iconic, analogical approach to grammaticalization
Fischer, O.; Conradie, C.J.; Johl, R.; Beukes, M.; Fischer, O.; Ljungberg, C.
2010-01-01
This paper addresses a number of problems connected with the ‘apparatus’ used in grammaticalization theory. It will be argued that we get a better grip on what happens in processes of grammaticalization (and its ‘opposite’, lexicalization) if the process is viewed in terms of analogical processes,
Analog and digital dividers for mass spectrometers
Osipov, A.K.
1980-01-01
Errors of four different types of voltage dividers used in static mass spectrometers for determining the mass number from the accelerating voltage are analyzed. The simplest scheme of the analog divider comprises an operational amplifier in whose negative feedback chain a multiplication device based on a differential cascade is switched in. This analog divider has high sensitivity to temperature and a high error of approximately 5%. Application of a multiplier based on a differential cascade with normalization makes it possible to increase temperature stability and decrease the error to 1%. Another type of analog divider is the logarithmic divider, whose error is constant within the whole operating range and constitutes 1-5%. The digital divider with a digital-to-analog converter (DAC) has an error of +-0.015%, which is determined by the error of the detectors and the resistance of the keys in the locked state. Also considered is the design of a divider based on conversion of the input voltage into a time period. The error of this divider is determined mainly by the zero-shift voltage of the operational amplifier (which should be compensated) and the relative threshold stability of the comparator triggering, which equals (2-3)x10^-4. It is noted that the divider using a DAC and the divider using conversion of the voltage into a time period are the most promising ones for static mass spectrometers.
Analog Experiment for rootless cone eruption
Noguchi, R.; Hamada, A.; Suzuki, A.; Kurita, K.
2017-09-01
Rootless cones are unique geomorphological landmarks that indicate the igneous origin of an investigated terrane; they are formed by magma-water interaction. To understand their formation mechanism we conducted an analog experiment on heat-induced vesiculation using hot syrup and sodium bicarbonate solution.
SSERVI Analog Regolith Simulant Testbed Facility
Minafra, J.; Schmidt, G. K.
2016-12-01
SSERVI's goals include supporting planetary researchers within NASA and other government agencies; private sector and hardware developers; competitors in focused prize design competitions; and academic sector researchers. The SSERVI Analog Regolith Simulant Testbed provides opportunities for research scientists and engineers to study the effects of regolith analog testbed research in the planetary exploration field. This capability is essential to understanding the basic effects of continued long-term exposure to a simulated analog test environment. The current facility houses approximately eight tons of JSC-1A lunar regolith simulant in a test bin covering a 4 meter by 4 meter area. SSERVI provides a bridge between several groups, joining together researchers from: 1) scientific and exploration communities, 2) multiple disciplines across a wide range of planetary sciences, and 3) domestic and international communities and partnerships. This testbed provides a means of consolidating the tasks of acquisition, storage and safety mitigation in handling large quantities of regolith simulant. Facility hardware and environment testing scenarios include, but are not limited to, the following: lunar surface mobility, dust exposure and mitigation, regolith handling and excavation, solar-like illumination, lunar surface compaction profile, lofted dust, mechanical properties of lunar regolith, and surface features (i.e. grades and rocks). Benefits range from easy access to a controlled analog regolith simulant testbed and planetary exploration activities at NASA Research Park, to academia and expanded commercial opportunities in California's Silicon Valley, as well as public outreach and education opportunities.
A physical analogy to fuzzy clustering
Jantzen, Jan
2004-01-01
This tutorial paper provides an interpretation of the membership assignment in the fuzzy clustering algorithm fuzzy c-means. The membership of a data point to several clusters is shown to be analogous to the gravitational forces between bodies of mass. This provides an alternative way to explain...
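The fuzzy c-means membership assignment that the paper interprets gravitationally has a standard closed form; a minimal one-dimensional sketch of that formula (standard FCM, not code from the paper):

```python
def fcm_membership(point, centers, m=2.0):
    """Fuzzy c-means membership of one data point to each cluster:
    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)), where d_i is the distance
    to center i and m > 1 is the fuzzifier."""
    dists = [abs(point - c) for c in centers]
    # A point coinciding with a center gets full membership there.
    if any(d == 0 for d in dists):
        return [1.0 if d == 0 else 0.0 for d in dists]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d_i / d_k) ** p for d_k in dists) for d_i in dists]

# Point at 1.0 between centers 0.0 and 3.0: the nearer center
# "attracts" more membership, like the stronger gravitational pull.
u = fcm_membership(1.0, centers=[0.0, 3.0])
print(u)  # [0.8, 0.2]
```

The memberships always sum to one, which is what makes the gravitational-force analogy in the paper a normalized one.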
A high-speed analog neural processor
Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans
1994-01-01
Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight
C4913 ANALOGE OG DIGITALE FILTRE
Gaunholt, Hans
1996-01-01
These lecture notes treat the fundamental theory and the most commonly used design methods for passive, active and digital filters, with special emphasis on microelectronic realizations. The lecture notes cover 75% of the material taught in the course C4913 Analog and Digital Filters...
Analog circuit design automation for performance
Ning, Zhen-Qiu; Ning, Zhen-Qiu; Kole, Marq; Kole, M.E.; Mouthaan, A.J.; Wallinga, Hans
1992-01-01
This paper describes an improved version of the program SEAS (a Simulated Evolution approach for Analog circuit Synthesis), in which an approach for selecting alternatives based on the evaluation of mutation values is developed, and design automation for high-performance comparators is covered.
Hands Together! An Analog Clock Problem
Earnest, Darrell; Radtke, Susan; Scott, Siri
2017-01-01
In this article, the authors first present the Hands Together! task. The mathematics in this problem concerns the relationship of hour and minute durations as reflected in the oft-overlooked proportional movements of the two hands of an analog clock. The authors go on to discuss the importance of problem solving in general. They then consider…
The GMO-Nanotech (Dis)Analogy?
Sandler, Ronald; Kay, W. D.
2006-01-01
The genetically-modified-organism (GMO) experience has been prominent in motivating science, industry, and regulatory communities to address the social and ethical dimensions of nanotechnology. However, there are some significant problems with the GMO-nanotech analogy. First, it overstates the likelihood of a GMO-like backlash against…
A Mechanical Analogy for Ohm's Law.
do Couto Tavares, Milton; And Others
1991-01-01
A mechanical analogy between the microscopic motion of a charged carrier in an ordinary resistor and the macroscopic motion of a ball falling along a slanted board covered with a lattice of nails is introduced. The Drude model is also introduced to include the case of inelastic collisions. Computer simulation of the motion is described. (KR)
Plasma analog of particle-pair production
Tsidulko, Yu.A.; Berk, H.L.
1996-09-01
It is shown that the plasma axial shear flow instability satisfies the Klein-Gordon equation. The plasma instability is then shown to be analogous to spontaneous particle-pair production when a potential energy is present that is greater than twice the particle rest mass energy. Stability criteria can be inferred based on field theoretical conservation laws
Performance of the Analog Moving Window Detector
Hansen, V. Gregers
1970-01-01
A type of analog integrating moving window detector for use with a scanning pulse radar is examined. A performance analysis is carried out, which takes into account both the radiation pattern of the antenna and the dynamic character of the detection process due to the angular scanning...
Insulin analogs with improved pharmacokinetic profiles.
Brange; Vølund
1999-02-01
The aim of insulin replacement therapy is to normalize blood glucose in order to reduce the complications of diabetes. The pharmacokinetics of the traditional insulin preparations, however, do not match the profiles of physiological insulin secretion. The introduction of the rDNA technology 20 years ago opened new ways to create insulin analogs with altered properties. Fast-acting analogs are based on the idea that an insulin with less tendency to self-association than human insulin would be more readily absorbed into the systemic circulation. Protracted-acting analogs have been created to mimic the slow, steady rate of insulin secretion in the fasting state. The present paper provides a historical review of the efforts to change the physicochemical and pharmacological properties of insulin in order to improve insulin therapy. The available clinical studies of the new insulins are surveyed and show, together with modeling results, that new strategies for optimal basal-bolus treatment are required for utilization of the new fast-acting analogs.
Bootstrapped Low-Voltage Analog Switches
Steensgaard-Madsen, Jesper
1999-01-01
Novel low-voltage constant-impedance analog switch circuits are proposed. The switch element is a single MOSFET, and constant-impedance operation is obtained using simple circuits to adjust the gate and bulk voltages relative to the switched signal. Low-voltage (1-volt) operation is made feasible...
Magnetic Fixed Points and Emergent Supersymmetry
Antipin, Oleg; Mojaza, Matin; Pica, Claudio
2013-01-01
We establish in perturbation theory the existence of fixed points along the renormalization group flow for QCD with an adjoint Weyl fermion and scalar matter reminiscent of magnetic duals of QCD [1-3]. We classify the fixed points by analyzing their basin of attraction. We discover that among...
Fixed Wireless may be a temporary answer
Fixed wireless makes it possible to enhance throughput by a factor of 4 with respect to mobile wireless, achieving 8 to 10 bps/Hz/cell. Examples: BB corDECT today provides 256/512 kbps to each connection in a fixed environment, ideal for small-town/rural broadband; fixed 802.16d/e does the same but at much higher price points.
Metallic and antiferromagnetic fixed points from gravity
Paul, Chandrima
2018-06-01
We consider SU(2) × U(1) gauge theory coupled to matter fields in the adjoint representation and study the renormalization group flow. We construct the Callan-Symanzik equation and the subsequent β functions and study the fixed points. We find there are two fixed points, showing metallic and antiferromagnetic behavior. We show that the metallic phase develops an instability if certain parametric conditions are satisfied.
Gaining Insight into an Organization's Fixed Assets.
Hardy, Elisabet
2003-01-01
Discusses issues related to school district implementation of June 2001 Government Accounting Standards Board (GASB) Statement 34 designed to change how schools report fixed assets. Includes planning for GASB implementation, conducting fixed-asset inventories, and making time for GASB reporting. (PKP)
78 FR 20705 - Fixed Income Roundtable
2013-04-05
... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-69275; File No. 4-660] Fixed Income Roundtable... of fixed income markets. The roundtable will focus on the municipal securities, corporate bonds, and asset-backed securities markets. The roundtable discussion will be held in the multi-purpose room of the...
Gauge fixing problem in the conformal QED
Ichinose, Shoichi
1986-01-01
The gauge fixing problem in the conformal (spinor and scalar) QED is examined. For the analysis, we generalize Dirac's manifestly conformal-covariant formalism. It is shown that the (vector and matter) fields must obey a certain mixed (conformal and gauge) type of transformation law in order to fix the local gauge symmetry preserving the conformal invariance in the Lagrangian. (orig.)
Fixed export cost heterogeneity, trade and welfare
Jørgensen, Jan Guldager; Schröder, Philipp J.H.
2008-01-01
-country intra-industry trade model where firms are of two different marginal costs types and where fixed export costs are heterogeneous across firms. This model traces many of the stylized facts of international trade. However, we find that with heterogeneous fixed export costs there exists a positive bilateral...
Impact of fixed-mobile convergence
Pachnicke, Stephan; Andrus, Bogdan-Mihai; Autenrieth, Achim
2016-01-01
Fixed-Mobile Convergence (FMC) is a very trendy concept as it promises integration of the previously separated fixed access network and the mobile network. From this novel approach telecommunication operators expect significant cost savings and performance improvements. FMC can be separated...
Considerations on the establishment of maximum permissible exposure of man
Jacobi, W.
1974-01-01
An attempt is made in this lecture to give a quantitative analysis of the somatic radiation risk and to illustrate a concept for fixing dose-limiting values. Of primary importance is the limiting value of the radiation exposure to the whole population. By consequential application of the risk concept, the following points are considered: 1) definition of the risk for late radiation damage (cancer, leukemia); 2) the relationship between radiation dose and the radiation risk thus caused; 3) radiation risk and the current dose-limiting values; 4) criteria for the maximum acceptable radiation risk; 5) the limiting value that can currently be expected. (HP/LH)
Attitude sensor alignment calibration for the solar maximum mission
Pitone, Daniel S.; Shuster, Malcolm D.
1990-01-01
An earlier heuristic study of the fine attitude sensors for the Solar Maximum Mission (SMM) revealed a temperature dependence of the alignment about the yaw axis of the pair of fixed-head star trackers relative to the fine pointing Sun sensor. Here, new sensor alignment algorithms which better quantify the dependence of the alignments on the temperature are developed and applied to the SMM data. Comparison with the results from the previous study reveals the limitations of the heuristic approach. In addition, some of the basic assumptions made in the prelaunch analysis of the alignments of the SMM are examined. The results of this work have important consequences for future missions with stringent attitude requirements and where misalignment variations due to variations in the temperature will be significant.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
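The logistic item response function at the heart of such models can be sketched in a few lines. This is the standard two-parameter logistic (2PL) form, shown only as an illustration of the model class the paper estimates (the paper itself fits the generalized partial credit model in R); the parameter values are invented.

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct response
    for ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item difficulty answers correctly
# with probability 0.5; higher ability raises that probability.
p_at_difficulty = irt_2pl(0.0, a=1.0, b=0.0)   # 0.5
p_high_ability = irt_2pl(2.0, a=1.0, b=0.0)    # > 0.5
```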
Maximum wind energy extraction strategies using power electronic converters
Wang, Quincy Qing
2003-10-01
This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through
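The "on-line adaptation and optimization" idea behind maximum-power-point tracking can be illustrated with a generic perturb-and-observe (hill-climbing) loop. The quadratic power curve, step sizes, and function names below are invented for illustration; they are not the thesis's algorithm or turbine model.

```python
def turbine_power(speed, wind=10.0):
    # Illustrative power curve: output peaks when the rotor speed sits at an
    # assumed optimal fraction (0.8) of the wind speed.
    return max(0.0, -0.5 * (speed - 0.8 * wind) ** 2 + 32.0)

def hill_climb_mppt(speed=2.0, step=0.25, iters=200):
    """Perturb-and-observe: keep stepping in the direction that raises power;
    on overshoot, reverse direction and halve the perturbation."""
    power = turbine_power(speed)
    for _ in range(iters):
        candidate = speed + step
        p_new = turbine_power(candidate)
        if p_new >= power:
            speed, power = candidate, p_new
        else:
            step = -step / 2  # overshot the peak: reverse and shrink
    return speed, power
```

For the assumed curve the loop settles at the peak (speed 8.0, power 32.0) regardless of the starting point.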
Global gauge fixing in lattice gauge theories
Fachin, S.; Parrinello, C. (Physics Department, New York University, 4 Washington Place, New York, New York (USA))
1991-10-15
We propose a covariant, nonperturbative gauge-fixing procedure for lattice gauge theories that avoids the problem of Gribov copies. This is closely related to a recent proposal for a gauge fixing in the continuum that we review. The lattice gauge-fixed model allows both analytical and numerical investigations: on the analytical side, explicit nonperturbative calculations of gauge-dependent quantities can be easily performed in the framework of a generalized strong-coupling expansion, while on the numerical side a stochastic gauge-fixing algorithm is very naturally associated with the scheme. In both applications one can study the gauge dependence of the results, since the model actually provides a "smooth" family of gauge-fixing conditions.
Algorithms for solving common fixed point problems
Zaslavski, Alexander J
2018-01-01
This book details approximate solutions to common fixed point problems and convex feasibility problems in the presence of perturbations. Convex feasibility problems search for a common point of a finite collection of subsets in a Hilbert space; common fixed point problems pursue a common fixed point of a finite collection of self-mappings in a Hilbert space. A variety of algorithms are considered in this book for solving both types of problems, the study of which has fueled a rapidly growing area of research. This monograph is timely and highlights the numerous applications to engineering, computed tomography, and radiation therapy planning. Totaling eight chapters, this book begins with an introduction to foundational material and moves on to examine iterative methods in metric spaces. The dynamic string-averaging methods for common fixed point problems in normed space are analyzed in Chapter 3. Dynamic string methods, for common fixed point problems in a metric space are introduced and discussed in Chapter ...
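The simplest member of the iterative-method family such books study is Picard iteration, which converges to the unique fixed point whenever the mapping is a contraction (Banach's fixed-point theorem). The sketch below is a minimal one-dimensional illustration, not one of the book's string-averaging algorithms.

```python
import math

def fixed_point_iterate(f, x0, tol=1e-10, max_iter=1000):
    """Picard iteration x_{n+1} = f(x_n); converges when f is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Example: x = cos(x) has a unique fixed point near 0.739 (the Dottie number),
# and cos is a contraction in a neighborhood of it.
dottie = fixed_point_iterate(math.cos, 1.0)
```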
Analog-to-digital conversion using custom CMOS analog memory for the EOS time projection chamber
Lee, K.L.; Arthur, A.A.; Jones, R.W.; Matis, H.S.; Nakamura, M.; Kleinfelder, S.A.; Ritter, H.G.; Wienman, H.H.
1990-01-01
This paper describes the multiplexing scheme of custom CMOS analog memory integrated circuits, 16 channels x 256 cells, into analog-to-digital converters (ADCs) to handle the 15,360 signal channels of a time projection chamber detector system. Primary requirements of this system are high density, low power, and large dynamic range. The analog memory device multiplexing scheme was designed to digitize the information stored in the memory cells. The digitization time of the ADCs and the settling times for the memory unit were carefully interleaved to optimize performance and timing during the multiplexing operation. This kept the total number of ADCs, a costly and power-dissipative component, to an acceptable minimum.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Future evolution in a backreaction model and the analogous scalar field cosmology
Ali, Amna; Majumdar, A.S., E-mail: amnaalig@gmail.com, E-mail: archan@bose.res.in [S. N. Bose National Centre for Basic Sciences, Block JD, Sector-III, Salt Lake, Kolkata 700106 (India)
2017-01-01
We investigate the future evolution of the universe using the Buchert framework for averaged backreaction in the context of a two-domain partition of the universe. We show that this approach allows for the possibility of the global acceleration vanishing at a finite future time, provided that none of the subdomains accelerate individually. The model at large scales is analogously described in terms of a homogeneous scalar field emerging with a potential that is fixed and free from phenomenological parametrization. The dynamics of this scalar field is explored in the analogous FLRW cosmology. We use observational data from Type Ia Supernovae, Baryon Acoustic Oscillations, and Cosmic Microwave Background to constrain the parameters of the model for a viable cosmology, providing the corresponding likelihood contours.
13 CFR 120.213 - What fixed interest rates may a Lender charge?
2010-01-01
... Lender charge? 120.213 Section 120.213 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION... have a reasonable fixed interest rate. SBA periodically publishes the maximum allowable rate in the... government determines the interest rate on direct loans. SBA publishes the rate periodically in the Federal...
Microwave multiphoton excitation of helium Rydberg atoms: The analogy with atomic collisions
van de Water, W.; van Leeuwen, K.A.H.; Yoakum, S.; Galvez, E.J.; Moorman, L.; Bergeman, T.; Sauer, B.E.; Koch, P.M.
1989-01-01
We study multiphoton transitions in helium Rydberg atoms subjected to a microwave electric field of fixed frequency but varying intensity. For each principal quantum number in the range n = 25-32, the n³S to n³(L>2) transition probability exhibits very sharp structures as a function of the field amplitude. Their positions could be reproduced precisely using a Floquet Hamiltonian for the interaction between atom and field. Their shapes are determined by the transients of field turn-on and turn-off in a way that makes a close analogy with the theory of slow atomic collisions
Embedded calibration system for the DIII-D Langmuir probe analog fiber optic links
Watkins, J. G.; Rajpal, R.; Mandaliya, H.; Watkins, M.; Boivin, R. L.
2012-01-01
This paper describes a generally applicable technique for simultaneously measuring offset and gain of 64 analog fiber optic data links used for the DIII-D fixed Langmuir probes by embedding a reference voltage waveform in the optical transmitted signal before every tokamak shot. The calibrated data channels allow calibration of the power supply control fiber optic links as well. The array of fiber optic links and the embedded calibration system described here makes possible the use of superior modern data acquisition electronics in the control room.
Multilateral Research Opportunities in Ground Analogs
Corbin, Barbara J.
2015-01-01
The global economy forces many nations to consider their national investments and make difficult decisions regarding their investment in future exploration. International collaboration provides an opportunity to leverage other nations' investments to meet common goals. The Humans In Space Community shares a common goal to enable safe, reliable, and productive human space exploration within and beyond Low Earth Orbit. Meeting this goal requires efficient use of limited resources and international capabilities. The International Space Station (ISS) is our primary platform to conduct microgravity research targeted at reducing human health and performance risks for exploration missions. Access to ISS resources, however, is becoming more and more constrained and will only be available through 2020 or 2024. NASA's Human Research Program (HRP) is actively pursuing methods to effectively utilize the ISS and appropriate ground analogs to understand and mitigate human health and performance risks prior to embarking on human exploration of deep space destinations. HRP developed a plan to use ground analogs of increasing fidelity to address questions related to exploration missions and is inviting international participation in these planned campaigns. Using established working groups and multilateral panels, the HRP is working with multiple space agencies to invite international participation in a series of 30-day missions that HRP will conduct in the US-owned and operated Human Exploration Research Analog (HERA) during 2016. In addition, the HRP is negotiating access to Antarctic stations (both US and non-US), the German :envihab and Russian NEK facilities. These facilities provide unique capabilities to address critical research questions requiring longer duration simulation or isolation. We are negotiating release of international research opportunities to ensure a multilateral approach to future analog research campaigns, hoping to begin multilateral campaigns in the...
Using Visual Analogies To Teach Introductory Statistical Concepts
Jessica S. Ancker
2017-07-01
Introductory statistical concepts are some of the most challenging to convey in quantitative literacy courses. Analogies supplemented by visual illustrations can be highly effective teaching tools. This literature review shows that to exploit the power of analogies, teachers must select analogies familiar to the audience, explicitly link the analog with the target concept, and avert misconceptions by explaining where the analogy fails. We provide guidance for instructors and a series of visual analogies for use in teaching medical and health statistics.
MXLKID: a maximum likelihood parameter identifier
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
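The core idea, maximizing a likelihood function over unknown parameters, can be sketched for a toy case. The Gaussian model and golden-section search below are illustrative stand-ins chosen for brevity; they are not MXLKID's nonlinear dynamic-system formulation.

```python
import math
import random

def log_likelihood(mu, data, sigma=1.0):
    """Gaussian log-likelihood of the data as a function of the mean mu."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def mle_mean(data, lo=-10.0, hi=10.0, tol=1e-8):
    """Golden-section search for the mean that maximizes the likelihood
    (the log-likelihood is concave and unimodal in mu)."""
    phi = (math.sqrt(5) - 1) / 2
    while hi - lo > tol:
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if log_likelihood(a, data) < log_likelihood(b, data):
            lo = a
        else:
            hi = b
    return (lo + hi) / 2

random.seed(0)
sample = [random.gauss(3.0, 1.0) for _ in range(500)]
estimate = mle_mean(sample)  # for a Gaussian, the MLE is the sample mean
```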
Krzemien, Magali; Jemel, Boutheina; Maillart, Christelle
2017-01-01
Analogical reasoning is a human ability that maps systems of relations. It develops along with relational knowledge, working memory and executive functions such as inhibition. It also maintains a mutual influence on language development. Some authors have taken a greater interest in the analogical reasoning ability of children with language disorders, specifically those with specific language impairment (SLI). These children apparently have weaker analogical reasoning abilities than their age-matched peers without language disorders. Following cognitive theories of language acquisition, this deficit could be one of the causes of language disorders in SLI, especially those concerning productivity. To confirm this deficit and its link to language disorders, we use a scene analogy task to evaluate the analogical performance of SLI children and compare them to controls of the same age and linguistic abilities. Results show that children with SLI perform worse than age-matched peers, but similar to language-matched peers. They are more influenced by increased task difficulty. The association between language disorders and analogical reasoning in SLI can be confirmed. The hypothesis of limited processing capacity in SLI is also considered.
About Applications of the Fixed Point Theory
Bucur Amelia
2017-06-01
The fixed point theory is essential to various theoretical and applied fields, such as variational and linear inequalities, the approximation theory, nonlinear analysis, integral and differential equations and inclusions, the dynamic systems theory, mathematics of fractals, mathematical economics (game theory, equilibrium problems, and optimisation problems), and mathematical modelling. This paper presents a few benchmarks regarding the applications of the fixed point theory. This paper also debates whether the results of the fixed point theory can be applied to the mathematical modelling of quality.
The 1994 Fermilab Fixed Target Program
Conrad, J.
1994-11-01
This paper highlights the results of the Fermilab Fixed Target Program that were announced between October, 1993 and October, 1994. These results are drawn from 18 experiments that took data in the 1985, 1987 and 1990/91 fixed target running periods. For this discussion, the Fermilab Fixed Target Program is divided into 5 major topics: hadron structure, precision electroweak measurements, heavy quark production, polarization and magnetic moments, and searches for new phenomena. However, it should be noted that most experiments span several subtopics. Also, measurements within each subtopic often affect the results in other subtopics. For example, parton distributions from hadron structure measurements are used in the studies of heavy quark production
Hybrid fixed point in CAT(0) spaces
Hemant Kumar Pathak
2018-02-01
In this paper, we introduce an ultrapower approach to prove fixed point theorems for $H^{+}$-nonexpansive multi-valued mappings in the setting of CAT(0) spaces and prove several hybrid fixed point results in CAT(0) spaces for families of single-valued nonexpansive or quasinonexpansive mappings and multi-valued upper semicontinuous, almost lower semicontinuous or $H^{+}$-nonexpansive mappings which are weakly commuting. We also establish a result about the structure of the set of fixed points of an $H^{+}$-quasinonexpansive mapping on a CAT(0) space.
Maximum power analysis of photovoltaic module in Ramadi city
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output of a PV module and its energy yield. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed based on the first three months of 2013. The solar irradiance data are measured on the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which was set to save the average readings for each two minutes, based on readings taken every second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
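The logger post-processing the paper describes (per-day maxima and monthly averages built from two-minute samples) reduces to a few lines. The function name and data layout below are assumed for illustration, not taken from the paper.

```python
from statistics import mean

def daily_summaries(readings):
    """Summarize irradiance logger output.

    readings: mapping of day label -> list of irradiance samples in W/m^2
    (e.g. the two-minute averages saved by the data logger).
    Returns (per-day maxima, overall average over the period), the two
    quantities used to optimize PV module output.
    """
    daily_max = {day: max(vals) for day, vals in readings.items()}
    period_avg = mean(v for vals in readings.values() for v in vals)
    return daily_max, period_avg

# Toy two-day log (values are invented):
log = {"day1": [100, 500, 300], "day2": [200, 400]}
maxima, average = daily_summaries(log)  # maxima: {"day1": 500, "day2": 400}
```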
Analog data transmission via fiber optics
Cisneros, E.L.; Burgueno, G.F.
1986-10-01
In the SLAC Linear Collider Detector (SLD), as in most high-energy particle detectors, the electromagnetic noise environment is the limiting factor in electronic readout performance. Front-end electronics are particularly susceptible to electromagnetic interference (EMI), and great care has been taken to minimize its effects. The transfer of preprocessed analog signals from the detector environs, to the remote digital processing electronics, by conventional means (via metal conductors), may ultimately limit the performance of the system. Because it is highly impervious to EMI and ground loops, a fiber-optic medium has been chosen for the transmission of these signals. This paper describes several fiber-optic transmission schemes which satisfy the requirements of the SLD analog data transmission
Analog data transmission via fiber optics
Cisneros, E.L.; Burgueno, G.F.
1987-01-01
In the SLAC Linear Collider Detector (SLD), as in most high-energy particle detectors, the electromagnetic noise environment is the limiting factor in electronic readout performance. Front-end electronics are particularly susceptible to electromagnetic interference (EMI), and great care has been taken to minimize its effects. The transfer of preprocessed analog signals from the detector environs, to the remote digital processing electronics, by conventional means (via metal conductors), may ultimately limit the performance of the system. Because it is highly impervious to EMI and ground loops, a fiber-optic medium has been chosen for the transmission of these signals. This paper describes several fiber-optic transmission schemes which satisfy the requirements of the SLD analog data transmission
An analogy strategy for transformation optics
Yao, Kan; Liu, Yongmin; Chen, Huanyang; Jiang, Xunya
2014-01-01
We introduce an analogy strategy to design transformation optical devices. Based on the similarities between field lines in different physical systems, the trajectories of light can be intuitively determined to curve in a gentle manner, and the resulting materials are isotropic and nonmagnetic. Furthermore, the physical meaning of the analogue problems plays a key role in the removal of dielectric singularities. We illustrate this approach by creating two designs of carpet cloak and a collimating lens as representative examples in two- and three-dimensional spaces, respectively. The analogy strategy not only reveals the intimate connections between different physical disciplines, such as optics, fluid mechanics and electrostatics, but also provides a heuristic pathway to designing advanced photonic systems
The optical analogy for vector fields
Parker, E. N. (Editor)
1991-01-01
This paper develops the optical analogy for a general vector field. The optical analogy allows the examination of certain aspects of a vector field that are not otherwise readily accessible. In particular, in the cases of a stationary Eulerian flow v of an ideal fluid and a magnetostatic field B, the vectors v and B have surface loci in common with their curls. The intrinsic discontinuities around local maxima in absolute values of v and B take the form of vortex sheets and current sheets, respectively, the former playing a fundamental role in the development of hydrodynamic turbulence and the latter playing a major role in heating the X-ray coronas of stars and galaxies.
High resolution tomography using analog coding
Brownell, G.L.; Burnham, C.A.; Chesler, D.A.
1985-01-01
As part of a 30-year program in the development of positron instrumentation, the authors have developed a high resolution bismuth germanate (BGO) ring tomograph (PCR) employing 360 detectors and 90 photomultiplier tubes for one plane. The detectors are shaped as trapezoids and are 4 mm wide at the front end. When assembled, they form an essentially continuous cylindrical detector. Light from a scintillation in the detector is viewed through a cylindrical light pipe by the photomultiplier tubes. By use of an analog coding scheme, the detector emitting light is identified from the phototube signals. In effect, each phototube can identify four crystals. PCR is designed as a static device and does not use interpolative motion. This results in considerable advantage when performing dynamic studies. PCR is the positron tomography analog of the γ-camera widely used in nuclear medicine
Optimal neural computations require analog processors
Beiu, V.
1998-12-31
This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and suggests the following two alternatives: (1) cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).
Maximum neutron flux in thermal reactors
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux and complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications which make it appropriate from the maximum-principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable load that can be achieved by a mobile manipulator during a given trajectory is limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
An introduction to analog and digital communications
Haykin, Simon
2012-01-01
The second edition of this accessible book provides readers with an introductory treatment of communication theory as applied to the transmission of information-bearing signals. While it covers analog communications, the emphasis is placed on digital technology. It begins by presenting the functional blocks that constitute the transmitter and receiver of a communication system. Readers will next learn about electrical noise and then progress to multiplexing and multiple access techniques.
Pyrrolidine nucleotide analogs with a tunable conformation
Poštová Slavětínská, Lenka; Rejman, Dominik; Pohl, Radek
2014-01-01
Roč. 10, Aug 22 (2014), s. 1967-1980 ISSN 1860-5397 R&D Projects: GA ČR GA13-24880S Institutional support: RVO:61388963 Keywords : conformation * NMR * nucleic acids * nucleotide analog * phosphonic acid * pseudorotation * pyrrolidine Subject RIV: CC - Organic Chemistry Impact factor: 2.762, year: 2014 http://www.beilstein-journals.org/bjoc/single/articleFullText.htm?publicId=1860-5397-10-205
Hans W Paerl
Excessive anthropogenic nitrogen (N) and phosphorus (P) inputs have caused an alarming increase in harmful cyanobacterial blooms, threatening the sustainability of lakes and reservoirs worldwide. Hypertrophic Lake Taihu, China's third largest freshwater lake, typifies this predicament, with toxic blooms of the non-N2 fixing cyanobacteria Microcystis spp. dominating from spring through fall. Previous studies indicate N and P reductions are needed to reduce bloom magnitude and duration. However, N reductions may encourage replacement of non-N2 fixing with N2 fixing cyanobacteria. This potentially counterproductive scenario was evaluated using replicate, large (1000 L) in-lake mesocosms during summer bloom periods. N+P additions led to maximum phytoplankton production. Phosphorus enrichment, which promoted N limitation, resulted in increases in N2 fixing taxa (Anabaena spp.), but it did not lead to significant replacement of non-N2 fixing with N2 fixing cyanobacteria, and N2 fixation rates remained ecologically insignificant. Furthermore, P enrichment failed to increase phytoplankton production relative to controls, indicating that N was the most limiting nutrient throughout this period. We propose that Microcystis spp. and other non-N2 fixing genera can maintain dominance in this shallow, highly turbid, nutrient-enriched lake by outcompeting N2 fixing taxa for existing sources of N and P stored and cycled in the lake. To bring Taihu and other hypertrophic systems below the bloom threshold, both N and P reductions will be needed until the legacy of high N and P loading and sediment nutrient storage in these systems is depleted. At that point, a more exclusive focus on P reductions may be feasible.
Biophysical and lipofection studies of DOTAP analogs.
Regelin, A E; Fankhaenel, S; Gürtesch, L; Prinz, C; von Kiedrowski, G; Massing, U
2000-03-15
In order to investigate the relationship between lipid structure and liposome-mediated gene transfer, we have studied biophysical parameters and transfection properties of monocationic DOTAP analogs, systematically modified in their non-polar hydrocarbon chains. Stability, size and (by means of anisotropy profiles) membrane fluidity of liposomes and lipoplexes were determined, and lipofection efficiency was tested in a luciferase reporter gene assay. DOTAP analogs were used as single components or combined with a helper lipid, either DOPE or cholesterol. Stability of liposomes was a precondition for formation of temporarily stable lipoplexes. Addition of DOPE or cholesterol improved liposome and lipoplex stability. Transfection efficiencies of lipoplexes based on pure DOTAP analogs could be correlated with stability data and membrane fluidity at transfection temperature. Inclusion of DOPE led to rather uniform transfection and anisotropy profiles, corresponding to lipoplex stability. Cholesterol-containing lipoplexes were generally stable, showing high transfection efficiency at low relative fluidity. Our results demonstrate that the efficiency of gene transfer mediated by monocationic lipids is greatly influenced by lipoplex biophysics due to lipid composition. The measurement of fluorescence anisotropy is an appropriate method to characterize membrane fluidity within a defined system of liposomes or lipoplexes and may be helpful to elucidate structure-activity relationships.
Targeting thyroid diseases with TSH receptor analogs.
Galofré, Juan C; Chacón, Ana M; Latif, Rauf
2013-12-01
The thyroid-stimulating hormone (TSH) receptor (TSHR) is a major regulator of thyroid function and growth, and is the key antigen in several pathological conditions including hyperthyroidism, hypothyroidism, and thyroid tumors. Various effective treatment strategies are currently available for many of these clinical conditions such as antithyroid drugs or radioiodine therapy, but they are not devoid of side effects. In addition, treatment of complications of Graves' disease such as Graves' ophthalmopathy is often difficult and unsatisfactory using current methods. Recent advances in basic research on both in vitro and in vivo models have suggested that TSH analogs could be used for diagnosis and treatment of some of the thyroid diseases. The advent of high-throughput screening methods has resulted in a group of TSH analogs called small molecules, which have the potential to be developed as promising drugs. Small molecules are low molecular weight compounds with agonist, antagonist and, in some cases, inverse agonist activity on TSHR. This short review will focus on current advances in development of TSH analogs and their potential clinical applications. Rapid advances in this field may lead to the conduct of clinical trials of small molecules related to TSHR for the management of Graves' disease, thyroid cancer, and thyroid-related osteoporosis in the coming years. Copyright © 2012 SEEN. Published by Elsevier Espana. All rights reserved.
Synthetic analog computation in living cells.
Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K
2013-05-30
A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
Theory of analogous force on number sets
Canessa, Enrique [Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)
2003-08-01
A general statistical thermodynamic theory that considers given sequences of x-integers to play the role of particles of known type in an isolated elastic system is proposed. By also considering some explicit discrete probability distributions p_x for natural numbers, we claim that they lead to a better understanding of probabilistic laws associated with number theory. Sequences of numbers are treated as the size measure of finite sets. By considering p_x to describe complex phenomena, the theory leads to a distinct analogous force f_x on number sets proportional to (∂p_x/∂x)_T at an analogous system temperature T. In particular, this yields an understanding of the uneven distribution of integers of random sets in terms of analogous scale invariance and a screened inverse-square force acting on the significant digits. The theory also makes it possible to establish recursion relations to predict sequences of Fibonacci numbers and to answer the interesting theoretical question of the appearance of Benford's law in Fibonacci numbers. A possible relevance to prime numbers is also analyzed. (author)
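The claim that Benford's law appears in Fibonacci numbers is easy to check numerically, independently of the paper's statistical-thermodynamic machinery; a minimal sketch:

```python
import math
from collections import Counter

# Generate the first 1000 Fibonacci numbers.
fibs = [1, 1]
while len(fibs) < 1000:
    fibs.append(fibs[-1] + fibs[-2])

# Compare empirical leading-digit frequencies with the
# Benford prediction log10(1 + 1/d) for each digit d.
counts = Counter(int(str(f)[0]) for f in fibs)
for d in range(1, 10):
    print(d, counts[d] / len(fibs), round(math.log10(1 + 1 / d), 4))
```

The agreement is close because the leading digit of F_n is governed by the fractional part of n·log10(φ), which is equidistributed.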
Periglacial and glacial analogs for Martian landforms
Rossbacher, Lisa A.
1992-01-01
The list of useful terrestrial analogs for Martian landforms has been expanded to include: features developed by desiccation processes; catastrophic flood features associated with boulder-sized materials; and sorted ground developed at a density boundary. Quantitative analytical techniques developed for physical geography have been adapted and applied to planetary studies, including: quantification of the patterns of polygonally fractured ground to describe pattern randomness independent of pattern size, with possible correlation to the mechanism of origin and quantification of the relative area of a geomorphic feature or region in comparison to planetary scale. Information about Martian geomorphology studied in this project was presented at professional meetings world-wide, at seven colleges and universities, in two interactive televised courses, and as part of two books. Overall, this project has expanded the understanding of the range of terrestrial analogs for Martian landforms, including identifying several new analogs. The processes that created these terrestrial features are characterized by both cold temperatures and low humidity, and therefore both freeze-thaw and desiccation processes are important. All these results support the conclusion that water has played a significant role in the geomorphic history of Mars.
Sensing Methods for Detecting Analog Television Signals
Rahman, Mohammad Azizur; Song, Chunyi; Harada, Hiroshi
This paper introduces a unified method of spectrum sensing for all existing analog television (TV) signals including NTSC, PAL and SECAM. We propose a correlation based method (CBM) with a single reference signal for sensing any analog TV signals. In addition we also propose an improved energy detection method. The CBM approach has been implemented in a hardware prototype specially designed for participating in Singapore TV white space (WS) test trial conducted by Infocomm Development Authority (IDA) of the Singapore government. Analytical and simulation results of the CBM method will be presented in the paper, as well as hardware testing results for sensing various analog TV signals. Both AWGN and fading channels will be considered. It is shown that the theoretical results closely match with those from simulations. Sensing performance of the hardware prototype will also be presented in fading environment by using a fading simulator. We present performance of the proposed techniques in terms of probability of false alarm, probability of detection, sensing time etc. We also present a comparative study of the various techniques.
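A correlation-based detector of this general kind can be sketched as follows; the reference waveform, normalization, and threshold here are illustrative assumptions, not the prototype's actual design:

```python
import numpy as np

def cbm_detect(received, reference, threshold=0.5):
    """Slide the reference over the received samples and report the peak
    normalized correlation coefficient; declare the signal present when
    the peak exceeds the threshold (value chosen for illustration)."""
    ref = reference - reference.mean()
    best = 0.0
    for lag in range(len(received) - len(reference) + 1):
        seg = received[lag:lag + len(reference)]
        seg = seg - seg.mean()
        denom = np.linalg.norm(seg) * np.linalg.norm(ref)
        if denom > 0:
            best = max(best, abs(np.dot(seg, ref)) / denom)
    return best > threshold, best

# Example: a sinusoidal "pilot" buried in noise is detected; pure noise is not.
rng = np.random.default_rng(0)
ref = np.sin(2 * np.pi * 0.1 * np.arange(200))
noise = 0.3 * rng.standard_normal(600)
rx = noise.copy()
rx[150:350] += ref
present, stat_sig = cbm_detect(rx, ref)
absent, stat_noise = cbm_detect(noise, ref)
```

The correlation statistic degrades gracefully in noise, which is why CBM-style detectors can outperform plain energy detection at low SNR.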
Methodology for performing surveys for fixed contamination
Durham, J.S.; Gardner, D.L.
1994-10-01
This report describes a methodology for performing instrument surveys for fixed contamination that can be used to support the release of material from radiological areas, including release to controlled areas and release from radiological control. The methodology, which is based on a fast scan survey and a series of statistical, fixed measurements, meets the requirements of the U.S. Department of Energy Radiological Control Manual (RadCon Manual) (DOE 1994) and DOE Order 5400.5 (DOE 1990) for surveys for fixed contamination and requires less time than a conventional scan survey. The confidence interval associated with the new methodology conforms to the draft national standard for surveys. The methodology that is presented applies only to surveys for fixed contamination. Surveys for removable contamination are not discussed, and the new methodology does not affect surveys for removable contamination
HEALTH INSURANCE: FIXED CONTRIBUTION AND REIMBURSEMENT MAXIMA
Human Resources Division
2001-01-01
Affected by the salary adjustments on 1 January 2001 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maxima, has changed significantly. An adjustment of the amounts of the reimbursement maxima and the fixed contributions is therefore necessary, as from 1 January 2001. Reimbursement maxima The revised reimbursement maxima will appear on the leaflet summarizing the benefits for the year 2001, which will be sent out with the forthcoming issue of the CHIS Bull'. This leaflet will also be available from the divisional secretariats and from the UNIQA office at CERN. Fixed contributions The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions) : voluntarily insured member of the personnel, with normal health insurance cover : 910.- (was 815.- in 2000) voluntarily insured member of the personnel, with reduced heal...
Canziani, R.
1999-01-01
Recently, full-scale fixed-film or mixed suspended- and fixed-biomass bioreactors have been applied in many wastewater treatment plants. These processes no longer depend on biomass settleability and can be used to improve the performance of existing plants as required by more stringent discharge permit limits, especially for nutrients and suspended solids. Also, the processes may work at high rates, making it possible to build small-footprint installations. Fixed-film processes include trickling filters (and combined suspended- and fixed-film processes), rotating biological contactors, biological aerated submerged filters, moving bed reactors, and fluidized bed reactors. In the first part, the theoretical bases governing fixed-film processes are briefly outlined, with some simple examples of calculations, underlining the main differences with conventional activated sludge processes. In the second part, the most common types of reactors are reviewed.
Actinorhizal nitrogen fixing nodules: infection process, molecular ...
Actinorhizal nitrogen fixing nodules: infection process, molecular biology and genomics. Mariana Obertello, Mame Oureye SY, Laurent Laplaze, Carole Santi, Sergio Svistoonoff, Florence Auguy, Didier Bogusz, Claudine Franche ...
FIXING HEALTH SYSTEMS / Executive Summary (2008 update ...
2010-12-14
Dec 14, 2010 ... FIXING HEALTH SYSTEMS / Executive Summary (2008 update) ... In several cases, specific approaches recommended by the TEHIP team have been acted upon regionally and internationally, including the ...
Fixed-Point Configurable Hardware Components
Rocher Romuald
2006-01-01
To reduce the gap between VLSI technology capability and designer productivity, design reuse based on IP (intellectual property) blocks is commonly used. In terms of arithmetic accuracy, the generated architecture can generally be configured only through the input and output word lengths. In this paper, a new kind of method to optimize fixed-point arithmetic IP is proposed. The architecture cost is minimized under accuracy constraints defined by the user. Our approach allows exploring the fixed-point search space and the algorithm-level search space to select the optimized structure and fixed-point specification. To significantly reduce the optimization and design times, analytical models are used for the fixed-point optimization process.
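The word-length trade-off at the heart of such fixed-point optimization can be illustrated with a toy quantization model (the coefficient set, error metric, and bit range are illustrative, not taken from the paper):

```python
def to_fixed(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits (a fixed-point grid)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

# Accuracy vs. cost: more fractional bits shrink the worst-case
# quantization error of a coefficient set, at the price of wider datapaths.
coeffs = [0.1, -0.35, 0.72, 0.05]
for bits in (4, 8, 12):
    err = max(abs(c - to_fixed(c, bits)) for c in coeffs)
    print(bits, err)  # worst-case error is bounded by 2**-(bits+1)
```

An optimizer of the kind described searches over such word lengths (and over algorithmic variants) to meet a user-supplied accuracy bound at minimum hardware cost.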
Anderson Acceleration for Fixed-Point Iterations
Walker, Homer F. [Worcester Polytechnic Institute, MA (United States)
2015-08-31
The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
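For reference, Anderson acceleration augments a plain fixed-point iteration x ← g(x) with a least-squares combination of recent residuals; a minimal sketch (the window size, tolerance, and stopping rule are illustrative choices):

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Anderson acceleration for the fixed-point problem x = g(x).
    Keeps a window of the last m iterates and residuals and extrapolates
    by solving a small least-squares problem each step."""
    X, F = [], []                        # histories of g-values and residuals
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(maxit):
        gx = np.atleast_1d(g(x))
        f = gx - x                       # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            return x
        X.append(gx); F.append(f)
        if len(F) > m:                   # drop the oldest history entry
            X.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                       # plain Picard step to start
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            x = gx - dX @ gamma          # extrapolated iterate
    return x

# Example: the classic fixed point of cos, x* ≈ 0.7390851...
root = anderson(np.cos, 1.0)
```

Compared with the underlying Picard iteration, the extrapolation typically cuts the iteration count dramatically on linearly convergent problems.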
Topological fixed point theory of multivalued mappings
Górniewicz, Lech
1999-01-01
This volume presents a broad introduction to the topological fixed point theory of multivalued (set-valued) mappings, treating both classical concepts as well as modern techniques. A variety of up-to-date results is described within a unified framework. Topics covered include the basic theory of set-valued mappings with both convex and nonconvex values, approximation and homological methods in the fixed point theory together with a thorough discussion of various index theories for mappings with a topologically complex structure of values, applications to many fields of mathematics, mathematical economics and related subjects, and the fixed point approach to the theory of ordinary differential inclusions. The work emphasises the topological aspect of the theory, and gives special attention to the Lefschetz and Nielsen fixed point theory for acyclic valued mappings with diverse compactness assumptions via graph approximation and the homological approach. Audience: This work will be of interest to researchers an...
Fixed points of occasionally weakly biased mappings
Y. Mahendra Singh, M. R. Singh
2012-01-01
Common fixed point results due to Pant et al. [Pant et al., Weak reciprocal continuity and fixed point theorems, Ann Univ Ferrara, 57(1), 181-190 (2011)] are extended to a class of non-commuting operators called occasionally weakly biased pairs [N. Hussain, M. A. Khamsi, A. Latif, Common fixed points for JH-operators and occasionally weakly biased pairs under relaxed conditions, Nonlinear Analysis, 74, 2133-2140 (2011)]. We also provide illustrative examples to justify the improvements.
Optimal Licensing Strategy: Royalty or Fixed Fee?
Andrea Fosfuri; Esther Roca
2004-01-01
Licensing a cost-reducing innovation through a royalty has been shown to be superior to licensing by means of a fixed fee for an incumbent licensor. This note shows that this result relies crucially on the assumption that the incumbent licensor can sell its cost-reducing innovation to all industry players. If, for any reason, only some competitors could be reached through a licensing contract, then a fixed fee might be optimally chosen.
Fractal Structures For Fixed Mems Capacitors
Elshurafa, Amro M.
2014-08-28
An embodiment of a fractal fixed capacitor comprises a capacitor body in a microelectromechanical system (MEMS) structure. The capacitor body has a first plate with a fractal shape separated by a horizontal distance from a second plate with a fractal shape. The first plate and the second plate are within the same plane. Such a fractal fixed capacitor further comprises a substrate above which the capacitor body is positioned.
On BLM scale fixing in exclusive processes
Anikin, I.V.; Pire, B.; Szymanowski, L.; Teryaev, O.V.; Wallon, S.
2005-01-01
We discuss the BLM scale fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production, dominated in this case by quark exchange, the usual way to apply the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes, which are directly related to observables. (orig.)
Finite volume QCD at fixed topological charge
Aoki, Sinya; Fukaya, Hidenori; Hashimoto, Shoji; Onogi, Tetsuya
2007-01-01
In finite volume the partition function of QCD with a given $\theta$ is a sum of different topological sectors with a weight primarily determined by the topological susceptibility. If a physical observable is evaluated only in a fixed topological sector, the result deviates from the true expectation value by an amount proportional to the inverse space-time volume 1/V. Using the saddle point expansion, we derive formulas to express the correction due to the fixed topological charge in terms of...
Fixed target physics at high energies
Kirk, T.B.
1984-01-01
The number and type of fixed target experiments that can be pursued at a proton synchrotron are very large. The advent of the Fermilab superconducting accelerator, the Tevatron, will extend and improve the results which are given here from recent CERN and Fermilab experiments. The sample of experiments given in this paper is neither meant to be inclusive nor intensive. Hopefully, it will give the flavor of contemporary fixed target physics to a predominantly cosmic ray oriented audience. (author)
Characterizations of fixed points of quantum operations
Li Yuan
2011-01-01
Let φ_A be a general quantum operation. An operator B is said to be a fixed point of φ_A if φ_A(B)=B. In this note, we show conditions under which B being a fixed point of φ_A implies that B is compatible with the operation elements of φ_A. In particular, we offer an extension of the generalized Lueders theorem.
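In the standard Kraus-form notation (an assumption about the paper's conventions, not stated in the abstract), the objects involved can be written compactly:

```latex
\varphi_A(B) \;=\; \sum_i A_i\, B\, A_i^{\dagger},
\qquad \sum_i A_i^{\dagger} A_i \;\le\; I .
% B is a fixed point when \varphi_A(B) = B;
% "compatibility with the operation elements" means [B, A_i] = 0 for all i.
```

In this notation, the question the note addresses is when φ_A(B)=B forces [B, A_i]=0 for every operation element A_i.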
Caruso, Raul
2007-01-01
The phenomenon of match-fixing constitutes a constant element of sport contests. This paper presents a simple formal model to explain it. The intuition behind it is that an asymmetry in the evaluation of the stake is the key factor leading to match-fixing. In sum, this paper considers a partial equilibrium model of contest where two asymmetric, rational and risk-neutral opponents evaluate a contested stake differently. Differently from common contest models, agents have the option ...
Creative Analogy Use in a Heterogeneous Design Team
Christensen, Bo; Ball, Linden J.
2016-01-01
the design dialogue derived from team members with highly disparate educational backgrounds. Our analyses revealed that analogies that matched (versus mismatched) educational backgrounds were generated and revisited more frequently, presumably because they were more accessible. Matching analogies were also...
Scientific Analogies and Their Use in Teaching Science
Kipnis, Nahum
Analogy in science has known successes and failures, as illustrated by examples from eighteenth-century physics. At times, some scientists abstained from using a certain analogy on the grounds that it had not yet been demonstrated. Several false discoveries in the 18th and early 19th centuries appeared to support their caution. It is now clear that such a position reflected a methodological confusion resulting from a failure to distinguish between particular and general analogies. Considering analogy as a hierarchical structure provides a new insight into "testing an analogy". While warning science teachers of dangers associated with the use of analogy, the historical cases and their analysis provided here may encourage them to use analogy more extensively while avoiding misconceptions. An argument is made that the history of science may be a better guide than philosophy of science and cognitive psychology when it concerns the role of analogy in science and in teaching science for understanding.
Maximum entropy principle and hydrodynamic models in statistical mechanics
Trovato, M.; Reggiani, L.
2012-01-01
This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix; the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport model within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: (i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the
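The core of the MEP construction can be stated compactly (the notation is a generic illustration, not lifted from the review): maximizing the entropy functional subject to a set of moment constraints yields an exponential-family closure of the hydrodynamic hierarchy,

```latex
\max_{f}\; S[f] = -k_B \int f \ln f \, d\mathbf{p}
\quad \text{s.t.} \quad
\int \psi_i(\mathbf{p})\, f \, d\mathbf{p} = M_i
\qquad \Longrightarrow \qquad
f = \exp\!\Big(-\sum_i \lambda_i \, \psi_i(\mathbf{p})\Big),
```

where the Lagrange multipliers λ_i are fixed by the prescribed moments M_i; in the quantum (QMEP) case the multipliers are additionally expanded in powers of ħ².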
INNOVATIVE SYSTEM OF FIXED CAPITAL REPRODUCTION
G. S. Merzlikina
2015-01-01
The article presents the basic problems of fixed capital reproduction, considering the significant depreciation of fixed assets at Russian enterprises. Arguments are presented in favor of the urgency of the problem of reproduction of fixed assets in the Russian Federation. The paper presents a theoretical evidence base for the basic types of fixed capital reproduction and identifies the possible sources of simple and expanded reproduction of capital. The role, value and feasibility of depreciation in the formation of a reproduction reserve are considered. The formation of accounting and analytical support for fixed capital management is suggested, as well as an innovative system of fixed capital reproduction, which implies the creation of depreciation, capital, revaluation, and liquidation reserves. An algorithm of business valuation based on this innovative system of capital reproduction is given. The algorithm and the possibility of forming the reserves are illustrated with a concrete example of an industrial enterprise in the city of Volgograd. Calculations based on the algorithm show an increase in business value under formation of the special reserves, which underlines the necessity and urgency of their inclusion in the accounting policy of organizations and enterprises of Russia as a whole.
Precise Point Positioning with Partial Ambiguity Fixing.
Li, Pan; Zhang, Xiaohong
2015-06-10
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which elevation and standard deviation criteria are first used to remove the low-precision ambiguity estimates for AR. Subsequently, the success rate and ratio test are used simultaneously in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities that can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Besides, using the PAR method, the average fixing rate can be increased from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP, respectively. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to the higher fixing rate.
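One ingredient named above, the ratio test, can be sketched in isolation; using plain squared residuals rather than a full decorrelated integer least-squares search, and the threshold value, are simplifying assumptions for illustration:

```python
import numpy as np

def ratio_test(float_amb, candidates, threshold=3.0):
    """Accept integer ambiguity fixing only when the second-best candidate's
    squared residual exceeds the best one's by at least `threshold`
    (threshold chosen for illustration)."""
    resid = sorted(float(np.sum((np.asarray(c) - float_amb) ** 2))
                   for c in candidates)
    return resid[1] / max(resid[0], 1e-12) >= threshold

# A float solution close to one integer vector passes; an ambiguous one fails.
sharp = ratio_test(np.array([3.02, -1.98]), [[3, -2], [4, -2], [3, -1]])
fuzzy = ratio_test(np.array([3.5, -2.0]), [[3, -2], [4, -2]])
```

In a PAR scheme, when the test fails for the full ambiguity set, the lowest-confidence ambiguities are dropped and the test is retried on the remaining subset.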
Goldman, Emily A; Smith, Erik M; Richardson, Tammi L
2013-03-15
The utility of a multiple-fixed-wavelength spectral fluorometer, the Algae Online Analyser (AOA), as a means of quantifying chromophoric dissolved organic matter (CDOM) and phytoplankton photosynthetic activity was tested using algal cultures and natural communities from North Inlet estuary, South Carolina. Comparisons of AOA measurements of CDOM to those by spectrophotometry showed a significant linear relationship, but increasing amounts of background CDOM resulted in progressively higher over-estimates of chromophyte contributions to a simulated mixed algal community. Estimates of photosynthetic activity by the AOA at low irradiance (≈ 80 μmol quanta m(-2) s(-1)) agreed well with analogous values from the literature for the chlorophyte, Dunaliella tertiolecta, but were substantially lower than previous measurements of the maximum quantum efficiency of photosystem II (F(v)/F(m)) in Thalassiosira weissflogii (a diatom) and Rhodomonas salina (a cryptophyte). When cells were exposed to high irradiance (1500 μmol quanta m(-2) s(-1)), declines in photosynthetic activity with time measured by the AOA mirrored estimates of cellular fluorescence capacity using the herbicide 3'-(3, 4-dichlorophenyl)-1',1'-dimethyl urea (DCMU). The AOA shows promise as a tool for the continuous monitoring of phytoplankton community composition, CDOM, and the group-specific photosynthetic activity of aquatic ecosystems. Copyright © 2012 Elsevier Ltd. All rights reserved.
When Reasoning Modifies Memory: Schematic Assimilation Triggered by Analogical Mapping
Vendetti, Michael S.; Wu, Aaron; Rowshanshad, Ebi; Knowlton, Barbara J.; Holyoak, Keith J.
2014-01-01
Analogical mapping highlights shared relations that link 2 situations, potentially at the expense of information that does not fit the dominant pattern of correspondences. To investigate whether analogical mapping can alter subsequent recognition memory for features of a source analog, we performed 2 experiments with 4-term proportional analogies…
Analogies in Medicine: Valuable for Learning, Reasoning, Remembering and Naming
Pena, Gil Patrus; Andrade-Filho, Jose de Souza
2010-01-01
Analogies are important tools in human reasoning and learning, for resolving problems and providing arguments, and are extensively used in medicine. Analogy and similarity involve a structural alignment or mapping between domains. This cognitive mechanism can be used to make inferences and learn new abstractions. Through analogies, we try to…
Modern Communication: Exploring Physiological Transmission through Tech-Savvy Analogies
Hollabaugh, Christopher R.; Milanick, Mark A.
2014-01-01
Analogies are often helpful for students to grasp key physiological concepts; sometimes the technical jargon makes the concept seem more complex than it actually is. In this article the authors provide several analogies for information transfer processes that sometimes confuse students. For an analogy to be useful, of course, it needs to be…
The design analogy : a model for moral problem solving
Dorst, C.H.; Royakkers, L.M.M.
2006-01-01
In this paper we explore an analogy between design and ethics, first drawn by Whitbeck. We investigate her claim that such an analogy can help to understand moral problems and aid us in dealing with them by suggesting strategies for addressing moral problems. We explore the nature of analogies, and
Functional DNA: Teaching Infinite Series through Genetic Analogy
Kowalski, R. Travis
2011-01-01
This article presents an extended analogy that connects infinite sequences and series to the science of genetics, by identifying power series as "DNA for a function." This analogy allows standard topics such as convergence tests or Taylor approximations to be recast in a "forensic" light as mathematical analogs of genetic concepts such as DNA…
Circuit with a successive approximation analog to digital converter
Louwsma, S.M.; Vertregt, Maarten
2011-01-01
During successive approximation analog to digital conversion a series of successive digital reference values is selected that converges towards a digital representation of an analog input signal. An analog reference signal is generated dependent on the successive digital reference values and
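The conversion loop described in the abstract, a series of successive digital reference values converging on the input, is the classic binary search. A generic textbook sketch (not the patented circuit):

```python
# Generic sketch of successive approximation: try each bit from MSB to LSB,
# compare the input against the DAC output for the trial code, keep or clear
# the bit. Illustration only, not the patented circuit.

def sar_adc(v_in, v_ref, n_bits):
    """Convert 0 <= v_in < v_ref to an n-bit code by binary search."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)              # set the trial bit
        v_dac = trial * v_ref / (1 << n_bits)  # analog value of the trial code
        if v_in >= v_dac:                      # comparator decision
            code = trial                       # keep the bit
    return code

# Usage: 8-bit conversion of 1.0 V against a 2.0 V reference gives mid-scale.
print(sar_adc(1.0, 2.0, 8))  # → 128
```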
Exploring high-density baryonic matter: Maximum freeze-out density
Randrup, Joergen [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)
2016-08-15
The hadronic freeze-out line is calculated in terms of the net baryon density and the energy density instead of the usual T and μ_B. This analysis makes it apparent that the freeze-out density exhibits a maximum as the collision energy is varied. This maximum freeze-out density has μ_B = 400-500 MeV, which is above the critical value, and it is reached for a fixed-target bombarding energy of 20-30 GeV/N, well within the parameters of the proposed NICA collider facility. (orig.)
Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175
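The three a priori values (M, N, kmax) that completely determine the RGF prediction are straightforward to extract from a text. A minimal sketch, with whitespace tokenization as a simplifying assumption:

```python
# Sketch: extract the three a priori values (M, N, kmax) that completely
# determine the RGF maximum-entropy prediction for a text's word-frequency
# distribution. Whitespace tokenization is a simplification.

from collections import Counter

def rgf_inputs(text):
    counts = Counter(text.lower().split())
    M = sum(counts.values())       # total number of words
    N = len(counts)                # number of distinct words
    k_max = max(counts.values())   # repetitions of the most common word
    return M, N, k_max

print(rgf_inputs("the cat saw the dog and the dog saw the cat"))  # → (11, 5, 4)
```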
Maximum gravitational redshift of white dwarfs
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
Analog/RF performance of four different Tunneling FETs with the recessed channels
Li, Wei; Liu, Hongxia; Wang, Shulong; Chen, Shupeng
2016-12-01
In this paper, the analog and radio frequency (RF) performance of four different tunneling field effect transistors (TFETs) with recessed channels is compared. The L-shaped channel TFET (LTFET), U-shaped channel TFET (UTFET), U-shaped channel with L-shaped gate TFET (LGUTFET) and U-shaped channel with dual sources TFET (DUTFET) are investigated by using the Silvaco-Atlas simulation tool. The transconductance (gm), output conductance (gds), gate capacitance (Cgg), cut-off frequency (fT) and gain bandwidth product (GBW) are the parameters analyzed. Among all the considered devices, the DUTFET has the maximum gm and gds due to the on-state current improved by the dual sources, and the LTFET has the minimum Cgg because of its minimum gate-to-drain capacitance (Cgd). Since the analog/RF characteristics of a device are proportional to gm and inversely proportional to Cgg, the LTFET and DUTFET have better analog/RF performance compared to the UTFET and LGUTFET. The largest extracted fT is 3.02 GHz in the LTFET and the largest GBW is 1.02 GHz in the DUTFET. The simulation results in this paper can be used as a reference for choosing among these four TFETs for analog/RF applications.
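The figures of merit compared above follow from gm and the capacitances through standard small-signal relations. A sketch with illustrative (hypothetical) device values; the gain-of-10 GBW convention is a common assumption in TFET papers, not necessarily the one used in this study:

```python
# Standard small-signal figures of merit: fT = gm / (2*pi*Cgg); GBW here uses
# an assumed DC gain of 10 limited by Cgd, a common convention that is an
# assumption, not taken from the paper. Device values below are hypothetical.

import math

def cutoff_frequency(gm, cgg):
    """gm in siemens, cgg in farads -> fT in hertz."""
    return gm / (2 * math.pi * cgg)

def gain_bandwidth(gm, cgd, dc_gain=10):
    """GBW for an assumed DC gain, limited by the gate-drain capacitance."""
    return gm / (2 * math.pi * dc_gain * cgd)

gm, cgg, cgd = 2e-6, 1.0e-16, 2.0e-17   # illustrative: 2 uS, 0.1 fF, 0.02 fF
print(f"fT  = {cutoff_frequency(gm, cgg) / 1e9:.2f} GHz")
print(f"GBW = {gain_bandwidth(gm, cgd) / 1e9:.2f} GHz")
```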
Gomes, Rafael Soares; Bergamo, Edmara Tatiely Pedroso; Bordin, Dimorvan; Del Bel Cury, Altair Antoninha
2017-01-01
The use of analogs could reduce the cost of mechanical tests involving implant-supported crowns, but it is unclear whether it would negatively affect data accuracy. This study evaluated the substitution of the implant by implant analogs or abutment analogs as a support for crowns in mechanical tests, taking into account stress distribution and fracture load of monolithic lithium disilicate crowns. Thirty lithium disilicate monolithic crowns were randomized into three groups according to the supporting set: implant + abutment (IA); implant analog + abutment (IAA); abutment analog (AA). The specimens were subjected to mechanical fatigue (10⁶ cycles, 200 N, 2 Hz) and thermal fatigue (10⁴ cycles, 5–55 °C). A final compression load was applied and the maximum fracture load was recorded. Data were analyzed using one-way ANOVA (α = 0.05). The experiment was validated by finite element analysis and the maximum principal stress was recorded. No statistically significant difference was observed in the mean fracture load among groups (P > 0.05). The failure mode was similar for all groups, with the origin of crack propagation located at the point of load application. Finite element analysis showed similar stress distribution and stress peak values for all groups. The use of implant or abutment analogs does not influence the fracture load and stress distribution of cemented implant-supported crowns. - Highlights: • A less costly methodology for evaluating implant-supported crowns is proposed. • Substitution of the implant or abutment by their analogs is suggested. • The outcomes of fracture load are not influenced by these replacements.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… We analyze the natural algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
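First-Fit-Increasing and First-Fit-Decreasing differ only in the order in which items are offered to First-Fit. A minimal sketch showing that, on the same instance, the increasing order can open more bins, which is the desirable outcome in the maximum resource variant:

```python
# First-Fit with a chosen order: sort ascending (FFI) or descending (FFD),
# then put each item in the first bin with room. On this instance FFI opens
# four bins and FFD three; in the maximum resource variant, where the goal is
# to use MANY bins, the increasing order is the better heuristic.

def first_fit(items, capacity, increasing=True):
    bins = []                                # each bin is a list of item sizes
    for item in sorted(items, reverse=not increasing):
        for b in bins:
            if sum(b) + item <= capacity:    # first bin that still has room
                b.append(item)
                break
        else:
            bins.append([item])              # no bin fits: open a new one
    return bins

sizes = [4, 4, 4, 6, 6, 6]                   # bin capacity 10
print(len(first_fit(sizes, 10, increasing=True)))   # → 4
print(len(first_fit(sizes, 10, increasing=False)))  # → 3
```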
Shower maximum detector for SDC calorimetry
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
SSERVI Analog Regolith Simulant Testbed Facility
Minafra, Joseph; Schmidt, Gregory; Bailey, Brad; Gibbs, Kristina
2016-10-01
The Solar System Exploration Research Virtual Institute (SSERVI) at NASA's Ames Research Center in California's Silicon Valley was founded in 2013 to act as a virtual institute that provides interdisciplinary research centered on the goals of its supporting directorates: the NASA Science Mission Directorate (SMD) and the Human Exploration & Operations Mission Directorate (HEOMD). Primary research goals of the Institute revolve around the integration of science and exploration to gain the knowledge required for the future of human space exploration beyond low Earth orbit. SSERVI intends to leverage existing JSC-1A regolith simulant resources into the creation of a regolith simulant testbed facility. The purpose of this testbed concept is to provide the planetary exploration community with a readily available capability to test hardware and conduct research in a large simulant environment. SSERVI's goals include supporting planetary researchers within NASA and other government agencies; the private sector and hardware developers; competitors in focused prize design competitions; and academic-sector researchers. SSERVI provides opportunities for research scientists and engineers to study the effects of regolith analog testbed research in the planetary exploration field. This capability is essential to understanding the basic effects of continued long-term exposure to a simulated analog test environment. The current facility houses approximately eight tons of JSC-1A lunar regolith simulant in a test bin covering a 4 meter by 4 meter area, including dust mitigation and safety oversight. Facility hardware and environment testing scenarios could include: lunar surface mobility; dust exposure and mitigation; regolith handling and excavation; solar-like illumination; lunar surface compaction profile; lofted dust; mechanical properties of lunar regolith; and surface features (i.e., grades and rocks). Numerous benefits vary from easy access to a controlled analog regolith simulant testbed, and
Image magnification based on similarity analogy
Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua
2009-01-01
To address the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper, which has the advantage of iteration-free decoding, by using the similarity analogy between an image and its zoom-out and zoom-in. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining some existing fractal zooming techniques, an efficient image magnification algorithm is obtained, which provides image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.
Probleme bei der Digitalisierung analoger Messwerte
Plaßmann, Wilfried
Measured values are frequently available in analog form as voltage values. They are converted into a digitally coded form when (nearly) error-free transmission is required, when signal waveforms are to be stored, when further processing is to take place, or when measurements with very small measurement error are necessary. Here, some problems arising from this conversion are discussed from a measurement engineering point of view. Keywords: errors in digitization; signal-to-quantization-noise ratio; improvement of the signal-to-noise ratio; sample-and-hold element; aliasing; acquisition of instantaneous values.
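The signal-to-quantization-noise ratio among the keywords can be checked numerically against the textbook figure of about 6.02·N + 1.76 dB for a full-scale sine wave. A sketch with an ideal mid-tread quantizer and no circuit non-idealities:

```python
# Sketch: measured signal-to-quantization-noise ratio (SQNR) of an ideal
# mid-tread N-bit quantizer for a full-scale sine, compared with the textbook
# figure 6.02*N + 1.76 dB. No circuit non-idealities are modeled.

import math

def measured_sqnr(n_bits, samples=100000, cycles=997):
    full_scale = 1.0
    step = 2 * full_scale / (1 << n_bits)          # quantizer step size
    sig_pow = err_pow = 0.0
    for k in range(samples):
        x = full_scale * math.sin(2 * math.pi * cycles * k / samples)
        q = round(x / step) * step                 # ideal mid-tread quantizer
        sig_pow += x * x
        err_pow += (x - q) ** 2
    return 10 * math.log10(sig_pow / err_pow)

for n in (8, 12):
    print(f"{n} bits: ideal {6.02 * n + 1.76:.1f} dB, "
          f"measured {measured_sqnr(n):.1f} dB")
```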
Bosonic analog of the Klein paradox
Wagner, R. E.; Ware, M. R.; Su, Q.; Grobe, R.
2010-01-01
The standard Klein paradox describes how an incoming electron scatters off a supercritical electrostatic barrier that is so strong that it can generate electron-positron pairs. This fermionic system has been widely discussed in textbooks to illustrate some of the discrepancies between quantum mechanical and quantum field theoretical descriptions for the pair creation process. We compare the fermionic dynamics with that of the corresponding bosonic system. We point out that the direct counterpart of the Pauli exclusion principle (the central mechanism to resolve the fermionic Klein paradox) is stimulated emission, which leads to the resolution of the analogous bosonic paradox.
Novel Gemini vitamin D3 analogs
Okamoto, Ryoko; Gery, Sigal; Kuwayama, Yoshio
2014-01-01
…anticancer potency, but similar toxicity causing hypercalcemia. We focused on the effect of these compounds on the stimulation of expression of human cathelicidin antimicrobial peptide (CAMP), whose gene has a vitamin D response element in its promoter. Expression of CAMP mRNA and protein increased in a dose-response fashion after exposure of acute myeloid leukemia (AML) cells to the Gemini analog, BXL-01-126, in vitro. A xenograft model of AML was developed using U937 AML cells injected into NSG-immunodeficient mice. Administration of vitamin D3 compounds to these mice resulted in substantial levels of CAMP…
Development of analog watch with minute repeater
Okigami, Tomio; Aoyama, Shigeru; Osa, Takashi; Igarashi, Kiyotaka; Ikegami, Tomomi
A large-scale-integration complementary metal oxide semiconductor circuit was developed for an electronic minute repeater. It is equipped with a synthetic struck-sound circuit to generate the natural struck sound necessary for the minute repeater. This circuit consists of an envelope-curve drawing circuit, a frequency mixer, a polyphonic mixer, and a booster circuit built using analog circuit technology. The large-scale integration is a single-chip microcomputer with motor drivers and input ports in addition to the synthetic struck-sound circuit, and it makes it possible to build an electronic minute-repeater system at very low cost in comparison with the conventional type.
Analog to digital conversion for nuclear spectrometry
Carvalho, P.V.R. de.
1982-04-01
A study of analog-to-digital conversion techniques for nuclear spectrometry is presented and the main design philosophies of nuclear ADCs are compared. Among them, the most suitable for current Brazilian conditions, concerning specifications and component availability, is the one that employs a statistical correction of successive approximation converters. This technique is described in full detail. A prototype has been developed and tested for practical demonstration of the theoretical conclusions. These tests were carried out on a nuclear spectrometry data acquisition system whose implementation is also described. (Author) [pt
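The statistical correction mentioned above is commonly implemented as a sliding-scale scheme: a random, known offset is added in the analog domain and subtracted digitally, spreading fixed code-width errors across many codes. A sketch with an idealized converter standing in for the real circuit (an assumption; the thesis's actual design is not reproduced here):

```python
# Sketch of sliding-scale (statistical) correction: add a random, known offset
# before conversion and subtract it digitally afterwards, so that fixed
# code-width errors are averaged over many codes. The converter here is
# idealized; the thesis's actual circuit is not reproduced.

import random

def ideal_sar(v, v_ref=1.0, n_bits=10):
    """Idealized 10-bit converter used as a stand-in for the real ADC."""
    code = int(v / v_ref * (1 << n_bits))
    return max(0, min(code, (1 << n_bits) - 1))

def sliding_scale_convert(v, offset_bits=4, v_ref=1.0, n_bits=10):
    r = random.randrange(1 << offset_bits)     # random offset code
    v_off = r * v_ref / (1 << n_bits)          # same offset in the analog domain
    raw = ideal_sar(v + v_off, v_ref, n_bits)  # convert the shifted input
    return raw - r                             # remove the offset digitally

# With an ideal converter the corrected result matches a direct conversion;
# with a real one, differential non-linearity gets smeared across codes.
print(sliding_scale_convert(0.5))  # → 512
```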
The computation of fixed points and applications
Todd, Michael J
1976-01-01
Fixed-point algorithms have diverse applications in economics, optimization, game theory and the numerical solution of boundary-value problems. Since Scarf's pioneering work [56,57] on obtaining approximate fixed points of continuous mappings, a great deal of research has been done in extending the applicability and improving the efficiency of fixed-point methods. Much of this work is available only in research papers, although Scarf's book [58] gives a remarkably clear exposition of the power of fixed-point methods. However, the algorithms described by Scarf have been superseded by the more sophisticated restart and homotopy techniques of Merrill [~8,~9] and Eaves and Saigal [1~,16]. To understand the more efficient algorithms one must become familiar with the notions of triangulation and simplicial approximation, whereas Scarf stresses the concept of primitive set. These notes are intended to introduce to a wider audience the most recent fixed-point methods and their applications. Our approach is therefore ...
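The simplest member of this family, successive approximation for a contraction mapping, can be sketched directly; the simplicial and homotopy methods these notes cover handle the general continuous case that this iteration cannot:

```python
# Sketch: successive approximation for a contraction mapping, the simplest
# fixed-point computation. Scarf-type simplicial and homotopy methods treat
# the general continuous case that this iteration cannot.

import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence: f may not be a contraction")

# Usage: x = cos(x) has a unique fixed point near 0.739085.
print(round(fixed_point(math.cos, 1.0), 6))  # → 0.739085
```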
Fixed point theory in metric type spaces
Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco
2015-01-01
Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...
Ecological consequences of the expansion of N2-fixing plants in cold biomes
Hiltbrunner, Erika; Aerts, Rien; Bühlmann, Tobias; Huss-Danell, Kerstin; Magnusson, Borgthor; Myrold, David D.; Reed, Sasha C.; Sigurdsson, Bjarni D.; Körner, Christian
2014-01-01
Research in warm-climate biomes has shown that invasion by symbiotic dinitrogen (N2)-fixing plants can transform ecosystems in ways analogous to the transformations observed as a consequence of anthropogenic, atmospheric nitrogen (N) deposition: declines in biodiversity, soil acidification, and alterations to carbon and nutrient cycling, including increased N losses through nitrate leaching and emissions of the powerful greenhouse gas nitrous oxide (N2O). Here, we used literature review and case study approaches to assess the evidence for similar transformations in cold-climate ecosystems of the boreal, subarctic and upper montane-temperate life zones. Our assessment focuses on the plant genera Lupinus and Alnus, which have become invasive largely as a consequence of deliberate introductions and/or reduced land management. These cold biomes are commonly located in remote areas with low anthropogenic N inputs, and the environmental impacts of N2-fixer invasion appear to be as severe as those from anthropogenic N deposition in highly N polluted areas. Hence, inputs of N from N2 fixation can affect ecosystems as dramatically or even more strongly than N inputs from atmospheric deposition, and biomes in cold climates represent no exception with regard to the risk of being invaded by N2-fixing species. In particular, the cold biomes studied here show both a strong potential to be transformed by N2-fixing plants and a rapid subsequent saturation in the ecosystem’s capacity to retain N. Therefore, analogous to increases in N deposition, N2-fixing plant invasions must be deemed significant threats to biodiversity and to environmental quality.
A bilateral frontoparietal network underlies visuospatial analogical reasoning.
Watson, Christine E; Chatterjee, Anjan
2012-02-01
Our ability to reason by analogy facilitates problem solving and allows us to communicate ideas efficiently. In this study, we examined the neural correlates of analogical reasoning and, more specifically, the contribution of rostrolateral prefrontal cortex (RLPFC) to reasoning. This area of the brain has been hypothesized to integrate relational information, as in analogy, or the outcomes of subgoals, as in multi-tasking and complex problem solving. Using fMRI, we compared visuospatial analogical reasoning to a control task that was as complex and difficult as the analogies and required the coordination of subgoals but not the integration of relations. We found that analogical reasoning more strongly activated bilateral RLPFC, suggesting that anterior prefrontal cortex is preferentially recruited by the integration of relational knowledge. Consistent with the need for inhibition during analogy, bilateral, and particularly right, inferior frontal gyri were also more active during analogy. Finally, greater activity in bilateral inferior parietal cortex during the analogy task is consistent with recent evidence for the neural basis of spatial relation knowledge. Together, these findings indicate that a network of frontoparietal areas underlies analogical reasoning; we also suggest that hemispheric differences may emerge depending on the visuospatial or verbal/semantic nature of the analogies. Copyright © 2011 Elsevier Inc. All rights reserved.
Elucidating the neurotoxic effects of MDMA and its analogs.
Karuppagounder, Senthilkumar S; Bhattacharya, Dwipayan; Ahuja, Manuj; Suppiramaniam, Vishnu; Deruiter, Jack; Clark, Randall; Dhanasekaran, Muralikrishnan
2014-04-17
There is a rapid increase in the use of methylenedioxymethamphetamine (MDMA) and its structural congeners/analogs globally. MDMA and MDMA-analogs have been synthesized illegally in furtive dwellings and are abused due to their addictive potential. Furthermore, MDMA and MDMA-analogs have been shown to induce several adverse effects. Hence, understanding the mechanisms mediating the neurotoxic insult of MDMA-analogs is of immense importance for public health worldwide. We synthesized and investigated the neurotoxic effects of MDMA and its analogs [3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA), and N-ethyl-3,4-methylenedioxyamphetamine (MDEA)]. The stimulatory, dopaminergic-agonist effects of MDMA and MDMA-analogs were elucidated using the established 6-hydroxydopamine-lesioned animal model. Additionally, we investigated the neurotoxic mechanisms of MDMA and MDMA-analogs on mitochondrial complex-I activity and reactive oxygen species generation. MDMA and MDMA-analogs exhibited stimulatory activity as compared to amphetamines and also induced several behavioral changes in the rodents. MDMA and MDMA-analogs enhanced reactive oxygen species generation and inhibited mitochondrial complex-I activity, which can lead to neurodegeneration. Hence, as a mechanism of neurotoxicity, MDMA and MDMA-analogs can enhance the release of monoamines, alter monoaminergic neurotransmission, and augment oxidative stress and mitochondrial abnormalities leading to neurotoxicity. Thus, our study will help in developing effective pharmacological and therapeutic approaches for the treatment of MDMA and MDMA-analog abuse. Copyright © 2014 Elsevier Inc. All rights reserved.
Goursky, V.; Thenes, P.
1969-01-01
This multipurpose unit is designed to accomplish one of the following functions: gated window amplifier, analog memory, and amplitude-to-time converter. The first function is mainly devoted to improving the poor resolution of pulse-height analyzers with a small number of channels. The analog memory, a new function in the standard range of plug-in modules, is capable of performing a number of operations: 1) fixed delay, or variable delay dependent on an external parameter (application to the analog processing of non-coincident pulses); 2) de-randomiser, to increase the efficiency of pulse-height analysis in a spectrometry experiment; 3) linear multiplexer, to allow an analyser to serve as many spectrometry devices as it possesses memory elements. Associated with a coding scaler, this unit, if used as an amplitude-to-time converter, constitutes a Wilkinson ADC with a capability of 10 bits (or more) and a 100 MHz clock frequency. (authors) [fr
Design choices and issues in fixed-target B experiments
Camilleri, L.
1993-01-01
The main priority of any experiment on B physics in the years to come will be an endeavour to observe CP violation in the B sector. Such measurements imply the following requirements of the experiment. Trigger: a muon trigger will be sensitive to J/ψ reactions and muon tags; an electron trigger will double the number of lepton events; in order to include kaon tags and self-tagging reactions, the experiment must not rely entirely on lepton triggers. Secondary-vertex triggers and hadron p_T triggers should be included in order to have the maximum flexibility. Detector: vertex detector; particle identification; good momentum resolution; electromagnetic and hadronic calorimeters; muon detector. In addition the following issues have to be addressed: collider or fixed-target mode? If fixed target, extracted beam or internal target? If internal target, gas jet or wire target? If a gas jet, hydrogen or a heavy gas? Beam pipe design. Silicon microvertex design and radiation damage. K_s^0 decay path. Particle identification. Momentum resolution. Order of detectors. No single method stands out as the "obvious one." An extracted beam yields better vertex resolution and an internal target easier triggering. A flexible and diverse triggering scheme is of prime importance; in order to be sensitive to as many reactions as possible, the experiment should not be limited to lepton triggers only. Proposed experiments (P867, HERA-B) at existing machines will be invaluable for testing new devices and strategies for the LHC and SSC experiments
Speed control at low wind speeds for a variable speed fixed pitch wind turbine
Rosmin, N.; Watson, S.J.; Tompson, M. [Loughborough Univ., Loughborough, Leicestershire (United Kingdom)
2010-03-09
In a fixed-pitch, variable-speed, stall-regulated wind turbine, maximum power regulation below rated wind speed is achieved by changing the rotor/generator speed over a large frequency range. To capture maximum power, the power coefficient is kept at its peak by maintaining the tip speed ratio at its optimum value. The wind industry is moving from stall-regulated fixed-speed wind turbines to newer, improved, innovative versions with better reliability. While a stall-regulated fixed-pitch wind turbine is among the most cost-effective wind turbines on the market, its problems include noise, severe vibrations, high thrust loads and low power efficiency. Therefore, in order to address these drawbacks, the generator speed is made flexible so that it can be controlled over a variable-speed range. This paper discussed the development of a simulation model which represented the behaviour of a stall-regulated variable-speed wind turbine in the low wind speed control region by using closed-loop scalar control with an adjustable speed drive. The paper provided a description of each sub-model in the wind turbine system and described the scalar control of the induction machine. It was concluded that by using a constant voltage/frequency ratio control on the generator's stator side, the generator speed could be regulated and the generator torque could be controlled to ensure the power coefficient could be maintained close to its maximum value. 38 refs., 1 tab., 10 figs.
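The constant voltage/frequency control idea in this abstract can be sketched in a few lines: holding the tip speed ratio at its optimum fixes a target rotor speed for each wind speed, and the scalar drive commands stator frequency and voltage in a fixed ratio to pull the induction generator toward that speed. All turbine and machine parameters below are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative sketch of tip-speed-ratio tracking with scalar V/f control.
# Every parameter here is an assumption for illustration; the paper's model
# details (Cp curve, machine ratings, gearbox) are not reproduced.

LAMBDA_OPT = 7.0          # assumed optimum tip speed ratio
ROTOR_RADIUS = 2.5        # m, assumed
POLE_PAIRS = 2            # assumed machine pole pairs
GEARBOX_RATIO = 10.0      # assumed
V_PER_HZ = 230.0 / 50.0   # constant voltage/frequency ratio (V/Hz), assumed

def target_rotor_speed(wind_speed):
    """Rotor speed (rad/s) that keeps the tip speed ratio at its optimum."""
    return LAMBDA_OPT * wind_speed / ROTOR_RADIUS

def vf_command(wind_speed):
    """Stator frequency (Hz) and voltage (V) for the scalar (V/f) drive."""
    omega_gen = target_rotor_speed(wind_speed) * GEARBOX_RATIO  # generator rad/s
    freq = omega_gen * POLE_PAIRS / (2 * math.pi)               # electrical Hz
    volts = V_PER_HZ * freq                                     # keep V/f constant
    return freq, volts
```

As the wind speed input changes, the commanded frequency scales with it while the V/f ratio stays fixed, which is the essence of the stator-side scalar control the abstract concludes with.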
Fracture analysis of randomized implant-supported fixed dental prostheses
Esquivel-Upshaw, Josephine F.; Mehler, Alex; Clark, Arthur E.; Neal, Dan; Anusavice, Kenneth J.
2014-01-01
Objective Fractures of posterior all-ceramic fixed dental prostheses can be caused by one or more factors, including prosthesis design, flaw distribution, direction and magnitude of occlusal loading, nature of supporting infrastructure (tooth root/implant), and presence of adjacent teeth. This clinical study of implant-supported, all-ceramic fixed dental prostheses determined the effects of (1) presence of a tooth distal to the most distal retainer; (2) prosthesis loading along either the non-load-bearing or load-bearing areas; (3) presence of excursive contacts or maximum intercuspation contacts in the prosthesis; and (4) magnitude of bite force on the occurrence of veneer ceramic fracture. Methods 89 implant-supported FDPs were randomized as either a three-unit posterior metal-ceramic (Au-Pd-Ag alloy and InLine POM, Ivoclar Vivadent) FDP or a ceramic-ceramic (ZirCAD and ZirPress, Ivoclar Vivadent) FDP. Two implants (Osseospeed, Dentsply) and custom abutments (Atlantis, Dentsply) supported these FDPs, which were cemented with resin cement (RelyX Universal Cement). Baseline photographs were made with markings of teeth from maximum intercuspation (MI) and excursive function. Patients were recalled at 6 months and 1 to 3 years. Fractures were observed, their locations recorded, and images compared with baseline photographs of occlusal contacts. Conclusion No significant relationship exists between the occurrence of fracture and: (1) the magnitude of bite force; (2) a tooth distal to the most distal retainer; and (3) contacts in load-bearing or non-load-bearing areas. However, there was a significantly higher likelihood of fracture in areas with MI contacts only. Clinical Significance This clinical study demonstrates that there is a need to evaluate occlusion differently with implant-supported prostheses than with natural tooth supported prostheses because of the absence of a periodontal ligament. Implant supported prostheses should have minimal occlusion and
Nonsymmetric entropy and maximum nonsymmetric entropy principle
Liu Chengshi
2009-01-01
Under the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
correlation between maximum dry density and cohesion
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
The maximum-entropy method in superspace
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Vol. 59 (2003), pp. 459-469. ISSN 0108-7673. Grant - others: DFG(DE) XX. Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method; aperiodic crystals; electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.
A fixed target facility at the SSC
Loken, S.; Morfin, J.G.
1984-01-01
The question of whether a facility for fixed target physics should be provided at the SSC must be answered before the final technical design of the SSC can be completed, particularly if the eventual form of extraction would influence the magnet design. To this end, an enthusiastic group of experimentalists, theoreticians and accelerator specialists have studied this point. The accelerator physics issues were addressed by a group whose report is contained in these proceedings. The physics addressable by fixed target was considered by many of the Physics area working groups and in particular by the Structure Function Group. This report is the summary of the working group which considered various SSC fixed target experiments and determined which types of beams and detectors would be required
Fixed point algebras for easy quantum groups
Gabriel, Olivier; Weber, Moritz
2016-01-01
Compact matrix quantum groups act naturally on Cuntz algebras. The first author isolated certain conditions under which the fixed point algebras under this action are Kirchberg algebras. Hence they are completely determined by their K-groups. Building on prior work by the second author, we prove that free easy quantum groups satisfy these conditions and we compute the K-groups of their fixed point algebras in a general form. We then turn to examples such as the quantum permutation group S_n^+, the free orthogonal quantum group O_n^+ and the quantum reflection groups H_n^{s+}. Our fixed point algebra construction provides concrete examples of free actions of free orthogonal easy quantum groups, which are related to Hopf-Galois extensions.
Fixed target facility at the SSC
Loken, S.C.; Morfin, J.G.
1985-01-01
The question of whether a facility for fixed target physics should be provided at the SSC must be answered before the final technical design of the SSC can be completed, particularly if the eventual form of extraction would influence the magnet design. To this end, an enthusiastic group of experimentalists, theoreticians and accelerator specialists have studied this point. The accelerator physics issues were addressed by a group led by E. Colton whose report is contained in these proceedings. The physics addressable by fixed target was considered by many of the Physics area working groups and in particular by the Structure Function Group. This report is the summary of the working group which considered various SSC fixed target experiments and determined which types of beams and detectors would be required. 13 references, 5 figures.
Fixed-site physical protection system modeling
Chapman, L.D.
1975-01-01
An evaluation of a fixed-site safeguard security system must consider the interrelationships of barriers, alarms, on-site and off-site guards, and their effectiveness against a forcible adversary attack intended to commit an act of sabotage or theft. A computer model has been developed at Sandia Laboratories for the evaluation of alternative fixed-site security systems. Trade-offs involving on-site and off-site response forces and response times, perimeter alarm systems, barrier configurations, and varying levels of threat can be analyzed. The computer model provides a framework for performing inexpensive experiments on fixed-site security systems for testing alternative decisions, and for determining the relative cost effectiveness associated with these decision policies.
How the creative use of analogies can shape medical practice.
Prasad, G V Ramesh
2015-06-01
Analogical reasoning is central to medical progress, and is either creative or conservative. According to Hofmann et al., conservative analogy relates concepts from old technology to new technologies with emphasis on preservation of comprehension and conduct. Creative analogy, however, brings new understanding to new technology, brings similarities existing in the source domain to a target domain where they previously had no bearing, and imports something entirely different from the content of the analogy itself. I defend the claim that while conservative analogies are useful by virtue of being comfortable to use from familiarity and experience, and are more easily accepted by society, they only lead to incremental advances in medicine. However, creative analogies are more exciting and productive because they generate previously unexpected associations across widely separated domains, and emphasize relations over physical similarities, and structure over superficiality. I use kidney transplantation and anti-rejection medication development as an exemplar of analogical reasoning used to improve medical practice. Anti-rejection medication has not helped highly sensitized patients because of their propensity to reject most organs. I outline how conservative analogical reasoning led to anti-rejection medication development, but creative analogical reasoning helped highly sensitized and blood type incompatible patients through domino transplants, by which they obtain a kidney to which they are not sensitized. Creative analogical reasoning is more likely than conservative analogical reasoning to lead to revolutionary progress. While these analogies overlap and creative analogies eventually become conservative, progress is best facilitated by combining conservative and creative analogical reasoning. © 2015 John Wiley & Sons, Ltd.
Discrete stochastic analogs of Erlang epidemic models.
Getz, Wayne M; Dougherty, Eric R
2018-12-01
Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE^mI^nR models with m concatenated E compartments and n concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE^mI^nR models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges, but sacrifices best-fit likelihood scores by at least 7%.
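A minimal deterministic sketch can make the chained-compartment structure concrete: m concatenated E stages and n concatenated I stages give Erlang-like residence times. This is a generic fixed-rate discretization for illustration, not the authors' time-to-go formulation, and all parameter values are assumed.

```python
# Deterministic discrete-time SE(m)I(n)R sketch with m concatenated E stages
# and n concatenated I stages. Stage outflow rates are scaled by the stage
# count so mean residence times stay 1/sigma and 1/gamma. All parameters
# (beta, sigma, gamma, population) are illustrative assumptions.

def step(S, E, I, R, beta, sigma, gamma, N):
    """Advance one time step; E and I are lists of stage occupancies."""
    m, n = len(E), len(I)
    new_inf = beta * S * sum(I) / N           # new exposures this step
    outE = [sigma * m * e for e in E]         # per-stage E outflow
    outI = [gamma * n * i for i in I]         # per-stage I outflow
    S2 = S - new_inf
    E2 = [E[0] + new_inf - outE[0]] + \
         [E[k] + outE[k - 1] - outE[k] for k in range(1, m)]
    I2 = [I[0] + outE[-1] - outI[0]] + \
         [I[k] + outI[k - 1] - outI[k] for k in range(1, n)]
    R2 = R + outI[-1]
    return S2, E2, I2, R2

# Run a short epidemic with two E stages and two I stages.
S, E, I, R = 990.0, [10.0, 0.0], [0.0, 0.0], 0.0
N = S + sum(E) + sum(I) + R
for _ in range(100):
    S, E, I, R = step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1, N=N)
```

By construction each step conserves the total population S + ΣE + ΣI + R, which is a quick sanity check when fitting such chains to outbreak data.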
Endpoint distinctiveness facilitates analogical mapping in pigeons.
Hagmann, Carl Erick; Cook, Robert G
2015-03-01
Analogical thinking necessitates mapping shared relations across two separate domains. We investigated whether pigeons could learn faster with ordinal mapping of relations across two physical dimensions (circle size & choice spatial position) relative to random mapping of these relations. Pigeons were trained to relate six circular samples of different sizes to horizontally positioned choice locations in a six alternative matching-to-sample task. Three pigeons were trained in a mapped condition in which circle size mapped directly onto choice spatial position. Three other pigeons were trained in a random condition in which the relations between size and choice position were arbitrarily assigned. The mapped group showed an advantage over the random group in acquiring this task. In a subsequent second phase, relations between the dimensions were ordinally reversed for the mapped group and re-randomized for the random group. There was no difference in how quickly matching accuracy re-emerged in the two groups, although the mapped group eventually performed more accurately. Analyses suggested this mapped advantage was likely due to endpoint distinctiveness and the benefits of proximity errors during choice responding rather than a conceptual or relational advantage attributable to the common or ordinal mapping of the two dimensions. This potential difficulty in mapping relations across dimensions may limit the pigeons' capacity for more advanced types of analogical reasoning. This article is part of a Special Issue entitled: Tribute to Tom Zentall. Copyright © 2014 Elsevier B.V. All rights reserved.
Endpoint Distinctiveness Facilitates Analogical Mapping in Pigeons
Hagmann, Carl Erick; Cook, Robert G.
2015-01-01
Analogical thinking necessitates mapping shared relations across two separate domains. We investigated whether pigeons could learn faster with ordinal mapping of relations across two physical dimensions (circle size & choice spatial position) relative to random mapping of these relations. Pigeons were trained to relate six circular samples of different sizes to horizontally positioned choice locations in a six alternative matching-to-sample task. Three pigeons were trained in a mapped condition in which circle size mapped directly onto choice spatial position. Three other pigeons were trained in a random condition in which the relations between size and choice position were arbitrarily assigned. The mapped group showed an advantage over the random group in acquiring this task. In a subsequent second phase (reassignment), relations between the dimensions were ordinally reversed for the mapped group and re-randomized for the random group. There was no difference in how quickly matching accuracy re-emerged in the two groups, although the mapped group eventually performed more accurately. Analyses suggested this mapped advantage was likely due to endpoint distinctiveness and the benefits of proximity errors during choice responding rather than a conceptual or relational advantage attributable to the common or ordinal mapping of the two dimensions. This potential difficulty in mapping relations across dimensions may limit the pigeons’ capacity for more advanced types of analogical reasoning. PMID:25447511
Merging Galaxy Clusters: Analysis of Simulated Analogs
Nguyen, Jayke; Wittman, David; Cornell, Hunter
2018-01-01
The nature of dark matter can be better constrained by observing merging galaxy clusters. However, uncertainty in the viewing angle leads to uncertainty in dynamical quantities such as 3-d velocities, 3-d separations, and time since pericenter. The classic timing argument links these quantities via equations of motion, but neglects effects of nonzero impact parameter (i.e. it assumes velocities are parallel to the separation vector), dynamical friction, substructure, and larger-scale environment. We present a new approach using n-body cosmological simulations that naturally incorporate these effects. By uniformly sampling viewing angles about simulated cluster analogs, we see projected merger parameters in the many possible configurations of a given cluster. We select comparable simulated analogs and evaluate the likelihood of particular merger parameters as a function of viewing angle. We present viewing angle constraints for a sample of observed mergers including the Bullet cluster and El Gordo, and show that the separation vectors are closer to the plane of the sky than previously reported.
The gravitational analog of Faraday's induction law
Zile, Daniel; Overduin, James
2015-04-01
Michael Faraday, the discoverer of electromagnetic induction, was convinced that there must also be a gravitational analog of this law, and he carried out drop-tower experiments in 1849 to look for the electric current induced in a coil by changes in gravitational flux through the coil. This work, now little remembered, was in some ways the first investigation of what we would now call a unified-field theory. We revisit Faraday's experiments in the light of current knowledge and ask what might be learned if they were to be performed today. We then review the gravitational analog for Faraday's law that arises within the vector (or gravito-electromagnetic) approximation to Einstein's theory of general relativity in the weak-field, low-velocity limit. This law relates spinning masses and induced ``mass currents'' rather than spinning charges and electric currents, but is otherwise remarkably similar to its electromagnetic counterpart. The predicted effects are completely unobservable in everyday settings like those envisioned by Faraday, but are thought to be relevant in astrophysical contexts like the accretion disks around collapsed stars, thus bearing out Faraday's remarkable intuition.
The danger of fixed drug combinations.
Herxheimer, H
1975-07-01
After the second world war a number of pharmaceutical firms which were not able to create new therapeutic substances by their own research put a great number of fixed drug combinations on the market. Their number quickly increased, as the efficacy of these compounds required no legal proof and as, with appropriate propaganda, large profits could be earned. The number of firms doing this sort of production also increased, and in West Germany, for instance, more than 3/4 of all drugs on the official list are now fixed combinations. Our task is, therefore, to ask for regulations which limit fixed combinations to such preparations whose efficacy has been shown and whose advantages more than outweigh their disadvantages. The advantages of these preparations are convenience to the patient, avoidance of potential mistakes made possible by too many drugs given on the same day and, perhaps, lower prices. The disadvantages are: 1. The individual optimum dose for a patient cannot be achieved, because in case of a change of dose all components are changed. 2. Different components may have different durations of action. 3. Different components may have different bioavailability. 4. Different components may interact. 5. Some components may create tolerance, others not. In many cases fixed combinations have been used to make drugs with poor efficacy financially viable by combining them with very effective drugs. The existence of thousands of fixed combinations makes the drug market indiscernible and useless. They obscure the relatively few essential drugs and make it difficult for the doctor to find his way amongst the mass of offered medicaments. Few fixed combinations are justifiable. These are well known and they should be permitted as before. All others should be banned until it has been shown that their advantages are greater than their disadvantages.
A novel algorithm for single-axis maximum power generation sun trackers
Lee, Kung-Yen; Chung, Chi-Yao; Huang, Bin-Juine; Kuo, Ting-Jung; Yang, Huang-Wei; Cheng, Hung-Yen; Hsu, Po-Chien; Li, Kang
2017-01-01
Highlights: • A novel algorithm for a single-axis sun tracker is developed to increase the efficiency. • Photovoltaic module is rotated to find the optimal angle for generating the maximum power. • Electric energy increases up to 8.3%, compared with that of the tracker with three fixed angles. • The rotation range is optimized to reduce energy consumption from the rotation operations. - Abstract: The purpose of this study is to develop a novel algorithm for a single-axis maximum power generation sun tracker in order to identify the optimal stopping angle for generating the maximum amount of daily electric energy. First, the photovoltaic modules of the single-axis maximum power generation sun tracker are automatically rotated from 50° east to 50° west. During the rotation, the instantaneous power generated at different angles is recorded and compared, meaning that the optimal angle for generating the maximum power can be determined. Once the rotation (detection) is completed, the photovoltaic modules are then rotated to the resulting angle for generating the maximum power. The photovoltaic module is rotated once per hour in an attempt to detect the maximum irradiation and overcome the impact of environmental effects such as shading from cloud cover, other photovoltaic modules and surrounding buildings. Furthermore, the detection range is halved so as to reduce the energy consumption from the rotation operations and to improve the reliability of the sun tracker. The results indicate that electric energy production is increased by 3.4% in spring and autumn, 5.4% in summer, and 8.3% in winter, compared with that of the same sun tracker with three fixed angles of 50° east in the morning, 0° at noon and 50° west in the afternoon.
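The sweep-detect-hold cycle described in this abstract reduces to a simple loop: rotate across the detection range, record instantaneous power at each stop, then park at the angle that produced the maximum. The cosine power model, step size, and angle limits below are assumptions for illustration, not the paper's hardware values.

```python
import math

# Sketch of the hourly sweep-and-hold cycle: scan from 50° east (-50) to
# 50° west (+50), record power at each angle, park at the best one.
# The toy irradiance model and step size are illustrative assumptions.

def measured_power(angle_deg, sun_angle_deg=20.0, p_max=250.0):
    """Toy irradiance model: cosine response around the sun's position."""
    return max(0.0, p_max * math.cos(math.radians(angle_deg - sun_angle_deg)))

def sweep_for_best_angle(step_deg=5, east_limit=-50, west_limit=50):
    """Return the stopping angle that maximizes measured power."""
    best_angle, best_power = east_limit, -1.0
    angle = east_limit
    while angle <= west_limit:
        p = measured_power(angle)       # instantaneous power at this angle
        if p > best_power:
            best_angle, best_power = angle, p
        angle += step_deg
    return best_angle

best = sweep_for_best_angle()
```

Because the comparison uses measured power rather than a clock-based sun position, shading from clouds or nearby structures shifts the chosen angle automatically, which is the robustness property the abstract emphasizes. Halving the detection range, as the paper proposes, would simply tighten the loop limits around the previous best angle.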
Gauge-fixing ambiguity and monopole number
Hioki, S.; Miyamura, O.
1991-01-01
Gauge-fixing ambiguities of lattice SU(2) QCD are studied in the maximally abelian and unitary gauges. In the former, we find local maxima of a gauge-fixing function which may correspond to Gribov copies. There is a definite anti-correlation between the number of monopoles and the value of the function. Errors of measured quantities coming from the ambiguity are found to be less than inherent dispersion in the ensemble average. No ambiguity is found in the unitary gauges. (orig.)
Duan's fixed point theorem: Proof and generalization
Arkowitz Martin
2006-01-01
Full Text Available Let be an H-space of the homotopy type of a connected, finite CW-complex, any map and the th power map. Duan proved that has a fixed point if . We give a new, short and elementary proof of this. We then use rational homotopy to generalize to spaces whose rational cohomology is the tensor product of an exterior algebra on odd dimensional generators with the tensor product of truncated polynomial algebras on even dimensional generators. The role of the power map is played by a -structure as defined by Hemmi-Morisugi-Ooshima. The conclusion is that and each has a fixed point.
Fixed mass and scaling sum rules
Ward, B.F.L.
1975-01-01
Using the correspondence principle (continuity in dynamics), the approach of Keppell-Jones-Ward-Taha to fixed mass and scaling current algebraic sum rules is extended so as to consider explicitly the contributions of all classes of intermediate states. A natural, generalized formulation of the truncation ideas of Cornwall, Corrigan, and Norton is introduced as a by-product of this extension. The formalism is illustrated in the familiar case of the spin-independent Schwinger term sum rule. New sum rules are derived which relate the Regge residue functions of the respective structure functions to their fixed hadronic mass limits for q^2 → ∞. (Auth.)
Jens Pedersen
2012-12-01
Full Text Available Fix & Finish “WHO CAN FIX IT?” is an investigation of the needles left behind by drug users in Copenhagen’s Vesterbro district. Based on the praxiological methods developed by Annemarie Mol as well as processes of objectification as described by Daniel Miller, the used needle appears to be a multiple object that is related to opportunities, fear, good intentions and trash. This article is an invitation to study material culture and material practices as a part of semiotic and discursive analyses in order to sharpen a researcher’s analytical focus while remaining grounded in reality.
Compound Nucleus Reactions in LENR, Analogy to Uranium Fission
Hora, Heinrich; Miley, George; Philberth, Karl
2008-03-01
The discovery of nuclear fission by Hahn and Strassmann was based on a very rare microanalytical result that could not initially indicate the very complicated details of this most important process. A similarity is discussed for the low energy nuclear reactions (LENRs) with analogies to the yield structure found in measurements of uranium fission. The LENR product distribution measured earlier in a reproducible way in experiments with thin film electrodes and a high density deuteron concentration in palladium has several striking similarities with the uranium fission fragment yield curve [G.H. Miley and J.A. Patterson, J. New Energy 1, 11 (1996); G.H. Miley et al., Proc. ICCF6, p. 629 (1997)]. This comparison is specifically focused on the Maruhn-Greiner local maximum of the distribution within the large-scale minimum when the fission nuclei are excited. Implications for uranium fission are discussed in comparison with LENR, relative to the identification of fission via a hypothetical compound nucleus reaction through an element ^306X126 with double magic numbers.
FIXING SOCIAL NEEDS THROUGH INTEGRATED SYSTEMS
CARINA-ELENA STEGĂROIU
2012-03-01
Full Text Available Research addresses issues involving the development of new methods of reasoning, such as analogical reasoning, reasoning based on probability theory and decision theory, and reasoning from case studies. Modeling of the real world closely follows the line of development possibilities offered by computer programming. From unstructured programming to object-oriented programming, each stage has offered new methods of interpreting and modeling the world.
Awan, M.M.A.; Awan, F.G.
2017-01-01
Extraction of maximum power from a PV (Photovoltaic) cell is necessary to make the PV system efficient. Maximum power can be achieved by operating the system at the MPP (Maximum Power Point), i.e. taking the operating point of the PV panel to the MPP, and for this purpose MPPTs (Maximum Power Point Trackers) are used. There are many tracking algorithms/methods used by these trackers, which include incremental conductance, the constant voltage method, the constant current method, the short circuit current method, the PAO (Perturb and Observe) method, and the open circuit voltage method, but PAO is the most widely used algorithm because it is simple and easy to implement. The PAO algorithm has some drawbacks: one is low tracking speed under rapidly changing weather conditions, and the second is oscillation of the PV system's operating point around the MPP. Little improvement has been achieved in past papers regarding these issues. In this paper, a new method named the 'Decrease and Fix' method is introduced as an improvement to the PAO algorithm to overcome these issues of tracking speed and oscillations. The Decrease and Fix method is the first successful attempt with the PAO algorithm to achieve stability and speed up the tracking process in a photovoltaic system. A complete standalone photovoltaic system model with the improved perturb and observe algorithm is simulated in MATLAB Simulink. (author)
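The abstract does not detail the 'Decrease and Fix' modification itself, so only the baseline PAO hill-climb it improves on is sketched here; the P-V curve, starting voltage, and perturbation step are assumptions for illustration. The steady-state oscillation around the MPP that the paper targets is exactly what this loop exhibits.

```python
# Baseline perturb-and-observe (PAO) MPPT on an assumed single-peak P-V
# curve. The paper's "Decrease and Fix" refinement is not specified in the
# abstract; this sketches only the standard algorithm it modifies.

def pv_power(v, v_mpp=17.0, p_mpp=100.0):
    """Toy P-V curve with its maximum power point at v_mpp (assumed)."""
    return max(0.0, p_mpp - 0.8 * (v - v_mpp) ** 2)

def perturb_and_observe(v=12.0, step=0.5, iters=60):
    """Keep perturbing the voltage in whichever direction raised power."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_op = perturb_and_observe()      # settles into a small cycle around 17 V
```

With a fixed step the operating point never rests exactly at the MPP but cycles within one step of it; shrinking (and then fixing) that step near the peak is the kind of refinement the paper's name suggests, though its exact rule is not given in the abstract.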
Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui
2016-01-01
A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactants pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the aimed product SO2 and the optimal exterior wall temperature profile and reactants pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the production rate of SO2 in the optimal reactor with the maximum production rate increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactants pressure profile can significantly increase the production rate. The results obtained may provide some guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of the aimed product SO2 is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO2 of the optimal reactor increases by 7%.
1991-01-01
The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Synthetic Biology: A Unifying View and Review Using Analog Circuits.
Teo, Jonathan J Y; Woo, Sung Sik; Sarpeshkar, Rahul
2015-08-01
We review the field of synthetic biology from an analog circuits and analog computation perspective, focusing on circuits that have been built in living cells. This perspective is well suited to pictorially, symbolically, and quantitatively representing the nonlinear, dynamic, and stochastic (noisy) ordinary and partial differential equations that rigorously describe the molecular circuits of synthetic biology. This perspective enables us to construct a canonical analog circuit schematic that helps unify and review the operation of many fundamental circuits that have been built in synthetic biology at the DNA, RNA, protein, and small-molecule levels over nearly two decades. We review 17 circuits in the literature as particular examples of feedforward and feedback analog circuits that arise from special topological cases of the canonical analog circuit schematic. Digital circuit operation of these circuits represents a special case of saturated analog circuit behavior and is automatically incorporated as well. Many issues that have prevented synthetic biology from scaling are naturally represented in analog circuit schematics. Furthermore, the deep similarity between the Boltzmann thermodynamic equations that describe noisy electronic current flow in subthreshold transistors and noisy molecular flux in biochemical reactions has helped map analog circuit motifs in electronics to analog circuit motifs in cells and vice versa via a `cytomorphic' approach. Thus, a body of knowledge in analog electronic circuit design, analysis, simulation, and implementation may also be useful in the robust and efficient design of molecular circuits in synthetic biology, helping it to scale to more complex circuits in the future.
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
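The single-constraint maximization the abstract describes can be written compactly; the following is a standard sketch in assumed notation, not taken verbatim from the paper:

```latex
\max_{p}\; S[p] = -\sum_{x} p(x)\,\ln p(x)
\quad \text{subject to} \quad
\sum_{x} p(x) = 1, \qquad \sum_{x} p(x)\,\ln x = \mu .
```

Introducing Lagrange multipliers $\lambda$ and $\alpha$ for the two constraints and setting the variation of the Lagrangian with respect to $p(x)$ to zero gives $\ln p(x) = -1 - \lambda - \alpha \ln x$, i.e. $p(x) \propto x^{-\alpha}$: a pure power law from the single constraint on $\langle \ln x \rangle$.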
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
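One of the processes the abstract lists, the Ornstein-Uhlenbeck velocity process, can be simulated with a basic Euler-Maruyama scheme; the parameter values below are illustrative, not taken from the paper. The fluctuation-dissipation relation the authors invoke fixes the stationary variance at sigma^2/(2*theta).

```python
# Euler-Maruyama simulation of an Ornstein-Uhlenbeck velocity process:
#   dv = -theta * v * dt + sigma * dW
# Illustrative parameters only.
import math
import random

def simulate_ou(theta=1.0, sigma=1.0, dt=0.01, steps=200_000, seed=1):
    rng = random.Random(seed)
    v, samples = 0.0, []
    sqrt_dt = math.sqrt(dt)
    for _ in range(steps):
        v += -theta * v * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        samples.append(v)
    return samples

samples = simulate_ou()
# Discard a burn-in; stationary variance should approach sigma^2/(2*theta) = 0.5
tail = samples[10_000:]
var = sum(s * s for s in tail) / len(tail)
```

The empirical variance landing near 0.5 is the fluctuation-dissipation balance between the damping theta and the noise strength sigma.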
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works......
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
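The bottom-up pass of Fitch's algorithm, whose reconstruction accuracy the paper analyzes, can be sketched in a few lines. The tree encoding (nested tuples, leaves as state sets) and the toy character states are illustrative choices, not the paper's notation.

```python
# Minimal sketch of the bottom-up pass of Fitch's maximum parsimony
# algorithm on a rooted binary tree. A leaf is a set holding its observed
# character state; an internal node is a (left, right) tuple. Toy data only.

def fitch(node):
    """Return (state_set, change_count) for the subtree rooted at node."""
    if isinstance(node, set):                  # leaf: observed state(s)
        return node, 0
    left, right = node
    l_set, l_cost = fitch(left)
    r_set, r_cost = fitch(right)
    common = l_set & r_set
    if common:                                 # intersection: no extra change
        return common, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1  # union: count one change

# Four taxa with binary states: ((0, 0), (1, 0))
tree = (({0}, {0}), ({1}, {0}))
root_set, changes = fitch(tree)   # root set {0}, one state change in total
```

Dropping a taxon simply means pruning its leaf before running `fitch`, which is the operation whose effect on accuracy the paper studies.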
ASIC For Complex Fixed-Point Arithmetic
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
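The kind of operation such an ALU performs can be illustrated in software. The Q1.23 format and the rounding scheme below are assumptions for illustration; the record gives no bit-level specification of the actual ASIC.

```python
# Sketch of 24-bit fixed-point (Q1.23) complex multiplication, the kind of
# operation a complex fixed-point FFT/filter ALU performs. Format and
# rounding are illustrative assumptions, not the ASIC's documented behavior.

FRAC_BITS = 23                 # Q1.23: 1 sign bit, 23 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q1.23 values: full-width product, round, rescale."""
    return (a * b + (SCALE >> 1)) >> FRAC_BITS

def complex_fixed_mul(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi) with all operands in Q1.23."""
    real = fixed_mul(ar, br) - fixed_mul(ai, bi)
    imag = fixed_mul(ar, bi) + fixed_mul(ai, br)
    return real, imag

# Example: (0.5 + 0.25j) * (0.5 - 0.125j) = 0.28125 + 0.0625j
re, im = complex_fixed_mul(to_fixed(0.5), to_fixed(0.25),
                           to_fixed(0.5), to_fixed(-0.125))
```

In a hardware ALU the four partial products would run in parallel multipliers; the software sketch only shows the arithmetic, not the datapath.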
Empirical Studies on Sovereign Fixed Income Markets
J.G. Duyvesteyn (Johan)
2015-01-01
Abstract This dissertation presents evidence from five studies showing that sovereign fixed income markets are not always price efficient. The emerging local currency debt market has grown to a large size of more than 1.5 trillion US Dollars at the end of 2012. The factors
[Resin-bonded fixed partial dentures]
Kreulen, C.M.; Creugers, N.H.J.
2013-01-01
A resin-bonded fixed partial denture is a prosthetic construction which can replace 1 or several teeth in an occlusal system and which comprises a pontic element that is adhesively attached to 1 or more abutment teeth. To compensate for the limited shear strength of the adhesive layer, the fixed
Physics landscape-fixed target energies
Berger, E.L.
1989-10-01
An introductory review is presented of physics issues and opportunities at Fermilab fixed-target energies. Included are discussions of precision electroweak studies; deep inelastic lepton scattering; heavy quark production, spectroscopy, and decays; perturbative QCD; prompt photon production; massive lepton production; and spin dependence. 79 refs., 7 figs
Common Fixed Points for Weakly Compatible Maps
The purpose of this paper is to prove a common fixed point theorem, extending results from the class of compatible continuous maps to a larger class of weakly compatible maps without appeal to continuity, which generalizes the results of Jungck [3], Fisher [1], Kang and Kim [8], Jachymski [2], and Rhoades [9].
Raman imaging using fixed bandpass filter
Landström, L.; Kullander, F.; Lundén, H.; Wästerby, P.
2017-05-01
By using fixed narrow band pass optical filtering and scanning the laser excitation wavelength, hyperspectral Raman imaging could be achieved. Experimental, proof-of-principle results from the Chemical Warfare Agent (CWA) tabun (GA) as well as the common CWA simulant tributyl phosphate (TBP) on different surfaces/substrates are presented and discussed.
Fixed expressions and the production of idioms
Sprenger, S.A.
2003-01-01
This PhD-thesis explores the mental representations of Fixed Expressions (FEs). Chapter 1 gives an introduction to the field of FEs and provides an overview of Chapters 2-5. In Chapter 2, research on the frequency of Dutch FEs is reported. The results suggest that about 7% of written Dutch language
Management strategy 3: fixed rate fertilizer applications
Previous chapters outlined management strategies for pond fertilization that take into account specific individual pond nutrient needs. Those methods would most likely be more ecologically efficient than a pre-determined fixed-rate nutrient addition strategy. However, the vast majority of available ...
Some Generalizations of Jungck's Fixed Point Theorem
J. R. Morales
2012-01-01
Full Text Available We generalize Jungck's fixed point theorem for commuting mappings by means of the concepts of altering distance functions and compatible pairs of mappings, as well as by using contractive inequalities of integral type and contractive inequalities depending on another function.
Tunnel Diode Discriminator with Fixed Dead Time
Diamond, J. M.
1965-01-01
A solid state discriminator for the range 0.4 to 10 V is described. Tunnel diodes are used for the discriminator element and in a special fixed dead time circuit. An analysis of temperature stability is presented. The regulated power supplies are described, including a special negative resistance...
Fixed points and self-reference
Raymond M. Smullyan
1984-01-01
Full Text Available It is shown how Gödel's famous diagonal argument and a generalization of the recursion theorem are derivable from a common construction. The abstract fixed point theorem of this article is independent of both metamathematics and recursion theory and is perfectly comprehensible to the non-specialist.
Stress tolerant crops from nitrogen fixing trees
Becker, R.; Saunders, R.M.
1983-01-01
Notes are given on the nutritional quality and uses of: pods of Geoffroea decorticans, a species tolerant of saline and limed soils and saline water; seeds of Olneya tesota which nodulates readily and fixes nitrogen and photosynthesizes at low water potential; and pods of Prosopis chilensis and P. tamarugo which tolerate long periods without rain. 3 references.
Fixed Point Approach to Bagley Torvik Problem
Lale CONA
2017-10-01
Full Text Available In the present paper, a sufficient condition for the existence and uniqueness of solutions to the Bagley-Torvik problem is obtained. The theorem on existence and uniqueness is established. This approach permits us to use the fixed point iteration method to solve the problem for a differential equation involving derivatives of fractional order.
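The fixed point iteration method the abstract invokes can be sketched generically; the sketch below applies it to a scalar toy equation x = cos(x), not to the Bagley-Torvik equation itself, and the tolerance and iteration cap are illustrative.

```python
# Generic fixed-point iteration x_{k+1} = g(x_k), converging when g is a
# contraction. Demonstrated on the toy equation x = cos(x).
import math

def fixed_point_iterate(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- g(x) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

root = fixed_point_iterate(math.cos, 1.0)   # cos is a contraction near 0.74
```

For an integral-equation reformulation of a differential problem, `g` would map a function to a function rather than a scalar to a scalar, but the convergence argument is the same contraction principle.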
Radioisotope licence application: Fixed nuclear gauges
1995-09-01
This guide will assist you in completing and filing an application for a new licence or licence renewal for fixed nuclear gauges in accordance with the Atomic Energy Control Regulations and radioisotope licensing policies. It also provides some of the background information that you will require in order to safely use radioactive materials
The Deceptive Resilience of Fixed Exchange Rates
Mushin, Jerry
2004-01-01
This paper is an examination of the experience of exchange-rate systems since 1978. Despite the accelerating trend in favour of floating exchange rates, a substantial minority of IMF members have continued to fix the value of their currencies. The recent incidence of each of the principal types of exchange-rate peg is described.
Route Optimization for Offloading Congested Meter Fixes
Xue, Min; Zelinski, Shannon
2016-01-01
The Optimized Route Capability (ORC) concept proposed by the FAA helps traffic managers identify and resolve arrival flight delays caused by bottlenecks formed at arrival meter fixes when there is an imbalance between arrival fixes and runways. ORC makes use of the prediction capability of existing automation tools, monitors the traffic delays based on these predictions, and searches for the best reroutes upstream of the meter fixes based on the predictions and estimated arrival schedules when delays exceed a predefined threshold. Initial implementation and evaluation of the ORC concept considered only reroutes available at the time arrival congestion was first predicted. This work extends previous work by introducing an additional dimension in reroute options so that ORC can find the best time to reroute and overcome the 'first-come-first-reroute' phenomenon. To deal with the enlarged reroute solution space, a genetic algorithm was developed to solve this problem. Experiments were conducted using the same traffic scenario used in previous work, in which an arrival rush was created for one of the four arrival meter fixes at George Bush Intercontinental Houston Airport. Results showed the new approach further improved delay savings. The suggested route changes from the new approach were on average 30 minutes later than those from other approaches, and fewer reroutes were required. Fewer reroutes reduce operational complexity, and later reroutes help decision makers deal with uncertain situations.
A reliable, fast and low cost maximum power point tracker for photovoltaic applications
Enrique, J.M.; Andujar, J.M.; Bohorquez, M.A. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain)
2010-01-15
This work presents a new maximum power point tracker system for photovoltaic applications. The developed system is an analog version of the ''P and O-oriented'' algorithm. It maintains its main advantages: simplicity, reliability and easy practical implementation, and avoids its main disadvantages: inaccurateness and relatively slow response. Additionally, the developed system can be implemented in a practical way at a low cost, which means an added value. The system also shows an excellent behavior for very fast variations in incident radiation levels. (author)
Maximum entropy analysis of liquid diffraction data
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Automatic maximum entropy spectral reconstruction in NMR
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Maximum neutron flux at thermal nuclear reactors
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with the spatial distribution of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux reduces to a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the results being innovative, this approach is interesting because of the optimization procedure itself [sr
Hult, J; Mayer, S
2011-01-01
A general design of a laser light sheet module with adjustable focus is presented, where the maximum sheet width is preserved over a fixed region. In contrast, conventional focusing designs are associated with a variation in maximum sheet width with focal position. A four lens design is proposed here, where the first three lenses are employed for focusing, and the last for sheet expansion. A maximum sheet width of 1100 µm was maintained over a 50 mm long distance, for focal distances ranging from 75 to 500 mm, when a 532 nm laser beam with a beam quality factor M 2 = 29 was used for illumination
READ - Remote Analog ASIC Design System
Michael E. Auer
2006-11-01
Full Text Available The scope of this work is to present a solution for implementing a remote electronic laboratory for testing and designing analog ASICs (ispPAC10). The application allows users to create circuit schematics, upload the design to the device and perform measurements. The software used for designing circuits is the PAC-Designer, which runs on a Citrix server. The signals are generated and the responses are acquired by a data acquisition board controlled by LabView. The virtual instruments interact with ActiveX controls specially designed to look like real oscilloscope and function generator devices, and these controls represent the user interface of the lab. They give users control over the LabView VIs and access to their facilities in order to perform electronic exercises.
Nilpotent Quantum Mechanics: Analogs and Applications
Peter Marcer
2017-07-01
Full Text Available The most significant characteristic of nilpotent quantum mechanics is that the quantum system (fermion state) and its environment (vacuum) are, in mathematical terms, mirror images of each other. So a change in one automatically leads to corresponding changes in the other. We have used this characteristic as a model for self-organization, which has applications well beyond quantum physics. The nilpotent structure has also been identified as being constructed from two commutative vector spaces. This zero square-root construction has a number of identifiable characteristics which we can expect to find in systems where self-organization is dominant, and a case that presented itself after the publication of our paper on “The ‘Logic’ of Self-Organizing Systems” [1] is the organization of the neurons in the visual cortex. We expect to find many more complex systems where our general principles, based, by analogy, on nilpotent quantum mechanics, will apply.
Atwood and Poggendorff: an insightful analogy
Coelho, Ricardo; Borges, Paulo; Avelar Sotomaior Karam, Ricardo
2016-01-01
Atwood’s treatise, in which the Atwood machine appears, was published in 1784. About 70 years later, Poggendorff showed experimentally that the weight of an Atwood machine is reduced when it is brought to motion. In the present paper, a twofold connection between this experiment and the Atwood machine is established. Firstly, if the Poggendorff apparatus is taken as an ideal one, the equations of motion of the apparatus coincide with the equations of motion of the compound Atwood machine. Secondly, if the Poggendorff apparatus, which works as a lever, is rebalanced, the equations of this equilibrium provide us with the solution for a compound Atwood machine with the same bodies. This analogy is pedagogically useful because it illustrates a common strategy to transform a dynamic into a static situation, improving students’ comprehension of Newton’s laws and equilibrium.
Atwood and Poggendorff: an insightful analogy
Coelho, R. L.; Borges, P. F.; Karam, R.
2016-11-01
Atwood’s treatise, in which the Atwood machine appears, was published in 1784. About 70 years later, Poggendorff showed experimentally that the weight of an Atwood machine is reduced when it is brought to motion. In the present paper, a twofold connection between this experiment and the Atwood machine is established. Firstly, if the Poggendorff apparatus is taken as an ideal one, the equations of motion of the apparatus coincide with the equations of motion of the compound Atwood machine. Secondly, if the Poggendorff apparatus, which works as a lever, is rebalanced, the equations of this equilibrium provide us with the solution for a compound Atwood machine with the same bodies. This analogy is pedagogically useful because it illustrates a common strategy to transform a dynamic into a static situation, improving students’ comprehension of Newton’s laws and equilibrium.
Analog Electronic Filters Theory, Design and Synthesis
Dimopoulos, Hercules G
2012-01-01
Filters are essential subsystems in a huge variety of electronic systems. Filter applications are innumerable; they are used for noise reduction, demodulation, signal detection, multiplexing, sampling, sound and speech processing, transmission line equalization and image processing, to name just a few. In practice, no electronic system can exist without filters. They can be found in everything from power supplies to mobile phones and hard disk drives and from loudspeakers and MP3 players to home cinema systems and broadband Internet connections. This textbook introduces basic concepts and methods and the associated mathematical and computational tools employed in electronic filter theory, synthesis and design. This book can be used as an integral part of undergraduate courses on analog electronic filters. Includes numerous, solved examples, applied examples and exercises for each chapter. Includes detailed coverage of active and passive filters in an independent but correlated manner. Emphasizes real filter...
Large-scale digitizer system, analog converters
Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.
1976-10-01
Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved
Principles of digital and analog communications
Gibson, Jerry D
1993-01-01
This textbook for the first course in communications covers analog and digital systems and emphasizes digital communications. It covers data transmission, signal space, optimal receivers, and pulse code modulation, and includes readable treatments of coded modulation and continuous phase modulation. Advanced mathematics is kept to a minimum: Fourier series, Fourier transforms, linear systems, random variables, and stochastic processes are described thoroughly. It includes data compression of speech and images and full-chapter coverage of information theory, rate distortion theory and coded modulation. It relates digital communications theory to current practice and covers digital communications over bandwidth-constrained channels, including pulse shaping and equalization.
Integrated Circuits for Analog Signal Processing
2013-01-01
This book presents theory, design methods and novel applications for integrated circuits for analog signal processing. The discussion covers a wide variety of active devices, active elements and amplifiers, working in voltage mode, current mode and mixed mode. This includes voltage operational amplifiers, current operational amplifiers, operational transconductance amplifiers, operational transresistance amplifiers, current conveyors, current differencing transconductance amplifiers, etc. Design methods and challenges posed by nanometer technology are discussed and applications described, including signal amplification, filtering, data acquisition systems such as neural recording, sensor conditioning such as biomedical implants, actuator conditioning, noise generators, oscillators, mixers, etc. Presents analysis and synthesis methods to generate all circuit topologies from which the designer can select the best one for the desired application; Includes design guidelines for active devices/elements...
Four-gate transistor analog multiplier circuit
Mojarradi, Mohammad M. (Inventor); Blalock, Benjamin (Inventor); Cristoloveanu, Sorin (Inventor); Chen, Suheng (Inventor); Akarvardar, Kerem (Inventor)
2011-01-01
A differential output analog multiplier circuit utilizing four G.sup.4-FETs, each source connected to a current source. The four G.sup.4-FETs may be grouped into two pairs of two G.sup.4-FETs each, where one pair has its drains connected to a load, and the other pair has its drains connected to another load. The differential output voltage is taken at the two loads. In one embodiment, for each G.sup.4-FET, the first and second junction gates are each connected together, where a first input voltage is applied to the front gates of each pair, and a second input voltage is applied to the first junction gates of each pair. Other embodiments are described and claimed.
HIGH RESOLUTION ANALOG / DIGITAL POWER SUPPLY CONTROLLER
Medvedko, Evgeny A
2003-01-01
Corrector magnets for the SPEAR-3 synchrotron radiation source require precision, high-speed control for use with beam-based orbit feedback. A new Controller Analog/Digital Interface card (CANDI) has been developed for these purposes. The CANDI has a 24-bit DAC for current control and three 24-bit Δ-Σ ADCs to monitor current and voltages. The ADCs can be read and the DAC updated at the 4 kHz rate needed for feedback control. A precision 16-bit DAC provides on-board calibration. Programmable multiplexers control internal signal routing for calibration, testing, and measurement. Feedback can be closed internally on the current setpoint, externally on the supply current, or on beam position. Prototype and production tests are reported in this paper. Noise is better than 17 effective bits in a 10 mHz to 2 kHz bandwidth. Linearity and temperature stability are excellent.
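The "17 effective bits" figure quoted above is the standard way of expressing measured noise as an equivalent bit depth: the resolution whose ideal quantization noise (one LSB divided by √12) equals the measured RMS noise. A short sketch of that conversion, with an illustrative full-scale range and noise level (not the CANDI's actual measurements):

```python
import math

# Effective number of bits from measured RMS noise over a full-scale range.
# ENOB = log2(FS / (noise_rms * sqrt(12))), since an ideal B-bit quantizer
# has RMS quantization noise LSB/sqrt(12) with LSB = FS / 2**B.
def effective_bits(full_scale: float, rms_noise: float) -> float:
    return math.log2(full_scale / (rms_noise * math.sqrt(12)))

# Example (assumed values): 10 V full scale, 22 uV RMS noise in-band
print(round(effective_bits(10.0, 22e-6), 1))  # 17.0
```

This is why narrowing the measurement bandwidth (here 10 mHz to 2 kHz) matters: RMS noise integrates over bandwidth, so the quoted effective bits are only meaningful together with the bandwidth.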
Quantum walks based on an interferometric analogy
Hillery, Mark; Bergou, Janos; Feldman, Edgar
2003-01-01
There are presently two models for quantum walks on graphs. The ''coined'' walk uses discrete-time steps, and contains, besides the particle making the walk, a second quantum system, the coin, that determines the direction in which the particle will move. The continuous walk operates with continuous time. Here a third model for quantum walks is proposed, which is based on an analogy to optical interferometers. It is a discrete-time model, and the unitary operator that advances the walk one step depends only on the local structure of the graph on which the walk is taking place. This type of walk also allows us to introduce elements, such as phase shifters, that have no counterpart in classical random walks. Several examples are discussed
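For contrast with the interferometric model the abstract proposes, the "coined" discrete-time walk it mentions can be sketched in a few lines: one step applies a coin unitary (here the Hadamard) at every site, then shifts the walker left or right according to the coin state. The cycle size, start site, and step count are illustrative choices:

```python
import numpy as np

# Coined discrete-time quantum walk on an N-site cycle.
N = 8
state = np.zeros((N, 2), dtype=complex)  # amplitudes indexed (position, coin)
state[0, 0] = 1.0                        # walker at site 0, coin |0>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard coin

def step(psi):
    psi = psi @ H.T                      # apply the coin at every site
    out = np.empty_like(psi)
    out[:, 0] = np.roll(psi[:, 0], 1)    # coin |0>: move right
    out[:, 1] = np.roll(psi[:, 1], -1)   # coin |1>: move left
    return out

for _ in range(5):
    state = step(state)

print(round(np.sum(np.abs(state) ** 2), 6))  # total probability stays 1.0
```

Both the coin and the shift are unitary, so total probability is conserved; the interferometric model of the paper replaces the coin by local scattering determined by the graph structure.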
Automating analog design: Taming the shrew
Barlow, A.
1990-01-01
The pace of progress in the design of integrated circuits continues to amaze observers inside and outside of the industry. Three decades ago, a 50-transistor chip was a technological wonder. Fifteen years later, a 5000-transistor device would 'wow' the crowds. Today, 50,000-transistor chips will earn a 'not too bad' assessment, but it takes 500,000 to really leave an impression. In 1975 a typical ASIC device had 1000 transistors, took one year to first samples (and two years to production), and sold for about 5 cents per transistor. Today's 50,000-transistor gate array takes about 4 months from spec to silicon, works the first time, and sells for about 0.02 cents per transistor. Fifteen years ago, the single most laborious and error-prone step in IC design was the physical layout. Today, most ICs never see the hand of a layout designer: an automatic place-and-route tool converts the engineer's computer-captured schematic to a complete physical design using a gate array or a library of standard cells, also created by software rather than by designers. CAD has also been a generous benefactor to the digital design process. The architect of today's digital systems creates the design using an RTL or other high-level simulator. Then the designer pushes a button to invoke the logic synthesizer-optimizer tool. A fault analyzer checks the result for testability and suggests where scan-based cells will improve test coverage. One obstinate holdout amidst this parade of progress is the automation of analog design and its reduction to semi-custom techniques. This paper investigates the application of CAD techniques to analog design.
23rd workshop on Advances in Analog Circuit Design
Baschirotto, Andrea; Makinwa, Kofi
2015-01-01
This book is based on the 18 tutorials presented during the 23rd workshop on Advances in Analog Circuit Design. Expert designers present readers with information about a variety of topics at the frontier of analog circuit design, serving as a valuable reference to the state of the art for anyone involved in analog circuit research and development. • Includes coverage of high-performance analog-to-digital and digital-to-analog converters, integrated circuit design in scaled technologies, and time-domain signal processing; • Provides a state-of-the-art reference in analog circuit design, written by experts from industry and academia; • Presents material in a tutorial-based format.
46 CFR 28.260 - Electronic position fixing devices.
2010-10-01
... 46 Shipping 1 2010-10-01 2010-10-01 false Electronic position fixing devices. 28.260 Section 28... Trade § 28.260 Electronic position fixing devices. Each vessel 79 feet (24 meters) or more in length must be equipped with an electronic position fixing device capable of providing accurate fixes for the...
48 CFR 1852.216-78 - Firm fixed price.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Firm fixed price. 1852.216... 1852.216-78 Firm fixed price. As prescribed in 1816.202-70, insert the following clause: Firm Fixed Price (DEC 1988) The total firm fixed price of this contract is $[Insert the appropriate amount]. (End...
12 CFR 701.36 - FCU ownership of fixed assets.
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false FCU ownership of fixed assets. 701.36 Section... ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.36 FCU ownership of fixed assets. (a) Investment in Fixed Assets. (1) No Federal credit union with $1,000,000 or more in assets may invest in any fixed...
Fracture Characteristics Analysis of Double-layer Rock Plates with Both Ends Fixed Condition
S. R. Wang
2014-07-01
To investigate the fracture and instability characteristics of double-layer rock plates with both ends fixed, a three-dimensional computational model of double-layer rock plates under a concentrated load was built using the PFC3D technique (three-dimensional particle flow code), and the mechanical parameters of the numerical model were determined from physical model tests. The results showed that the instability process of the double-layer rock plates had four mechanical response phases: an elastic deformation stage, a brittle-fracture stage in which the upper thick plate arches, a two-rock-arch bearing stage, and a two-rock-arch failure stage. Moreover, as the rock plate particle radius was varied from small to large, the maximum vertical force of the double rock-arch peaked at a certain particle size. The maximum vertical force trended upward with increasing rock plate temperature, and for the same total thickness it increased with the thickness of the upper rock plate. When the boundary conditions of the double-layer rock plates changed from hinged support to fixed support, the maximum horizontal force decreased markedly, while the maximum vertical force fluctuated slightly and then stabilized as the cohesive strength of the double-layer rock plates increased.
Analog circuit design a tutorial guide to applications and solutions
Williams, Jim
2011-01-01
* Covers the fundamentals of linear/analog circuit and system design to guide engineers with their design challenges. * Based on the Application Notes of Linear Technology, the foremost designer of high-performance analog products, readers will gain practical insights into design techniques and practice. * Broad range of topics, including power management tutorials, switching regulator design, linear regulator design, data conversion, signal conditioning, and high frequency/RF design. * Contributors include the leading lights in analog design, Robert Dobkin and Jim Williams.