Beltran, H.; Perez, E.; Chen, Zhe;
2009-01-01
This paper describes a Fixed Maximum Power Point analog control used in a step-down Pulse Width Modulated power converter. The DC/DC converter drives a DC motor used in small water pumping installations, without any electric storage device. The power supply is provided by PV panels working around...
Fixed pitch wind turbine control to generate the maximum power
Martinez Rodrigo, Fernando
This Doctoral Thesis first reviews the state of the art of wind power, wind turbines and alternating-current generators. One part covers the state of the art of commercial small wind turbines: their applications, the technology used, the component topology according to the application type, the research lines in this field, the policy aspects that influence whether small turbines are adopted, and finally a detailed analysis of four commercial small turbines. One chapter contains the models and equations of the alternating-current generators used in the Doctoral Thesis, namely the induction generator and the permanent-magnet generator. Another chapter explains methods of controlling the speed of alternating-current generators. Chapter 7 addresses speed estimators for induction machines. These estimators make it possible to eliminate the generator speed sensor. In the Thesis, several of them are simulated to test their behaviour, and an original analysis is presented to select the most suitable estimators for an application such as small wind turbines. Chapter 8 introduces the control systems developed for wind turbines, which extract the maximum power at every wind speed. All of them are based on the algorithm proposed in the Thesis. Control systems are proposed for squirrel-cage induction generators and permanent-magnet generators, using voltage-source and current-source schemes. Some use generator speed sensors and others use speed estimators; none of the schemes requires a wind speed sensor.
Maximum power point tracking for variable-speed fixed-pitch small wind turbines
Putrus, Ghanim; Narayana, Mahinsasa; Jovanovic, Milutin; Leung, Pak Sing
2009-01-01
Variable-speed, fixed-pitch wind turbines are required to optimize power output performance without aerodynamic controls. A wind turbine generator system is operated such that the optimum points of the wind rotor curve and the electrical generator curve coincide. In order to obtain maximum power output from a wind turbine generator system, it is necessary to drive the wind turbine at the optimal rotor speed for a particular wind speed. In fixed-pitch, variable-speed wind turbines, wind-rotor performan...
Fixed Parameter Evolutionary Algorithms and Maximum Leaf Spanning Trees: A Matter of Mutations
Kratsch, Stefan; Lehre, Per Kristian; Neumann, Frank; Oliveto, Pietro Simone
Evolutionary algorithms have been shown to be very successful for a wide range of NP-hard combinatorial optimization problems. We investigate the NP-hard problem of computing a spanning tree that has a maximal number of leaves by evolutionary algorithms in the context of fixed-parameter tractability (FPT), where the maximum number of leaves is the parameter under consideration. Our results show that simple evolutionary algorithms working with an edge-set encoding are confronted with local optima whose size of the inferior neighborhood grows with the value of an optimal solution. Investigating...
Fixed-head star tracker magnitude calibration on the solar maximum mission
Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.
1990-01-01
The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.
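The sensitivity index described above is a simple ratio of observed to predicted intensity; a minimal sketch, with hypothetical values and a hypothetical function name, might look like:

```python
def sensitivity_index(observed_volts, predicted_volts):
    """Ratio of observed to predicted star intensity (in volts).
    A value near 1.0 indicates an accurate magnitude calibration."""
    return observed_volts / predicted_volts

# Hypothetical reading: the tracker reports 2.85 V where the
# calibration predicts 3.00 V for this star.
print(round(sensitivity_index(2.85, 3.00), 3))  # → 0.95
```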
Luraschi, Julien; Schimmel, Martin; Bernard, Jean-Pierre; Gallucci, German O; Belser, Urs Christophe; Muller, Frauke
2012-01-01
The aim of this study was to compare tactile sensitivity and maximum voluntary bite force (MBF) of edentulous patients with implant-supported fixed dental prostheses (IFDP/IFDPs) to those wearing complete dentures (CG-CC) and fully dentate subjects (CG-DD).
ZHANG Yang-zhu; HUANG Shun-hong; WAN Da-juan; HUANG Yun-xiang; ZHOU Wei-jun; ZOU Ying-bin
2007-01-01
In order to understand the status of fixed ammonium in the major types of tillage soils of Hunan Province, China, the fixed ammonium content, the maximum ammonium-fixation capacity, and their influencing factors were studied by field sampling, laboratory incubation and determination. The main results are summarized as follows: (1) The fixed ammonium content of the tested soils varies greatly with soil use pattern and the nature of the parent material. For the paddy soils it ranges from 135.4 ± 57.4 to 412.8 ± 32.4 mg kg-1, with an average of 304.7 ± 96.7 mg kg-1, while for the upland soils it ranges from 59.4 to 435.7 mg kg-1, with an average of 230.1 ± 89.2 mg kg-1. Soils developed from limnic material and slate have higher fixed ammonium content than soils developed from granite. The percentage of fixed ammonium relative to total N is consistently higher in the upland soils than in the paddy soils: it ranges from 6.1 ± 3.6% to 16.6 ± 4.6%, with an average of 14.0 ± 5.1%, for the paddy soils, and from 5.8 ± 2.0% to 40.1 ± 17.8%, with an average of 23.5 ± 14.2%, for the upland soils. (2) The maximum ammonium-fixation capacity follows the same trend as the fixed ammonium content in the tested soils. For all the tested soils, recently fixed ammonium is always below 20% of the maximum fixation capacity, which may be due to the high fertility of the soils and the high saturation of their ammonium-fixing sites. (3) The clay content and clay composition of the tested soils are two important factors influencing their fixed ammonium content and maximum ammonium-fixation capacity. The results showed that hydrous mica is the main 2:1-type clay mineral in the <0.02 mm clay of the paddy soils, and its content in the 0.02-0.002 mm fraction is much higher than that in the <0.002 mm fraction. Statistical analysis showed that both the fixed ammonium content and the maximum ammonium-fixation capacity of the paddy soils were positively correlated with
Veberic, Darko
2011-01-01
We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of the acquisition parameters describing the two measurement modes. It extends and improves the standard combining ("gluing") methods and does not rely on any ad hoc definition of the overlap region or on any background-subtraction method.
Thompson, R. H.; Gambardella, P. J.
1980-01-01
The Solar Maximum Mission (SMM) spacecraft provides an excellent opportunity for evaluating attitude determination accuracies achievable with tracking instruments such as fixed-head star trackers (FHSTs). As a part of its payload, SMM carries a highly accurate fine-pointing Sun sensor (FPSS). The FPSS provides an independent check of the pitch and yaw parameters computed from observations of stars in the FHST field of view. A method to determine the alignment of the FHSTs relative to the FPSS using spacecraft data is applied. Two methods that were used to determine distortions in the 8-degree by 8-degree field of view of the FHSTs using spacecraft data are also presented. The attitude determination accuracy performance of the in-flight-calibrated FHSTs is evaluated.
Kodner Robin B
2010-10-01
Full Text Available. Background: Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating the expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent the number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
Costa VP
2012-05-01
Vital Paulino Costa1, Hamilton Moreira2, Mauricio Della Paolera3, Maria Rosa Bet de Moraes Silva4; 1Universidade Estadual de Campinas – UNICAMP, São Paulo, 2Universidade Federal do Paraná, Curitiba, 3Santa Casa de Misericórdia de São Paulo, São Paulo, 4Faculdade de Medicina de Botucatu, UNESP, Brazil. Purpose: To assess the safety and efficacy of transitioning patients whose intraocular pressure (IOP) had been insufficiently controlled on prostaglandin analog (PGA) monotherapy to treatment with travoprost 0.004%/timolol 0.5% fixed combination with benzalkonium chloride (TTFC). Methods: This prospective, multicenter, open-label, historical-controlled, single-arm study transitioned patients who had primary open-angle glaucoma, pigment dispersion glaucoma, or ocular hypertension and who required further IOP reduction from PGA monotherapy to once-daily treatment with TTFC for 12 weeks. IOP and safety (adverse events, corrected distance visual acuity, and slit-lamp biomicroscopy) were assessed at baseline, week 4, and week 12. A solicited ocular symptom survey was administered at baseline and at week 12. Patients and investigators reported their medication preference at week 12. Results: Of 65 patients enrolled, 43 had received prior travoprost therapy and 22 had received prior nontravoprost therapy (n = 18, bimatoprost; n = 4, latanoprost). In the total population, mean IOP was significantly reduced from baseline (P = 0.000009), showing a 16.8% reduction after 12 weeks of TTFC therapy. In the study subgroups, mean IOP was significantly reduced from baseline to week 12 (P = 0.0001) in the prior travoprost cohort (19.0% reduction) and in the prior nontravoprost cohort (13.1% reduction). Seven mild, ocular, treatment-related adverse events were reported. Of the ten ocular symptom questions, eight had numerically lower percentages with TTFC compared with prior PGA monotherapy and two had numerically higher percentages with TTFC (dry eye symptoms and ocular
Anto Joseph
2013-10-01
In this research article we propose a new analog MPPT regulator with a high-efficiency DC-DC converter for the photovoltaic system and a high-efficiency Z-source converter for the variable-speed wind energy system. The outputs of both renewable sources are connected in parallel with a diesel generator, and the whole system forms an efficient hybrid energy system that supplies electrical power to the external grid. The MPPT regulator provides the control signal for the DC-DC converter and tracks the maximum power from the solar panel. A logic-truth-table-based perturbation and observation (P&O) algorithm is used for maximum power point tracking (MPPT), and a hybrid-bridge resonant DC-DC converter keeps the output voltage constant and equal to the DC bus voltage by switching between the appropriate modes. A parallel configuration is selected for energy transfer from the solar panel, wind turbine and diesel system to the load. The design includes a bidirectional inverter along with a DC-DC converter capable of interfacing a battery bank with the AC bus. The goals of the project included the implementation of two modes of operation: a battery discharge mode, where current is fed into the AC bus, and a battery charging mode, in which current is drawn from the grid and put into the batteries. A secondary goal of the design was to ensure that the current injected into the AC bus was at or near unity power factor by utilizing a hysteresis current control method.
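The perturbation and observation (P&O) logic mentioned in this abstract follows a simple truth table: if the last perturbation increased the extracted power, keep perturbing the operating point in the same direction; otherwise reverse. A minimal sketch follows; the toy panel model, step size, iteration count and all names are illustrative assumptions, not the authors' implementation:

```python
def perturb_and_observe(measure_current, v_start=20.0, step=0.5, iterations=100):
    """Classic P&O maximum power point tracking.
    measure_current(v) returns the panel current at operating voltage v."""
    v = v_start
    p_prev = v * measure_current(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = v * measure_current(v)
        if p < p_prev:
            # Power dropped after the last perturbation: reverse direction.
            direction = -direction
        p_prev = p
    return v

# Toy panel: current falls off linearly with voltage, so power
# P(v) = 5v(1 - v/36) peaks at v = 18 V.
panel = lambda v: max(0.0, 5.0 * (1.0 - v / 36.0))
v_mpp = perturb_and_observe(panel)  # settles near 18 V, oscillating by ±step
```

In a real tracker the voltage perturbation is applied through the DC-DC converter's duty cycle rather than set directly, and the step size trades tracking speed against steady-state oscillation around the maximum power point.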
Anto Joseph; Godwin Jam
2013-01-01
In this research article we propose a new analog MPPT regulator with a high-efficiency DC-DC converter for the photovoltaic system and a high-efficiency Z-source converter for the variable-speed wind energy system. The outputs of both renewable sources are connected in parallel with a diesel generator, and the whole system forms an efficient hybrid energy system that supplies electrical power to the external grid. The MPPT regulator provides the control signal for the DC-DC converter and tra...
Vedtofte, Louise; Knop, Filip K; Vilsbøll, Tina
2015-01-01
Insulin therapy in the management of Type 2 diabetes is often postponed and/or not adequately intensified to maintain glycemic control because of the risk of weight gain and hypoglycemia. A fixed combination of the long-acting insulin degludec and liraglutide has recently been accepted by the EMA...
Reynolds analogy for the Rayleigh problem at various flow modes.
Abramov, A A; Butkovskii, A V
2016-07-01
The Reynolds analogy and the extended Reynolds analogy for the Rayleigh problem are considered. For a viscous incompressible fluid we derive the Reynolds analogy as a function of the Prandtl number and the Eckert number. We show that for any positive Eckert number, the Reynolds analogy as a function of the Prandtl number has a maximum. For a monatomic gas in the transitional flow regime, using the direct simulation Monte Carlo method, we investigate the extended Reynolds analogy, i.e., the relation between the shear stress and the energy flux transferred to the boundary surface, at different velocities and temperatures. We find that the extended Reynolds analogy for a rarefied monatomic gas flow with the temperature of the undisturbed gas equal to the surface temperature depends weakly on time and is close to 0.5. We show that at any fixed dimensionless time the extended Reynolds analogy depends on the plate velocity and temperature and undisturbed gas temperature mainly via the Eckert number. For Eckert numbers of the order of unity or less we generalize an extended Reynolds analogy. The generalized Reynolds analogy depends mainly only on dimensionless time for all considered Eckert numbers of the order of unity or less. PMID:27575220
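For reference, the classical Reynolds analogy (exact for Pr = 1) and its widely used Chilton-Colburn extension relate the Stanton number to the skin-friction coefficient; the Prandtl- and Eckert-number dependence studied in this abstract generalizes relations of this form (the expressions below are the standard textbook statements, not the paper's derived results):

```latex
St = \frac{C_f}{2} \quad (\mathrm{Pr} = 1),
\qquad
St\,\mathrm{Pr}^{2/3} = \frac{C_f}{2}
\quad (\text{Chilton--Colburn, } 0.6 \lesssim \mathrm{Pr} \lesssim 60)
```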
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
Poelstra, Klaas; Prakash, Jai; Beljaars, Eleonora; Bansal, Ruchi
2015-01-01
The invention relates to the field of medicine. Among others, it relates to biologically active analogs of interferons (IFNs) which show less unwanted side-effects and to the therapeutic uses thereof. Provided is an IFN analog, wherein the moiety mediating binding to its natural receptor is at least
Poelstra, Klaas; Prakash, Jai; Beljaars, Leonie; Bansal, Ruchi
2010-01-01
The invention relates to the field of medicine. Among others, it relates to biologically active analogs of interferons (IFNs) which show less unwanted side-effects and to the therapeutic uses thereof. Provided is an IFN analog, wherein the moiety mediating binding to its natural receptor is at least
Hofmann, R.B. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)
1995-09-01
Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.
2008-01-01
The ensemble Fix celebrates its 40th anniversary on 13 December at Saku Suurhall in Tallinn. The concert's special guest is the ensemble Apelsin, with Jassi Zahharov and the HaleBopp Singers also performing. The evening is hosted by Tarmo Leinatamm.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
A portable storage maximum thermometer
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system.
The invention relates to devices for modelling the space-dependent kinetics of a nuclear reactor. It can be advantageously used in studying the dynamics of the neutron field in the core, in determining the effect of the control rods on the power distribution in the core, and for training purposes. The proposed analog model of a nuclear reactor comprises operational amplifiers and a grid of resistors simulating neutron diffusion. Connected to the grid nodes are supply resistors modelling absorption and multiplication of neutrons. In the proposed model, all resistors through which power is supplied to the grid nodes are interconnected by their other leads and coupled to the output of an amplifier unit common to all nodes. This amplifier unit models the transfer function of a "point" reactor. Connected to the input of this unit, which includes two to four amplifiers, are resistors for the addition of signals with a grid node. Voltage sources simulating reactivity are coupled to the grid nodes via additional resistors.
Chen, Wai-Kai
2009-01-01
Featuring hundreds of illustrations and references, this book provides information on analog and VLSI circuits. It focuses on analog integrated circuits, covering monolithic device models, analog circuit cells, high-performance analog circuits, RF communication circuits, and PLL circuits.
Science Teachers' Analogical Reasoning
Mozzer, Nilmara Braga; Justi, Rosária
2013-08-01
Analogies can play a relevant role in students' learning. However, for the effective use of analogies, teachers should not only have a well-prepared repertoire of validated analogies, which could serve as bridges between the students' prior knowledge and the scientific knowledge they desire them to understand, but also know how to introduce analogies in their lessons. Both aspects have been discussed in the literature in the last few decades. However, almost nothing is known about how teachers draw their own analogies for instructional purposes or, in other words, about how they reason analogically when planning and conducting teaching. This is the focus of this paper. Six secondary teachers were individually interviewed; the aim was to characterize how they perform each of the analogical reasoning subprocesses, as well as to identify their views on analogies and their use in science teaching. The results were analyzed by considering elements of both theories about analogical reasoning: the structural mapping proposed by Gentner and the analogical mechanism described by Vosniadou. A comprehensive discussion of our results makes it evident that teachers' content knowledge on scientific topics and on analogies as well as their pedagogical content knowledge on the use of analogies influence all their analogical reasoning subprocesses. Our results also point to the need for improving teachers' knowledge about analogies and their ability to perform analogical reasoning.
Intuitive analog circuit design
Thompson, Marc
2013-01-01
Intuitive Analog Circuit Design outlines ways of thinking about analog circuits and systems that let you develop a feel for what a good, working analog circuit design should be. This book reflects author Marc Thompson's 30 years of experience designing analog and power electronics circuits and teaching graduate-level analog circuit design, and is the ideal reference for anyone who needs a straightforward introduction to the subject. In this book, Dr. Thompson describes intuitive and "back-of-the-envelope" techniques for designing and analyzing analog circuits, including transistor amplifi
Fixed point resolution in extended WZW models
A formula is derived for the fixed point resolution matrices of simple current extended WZW models and coset conformal field theories. Unlike the analogous matrices for unextended WZW models, these matrices are in general not symmetric, and they may have field-dependent twists. They thus provide non-trivial realizations of the general conditions presented in earlier work with Fuchs and Schweigert
The price of fixed income market volatility
Mele, Antonio
2015-01-01
Fixed income volatility and equity volatility evolve heterogeneously over time, co-moving disproportionately during periods of global imbalances and each reacting to events of different nature. While the methodology for options-based "model-free" pricing of equity volatility has been known for some time, little is known about analogous methodologies for pricing various fixed income volatilities. This book fills this gap and provides a unified evaluation framework of fixed income volatility while dealing with disparate markets such as interest-rate swaps, government bonds, time-deposits and credit. It develops model-free, forward-looking indexes of fixed-income volatility that match different quoting conventions across various markets, and uncovers subtle yet important pitfalls arising from naive superimpositions of the standard equity volatility methodology when pricing various fixed income volatilities. The ultimate goal of the authors' efforts is to make interest rate volatility standardization a valuable...
Analogies between Scaling in Turbulence, Field Theory and Critical Phenomena
Eyink, Gregory; Goldenfeld, Nigel
1994-01-01
We discuss two distinct analogies between turbulence and field theory. In one analogue, the field theory has an infrared attractive renormalization-group fixed point and corresponds to critical phenomena. In the other analogue, the field theory has an ultraviolet attractive fixed point, as in quantum chromodynamics.
Hyndman, D E
2013-01-01
Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and the use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be solved on computers. Flow diagrams, methods of ampl
Hamada, Yuta; Kawana, Kiyoharu
2014-01-01
We give evidence for the Big Fix. The theory of wormholes and the multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how this can be confirmed by experimental data, and we show that it is indeed true for the Higgs vacuum expectation value $v_{h}$. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self-coupling are fixed when we vary $v_{h}$. It turns out that the existence of atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; however, in our case it is required by the fundamental law.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
A Complete Renormalization Group Trajectory Between Two Fixed Points
Abdesselam, Abdelmalek
2006-01-01
We give a rigorous nonperturbative construction of a massless discrete trajectory for Wilson's exact renormalization group. The model is a three dimensional Euclidean field theory with a modified free propagator. The trajectory realizes the mean field to critical crossover from the ultraviolet Gaussian fixed point to an analog recently constructed by Brydges, Mitter and Scoppola of the Wilson-Fisher nontrivial fixed point.
The use of neural networks in maximum entropy image restoration
Bedini, Luigi; Tonazzini, Anna
1988-01-01
The computational properties of analog electrical circuits, recently proposed as models for highly interconnected networks of non-linear analog neurons, have been found attractive in solving optimization problems. The computational power of these circuits is based on the high connectivity typical of neural systems and on the convergence speed of analog electric circuits in reaching stable states. In this paper we consider the problem of image restoration using the Maximum Entropy (ME) method. T...
Laura eSciacca
2012-02-01
Today, insulin analogs are used in millions of diabetic patients. Insulin analogs have been developed to achieve more physiological insulin replacement in terms of the time course of the effect. Modifications in the amino acid sequence of the insulin molecule change the pharmacokinetics and pharmacodynamics of the analogs with respect to human insulin. However, these changes can also modify the molecular and biological effects of the analogs. The rapid-acting insulin analogs, lispro, aspart and glulisine, have a rapid onset and shorter duration of action. The long-acting insulin analogs glargine and detemir have a protracted duration of action and a relatively smooth serum concentration profile. Insulin and its analogs may function as growth factors and therefore have a theoretical potential to promote tumor proliferation. A major question is whether analogs have an increased mitogenic activity with respect to insulin. These ligands can promote cell proliferation through many mechanisms, such as the prolonged stimulation of the insulin receptor, stimulation of the IGF-1 receptor (IGF-1R), and prevalent activation of the ERK rather than the AKT intracellular post-receptor pathways. Studies on in vitro models indicate that short-acting analogs elicit molecular and biological effects that are similar to those of insulin. In contrast, long-acting analogs behave differently. Although not all data are homogeneous, both glargine and detemir have been found to have a decreased binding to IR but an increased binding to IGF-1R, a prevalent activation of the ERK pathway, and an increased mitogenic effect with respect to insulin. Recent retrospective epidemiological clinical studies have suggested that treatment with long-acting analogs (specifically glargine) may increase the relative risk for cancer. Results are controversial and methodologically weak. Therefore prospective clinical studies are needed to evaluate the possible tumor growth-promoting effects of these insulin
Holyoak, Keith J.; Thagard, P.
1997-01-01
We examine the use of analogy in human thinking from the perspective of a multiconstraint theory, which postulates three basic types of constraints: similarity, structure and purpose. The operation of these constraints is apparent in both laboratory experiments on analogy and in naturalistic settings, including politics, psychotherapy, and scientific research. We sketch how the multiconstraint theory can be implemented in detailed computational simulations of the analogical human mind.
Dobkin, Bob
2012-01-01
Analog circuit and system design today is more essential than ever before. With the growth of digital systems, wireless communications, and complex industrial and automotive systems, designers are being challenged to develop sophisticated analog solutions. This comprehensive source book of circuit design solutions aids engineers with elegant and practical design techniques that focus on common analog challenges. The book's in-depth application examples provide insight into circuit design and application solutions that you can apply in today's demanding designs.
Sarpeshkar, R
2014-03-28
We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations. PMID:24567476
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.;
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an...
Baser, Mustafa
2007-01-01
Students have difficulties in physics because of the abstract nature of concepts and principles. One of the effective methods for overcoming students' difficulties is the use of analogies to visualize abstract concepts to promote conceptual understanding. According to Iding, analogies are consistent with the tenets of constructivist learning…
Wessendorf, Kurt O.; Kemper, Dale A.
2003-06-03
A very low power analog pulse processing system implemented as an ASIC useful for processing signals from radiation detectors, among other things. The system incorporates the functions of a charge sensitive amplifier, a shaping amplifier, a peak sample and hold circuit, and, optionally, an analog to digital converter and associated drivers.
Analog computation with dynamical systems
Siegelmann, Hava T.; Fishman, Shmuel
1998-09-01
Physical systems exhibit various levels of complexity: their long-term dynamics may converge to fixed points or exhibit complex chaotic behavior. This paper presents a theory that makes it possible to interpret natural processes as special-purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous-time systems is required. In analogy with the classical discrete theory we develop fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, Co-RP_d, NP_d and EXP_d, corresponding to their standard counterparts, according to the complexity of their long-term behavior. The complexity of chaotic attractors relative to regular ones leads to the conjecture P_d ≠ NP_d. Continuous-time flows have been proven useful in solving various practical problems. Our theory provides the tools for an algorithmic analysis of such flows. As an example we analyze the continuous Hopfield network.
Malav, O P; Talukder, S; Gokulakrishnan, P; Chand, S
2015-01-01
Health-conscious consumers are in search of nutritious and convenient food items suited to their busy lives. Vegetarianism drives the search for foods that resemble meat in nutritional and sensory characters but are not of animal origin, instead containing vegetables or their modified forms; this is the point where the meat analog evolved and took shape. Consumers derive full satisfaction from meat analogs because of the typical meaty texture, appearance and flavor imparted during their skilled production. Protein supplementation of the vegetarian diet through meat-like foods can be achieved by incorporating protein-rich, food-grade vegetable materials into the meat analog and by adopting technological processes that yield an acceptable meat-like texture, appearance, flavor, etc. The vegetables, cereals, and pulses readily available in India offer great advantages and prospects for use in food products and can improve the nutritional and functional characters of these items. The variety of forms and functional characters of food items available worldwide attracts meat technologists and food processors to bring innovation to the meat analog and its presentation and marketability, so that consumer acceptance of meat analogs can grow. PMID:24915320
Troubleshooting analog circuits
Pease, Robert A
1991-01-01
Troubleshooting Analog Circuits is a guidebook for solving product- or process-related problems in analog circuits. The book also provides advice on selecting equipment, preventing problems, and general tips. The coverage of the book includes the philosophy of troubleshooting; the modes of failure of various components; and preventive measures. The text also deals with the active components of analog circuits, including diodes and rectifiers, optically coupled devices, solar cells, and batteries. The book will be of great use to both students and practitioners of electronics engineering.
Challenges in Analogical Reasoning
Lin, Shih-Yin
2016-01-01
Learning physics requires understanding the applicability of fundamental principles in a variety of contexts that share deep features. One way to help students learn physics is via analogical reasoning. Students can be taught to make an analogy between situations that are more familiar or easier to understand and another situation where the same physics principle is involved but that is more difficult to handle. Here, we examine introductory physics students' ability to use analogies in solving problems involving Newton's second law. Students enrolled in an algebra-based introductory physics course were given a solved problem involving tension in a rope and were then asked to solve another problem for which the physics is very similar but involved a frictional force. They were asked to point out the similarities between the two problems and then use the analogy to solve the friction problem.
TV Analog Station Transmitters
Department of Homeland Security — This file is an extract from the Consolidated Database System (CDBS) licensed by the Media Bureau. It consists of Analog Television Stations (see Rule Part 47 CFR...
Can mushrooms fix atmospheric nitrogen?
H S Jayasinghearachchi; Gamini Seneviratne
2004-09-01
It is generally reported that fungi like Pleurotus spp. can fix nitrogen (N2). The way they do it is still not clear. The present study hypothesized that only associations of fungi and diazotrophs can fix N2. This was tested in vitro. Pleurotus ostreatus was inoculated with a bradyrhizobial strain nodulating soybean, and P. ostreatus with no inoculation was maintained as a control. At maximum mycelial colonization by the bradyrhizobial strain and biofilm formation, the cultures were subjected to an acetylene reduction assay (ARA). Another set of the cultures was evaluated for growth and nitrogen accumulation. Nitrogenase activity was present in the biofilm, but not when the fungus or the bradyrhizobial strain was alone. A significant reduction in mycelial dry weight and a significant increase in nitrogen concentration were observed in the inoculated cultures compared to the controls. The mycelial weight reduction could be attributed to C transfer from the fungus to the bradyrhizobial strain, because of the high C cost of biological N2 fixation. This needs further investigation using 14C isotopic tracers. It is clear from the present study that mushrooms alone cannot fix atmospheric N2. But when they are in association with diazotrophs, nitrogenase activity is detected because of the diazotrophic N2 fixation. It is not the fungus that fixes N2, as reported earlier. Effective N2-fixing systems, such as the present one, may be used to increase the protein content of mushrooms. Our study has implications for future identification of as yet unidentified N2-fixing systems occurring in the environment.
Hickman, Ian
2013-01-01
Analog Circuits Cookbook presents articles about advanced circuit techniques, components and concepts, useful IC for analog signal processing in the audio range, direct digital synthesis, and ingenious video op-amp. The book also includes articles about amplitude measurements on RF signals, linear optical imager, power supplies and devices, and RF circuits and techniques. Professionals and students of electrical engineering will find the book informative and useful.
Synthesis of Paclitaxel Analogs
Xu, Zhibing
2010-01-01
Paclitaxel is one of the most successful anti-cancer drugs, particularly in the treatment of breast cancer and ovarian cancer. For the investigation of the interaction between paclitaxel and the MD-2 protein, and the development of new antagonists for lipopolysaccharide, several C10 A-nor-paclitaxel analogs have been synthesized and their biological activities have been evaluated. In order to reduce the myelosuppression effect of paclitaxel, several C3′ and C4 paclitaxel analogs have been synth...
Ponto, Kate
2009-01-01
The Lefschetz fixed point theorem and its converse have many generalizations. One of these generalizations is to endomorphisms of a space relative to a fixed subspace. In this paper we define relative Lefschetz numbers and Reidemeister traces using traces in bicategories with shadows. We use the functoriality of this trace to identify different forms of these invariants and to prove a relative Lefschetz fixed point theorem and its converse.
Grall, Hervé
2010-01-01
We propose a method to characterize the fixed points described in Tarski's theorem for complete lattices. The method is deductive: the least and greatest fixed points are "proved" in some inference system defined from deduction rules. We also apply the method to two other fixed point theorems, a generalization of Tarski's theorem to chain-complete posets and Bourbaki-Witt's theorem. Finally, we compare the method with the traditional iterative method resorting to ordinals and the original imp...
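The iterative method the abstract contrasts with its deductive approach can be made concrete. As a hedged sketch (the lattice, the monotone function, and the reachability example below are our own illustration, not from the paper), the least fixed point of a monotone function on a finite powerset lattice is reached by iterating from the bottom element:

```python
# Kleene-style iterative computation of the least fixed point of a
# monotone function on a finite powerset lattice: iterate from the
# bottom element (the empty set) until the value stabilizes.

def least_fixed_point(f, bottom=frozenset()):
    """Iterate f from bottom until f(x) == x, then return x."""
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:
            return current
        current = nxt

# Example: reachability from vertex 1 as a least fixed point,
# for the edge set 1 -> 2, 2 -> 3.
edges = {1: {2}, 2: {3}, 3: set()}

def step(s):
    out = {1}
    for v in s:
        out |= edges[v]
    return frozenset(out)

lfp = least_fixed_point(step)  # frozenset({1, 2, 3})
```

Monotonicity of `step` is what guarantees, by Tarski's theorem, that this iteration terminates at the least fixed point on a finite lattice.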
Electrical Circuits and Water Analogies
Smith, Frederick A.; Wilson, Jerry D.
1974-01-01
Briefly describes water analogies for electrical circuits and presents plans for the construction of apparatus to demonstrate these analogies. Demonstrations include series circuits, parallel circuits, and capacitors. (GS)
This is a reusable system for fixing a nuclear reactor fuel rod to a support. An interlock cap is fixed to the fuel rod and an interlock strip is fixed to the support. The interlock cap has two opposed fingers, shaped so that a base with a body part is formed. The interlock strip has an extension, shaped so that it is rigidly fixed to the body part of the base. The fingers of the interlock cap are elastic in bending. To fix it, the interlock cap is pushed longitudinally onto the interlock strip, which causes the extension to bend the fingers open in order to engage with the body part of the base. To remove it, the procedure is reversed. (orig.)
The charge for a service is the supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Halaby, Mohamed El; Abdalla, Areeg
2016-01-01
In this paper, we extend the Maximum Satisfiability (MaxSAT) problem to Łukasiewicz logic. The MaxSAT problem for a set of formulae Φ is the problem of finding an assignment to the variables in Φ that satisfies the maximum number of formulae. Three possible solutions (encodings) are proposed for the new problem: (1) Disjunctive Linear Relations (DLRs), (2) Mixed Integer Linear Programming (MILP) and (3) Weighted Constraint Satisfaction Problem (WCSP). Like its Boolean counterpart,...
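For contrast with the Łukasiewicz extension, the classical Boolean MaxSAT problem described above can be sketched by exhaustive search. The clause encoding and solver below are our own illustration (DIMACS-style literals), not one of the paper's three encodings:

```python
# Brute-force solver for classical (Boolean) MaxSAT. Clauses are lists
# of nonzero ints in DIMACS style: 3 means x3, -3 means NOT x3.
from itertools import product

def max_sat(clauses, n_vars):
    """Return (best_count, best_assignment) over all 2^n assignments."""
    best_count, best_assign = -1, None
    for bits in product([False, True], repeat=n_vars):
        count = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        if count > best_count:
            best_count, best_assign = count, bits
    return best_count, best_assign

# x1, (NOT x1), (x1 OR x2): at most 2 of these 3 clauses can hold.
clauses = [[1], [-1], [1, 2]]
best, assign = max_sat(clauses, 2)  # best == 2
```

Exhaustive search is exponential, which is precisely why the structured encodings (DLRs, MILP, WCSP) matter in practice.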
Measures of Noncircularity and Fixed Points of Contractive Multifunctions
Marrero Isabel
2010-01-01
In analogy to the Eisenfeld-Lakshmikantham measure of nonconvexity and the Hausdorff measure of noncompactness, we introduce two mutually equivalent measures of noncircularity for Banach spaces satisfying a Cantor type property, and apply them to establish a fixed point theorem of Darbo type for multifunctions. Namely, we prove that every multifunction with closed values, defined on a closed set and contractive with respect to any one of these measures, has the origin as a fixed point.
Digital and analog communication systems
Shanmugam, K. S.
1979-01-01
The book presents an introductory treatment of digital and analog communication systems with emphasis on digital systems. Attention is given to the following topics: systems and signal analysis, random signal theory, information and channel capacity, baseband data transmission, analog signal transmission, noise in analog communication systems, digital carrier modulation schemes, error control coding, and the digital transmission of analog signals.
Analogical Reasoning in Geometry Education
Magdas, Ioana
2015-01-01
The analogical reasoning isn't used only in mathematics but also in everyday life. In this article we approach the analogical reasoning in Geometry Education. The novelty of this article is a classification of geometrical analogies by reasoning type and their exemplification. Our classification includes: analogies for understanding and setting a…
Electrical analogous in viscoelasticity
Ala, Guido; Di Paola, Mario; Francomano, Elisa; Li, Yan; Pinnola, Francesco P.
2014-07-01
In this paper, electrical analog models of fractional hereditary materials are introduced. Based on recent works by the authors, mechanical models of material viscoelastic behavior are first approached by using fractional mathematical operators. Viscoelastic models have elastic and viscous components which are obtained by combining springs and dashpots. Various arrangements of these elements can be used, and all of these viscoelastic models can be equivalently modeled as electrical circuits, where the spring and dashpot are analogous to the capacitance and resistance, respectively. The proposed models are validated by using modal analysis. Moreover, a comparison with numerical experiments based on the finite difference time domain method shows that, for long time simulations, the correct time behavior can be obtained only with modal analysis. The use of electrical analogs in viscoelasticity can better reveal the real behavior of fractional hereditary materials.
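The spring-capacitor and dashpot-resistor correspondence underlying this abstract can be checked numerically. In the standard analogy (stress ↔ voltage, strain rate ↔ current) a Maxwell element, i.e. a spring and dashpot in series mechanically so that strain rates add, maps to a resistor and capacitor in parallel, where currents add. The parameter values below are illustrative, not from the paper:

```python
# Spring-dashpot <-> capacitor-resistor analogy: a Maxwell element
# (spring k in series with dashpot eta) versus a parallel RC circuit
# with C = 1/k and R = eta, compared over a range of frequencies.
import numpy as np

k, eta = 2.0, 5.0          # spring stiffness, dashpot viscosity
C, R = 1.0 / k, eta        # analogous capacitance and resistance

omega = np.logspace(-2, 2, 50)
s = 1j * omega

# Mechanical: strain rate per unit stress of the Maxwell element.
Y_mech = s / k + 1.0 / eta

# Electrical: admittance (current per unit voltage) of the parallel RC.
Y_elec = s * C + 1.0 / R

mismatch = np.max(np.abs(Y_mech - Y_elec))  # identical by construction
```

The two admittances agree term by term, which is what makes circuit simulation tools usable for viscoelastic models.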
A Convnet for Non-maximum Suppression
Hosang, J.; Benenson, R.; Schiele, B.
2015-01-01
Non-maximum suppression (NMS) is used in virtually all state-of-the-art object detection pipelines. While essential object detection ingredients such as features, classifiers, and proposal methods have been extensively researched, surprisingly little work has aimed to systematically address NMS. The de-facto standard for NMS is based on greedy clustering with a fixed distance threshold, which forces a trade-off between recall and precision. We propose a convnet designed to perform NMS of a given s...
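The greedy baseline with a fixed threshold that the paper criticizes can be sketched in a few lines. Boxes, scores, and the IoU threshold below are illustrative:

```python
# Greedy non-maximum suppression with a fixed IoU threshold, the
# de-facto standard the paper proposes to replace with a convnet.
# Boxes are (x1, y1, x2, y2, score).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, thresh=0.5):
    """Keep highest-scoring boxes, dropping overlaps above thresh."""
    keep = []
    for box in sorted(boxes, key=lambda b: -b[4]):
        if all(iou(box, k) <= thresh for k in keep):
            keep.append(box)
    return keep

detections = [
    (0, 0, 10, 10, 0.9),    # kept: top score
    (1, 1, 11, 11, 0.8),    # suppressed: IoU with the first ~0.68
    (20, 20, 30, 30, 0.7),  # kept: disjoint from the others
]
kept = nms(detections)  # two boxes survive
```

The single `thresh` parameter is exactly the fixed distance threshold that forces the recall/precision trade-off discussed above.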
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
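The Estrada index defined above is straightforward to compute from the adjacency spectrum. As a sketch, the example below evaluates it for a small bicyclic graph (the "bowtie": two triangles sharing a vertex); the graph choice is ours, not the paper's extremal graph:

```python
# Estrada index EE(G) = sum_i exp(lambda_i(G)) over the eigenvalues
# of the adjacency matrix, computed for the 5-vertex bowtie graph.
import numpy as np

def estrada_index(adj):
    """EE(G) for a symmetric 0/1 adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(adj)
    return float(np.sum(np.exp(eigenvalues)))

# Bowtie graph: vertices 0-4, triangles {0, 1, 2} and {2, 3, 4}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

ee = estrada_index(A)  # roughly 16.6 for this graph
```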
Zak, M.
1998-01-01
Quantum analog computing is based upon the similarity between the mathematical formalism of quantum mechanics and the phenomena to be computed. It exploits a dynamical convergence of several competing phenomena to an attractor which can represent an extremum of a function, an image, a solution to a system of ODEs, or a stochastic process.
Štěpán, Pavel
Cluj : Mega, 2013 - (Felecan, O.), s. 379-383 ISBN 978-606-543-343-4. [Name and Naming /2./ Onomastics in Contemporary Public Space. Baia Mare (RO), 09.05.2013-11.05.2013] R&D Projects: GA ČR GPP406/12/P600 Institutional support: RVO:68378092 Keywords : onomastics * toponyms * analogy Subject RIV: AI - Linguistics
High-resolution distributed sampling of bandlimited fields with fixed-precision sensors
Kumar, Animesh; Ramchandran, Kannan
2007-01-01
The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained, processing environment is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors, operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of having a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval is demonstrated. This exposes an underlying ``conservation of bits'' principle: the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor-precision) and space (sensor density) in an ...
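The density-versus-precision trade-off behind the "conservation of bits" principle can be illustrated with a toy example. This is emphatically not the authors' reconstruction scheme (which achieves exponential accuracy); it only shows two ways of spending the same per-snapshot bit budget, on one precise sensor or on many coarse dithered ones:

```python
# Toy illustration of a fixed bit budget split between sensor density
# and ADC precision: one 8-bit sensor vs. eight 1-bit dithered
# sensors, both spending 8 bits to measure one constant field value.
import random

random.seed(0)
value = 0.37  # field amplitude in [0, 1)

# (a) One sensor with an 8-bit uniform quantizer:
# error bounded by half an LSB, i.e. 2**-9.
est_precise = round(value * 2**8) / 2**8

# (b) Eight 1-bit sensors with random dither: each reports whether
# the value exceeds its own uniform random threshold; the mean is an
# unbiased but noisier estimate of the value.
n = 8
est_dither = sum(value > random.random() for _ in range(n)) / n
```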
Dawar, Anuj; Gurevich, Yuri
2002-01-01
We consider fixed point logics, i.e., extensions of first order predicate logic with operators defining fixed points. A number of such operators, generalizing inductive definitions, have been studied in the context of finite model theory, including nondeterministic and alternating operators. We review results established in finite model theory, and also consider the expressive power of the resulting logics on infinite structures. In particular, we establish the relationship between inflationa...
Fixed mobile convergence handbook
Ahson, Syed A
2010-01-01
From basic concepts to future directions, this handbook provides technical information on all aspects of fixed-mobile convergence (FMC). The book examines such topics as integrated management architecture, business trends and strategic implications for service providers, personal area networks, mobile controlled handover methods, SIP-based session mobility, and supervisory and notification aggregator service. Case studies are used to illustrate technical and systematic implementation of unified and rationalized internet access by fixed-mobile network convergence. The text examines the technolo
Picard Sequence and Fixed Point Results on b-Metric Spaces
Marta Demma
2015-01-01
We obtain some fixed point results for single-valued and multivalued mappings in the setting of a b-metric space. These results are generalizations of the analogous ones recently proved by Khojasteh, Abbas, and Costache.
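The Picard sequence in the title is the iterates x, f(x), f(f(x)), ..., whose convergence to a fixed point underlies results of this kind. As a hedged sketch, the example below runs the iteration for a contraction on the real line (an ordinary metric space, not a general b-metric space; the mapping is our own illustration):

```python
# Picard iteration x_{n+1} = f(x_n) for a contraction mapping.
# For a contraction with Lipschitz constant q < 1, the Banach fixed
# point theorem guarantees convergence to the unique fixed point.

def picard(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate f until successive terms are within tol."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no convergence within max_iter")

f = lambda x: 0.5 * x + 1.0   # contraction (q = 0.5); fixed point x* = 2
fixed = picard(f, x0=0.0)
```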
Optimum quantum states for interferometers with fixed and moving mirrors
Luis Aina, Alfredo
2004-01-01
We address a systematic approach to the study of the optimum states reaching maximum resolution for interferometers with moving mirrors. We find a correspondence between the optimum states for interferometers with fixed and moving mirrors.
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
\\verb+~+\\$\\backslash\\$cite{ramsay97} to functional maximum autocorrelation factors (MAF)\\verb+~+\\$\\backslash\\$cite{switzer85,larsen2001d}. We apply the method to biological shapes as well as reflectance spectra. {\\$\\backslash\\$bf Methods}. MAF seeks linear combination of the original variables that maximize autocorrelation between...
Maximum abundant isotopes correlation
The neutron excess of the most abundant isotopes of the element shows an overall linear dependence upon the neutron number for nuclei between neutron closed shells. This maximum abundant isotopes correlation supports the arguments for a common history of the elements during nucleosynthesis. (Auth.)
Weak scale from the maximum entropy principle
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_{BBN}^2/(M_{pl} y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
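The quoted scaling can be sanity-checked numerically. The input values below are standard order-of-magnitude figures (approximate, and our own choice); this only verifies that the combination lands near the stated O(300 GeV), not the derivation itself:

```python
# Rough numerical check of the scaling v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All inputs are approximate, order-of-magnitude values in GeV.

T_BBN = 1e-3    # BBN temperature, ~1 MeV
M_pl = 1.2e19   # Planck mass
y_e = 2.9e-6    # electron Yukawa coupling

v_h = T_BBN**2 / (M_pl * y_e**5)   # a few hundred GeV
```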
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Terrestrial Spaceflight Analogs: Antarctica
Crucian, Brian
2013-01-01
Alterations in immune cell distribution and function, circadian misalignment, stress and latent viral reactivation appear to persist during Antarctic winterover at Concordia Station. Some of these changes are similar to those observed in Astronauts, either during or immediately following spaceflight. Others are unique to the Concordia analog. Based on some initial immune data and environmental conditions, Concordia winterover may be an appropriate analog for some flight-associated immune system changes and mission stress effects. An ongoing smaller control study at Neumayer III will address the influence of the hypoxic variable. Changes were observed in the peripheral blood leukocyte distribution consistent with immune mobilization, and similar to those observed during spaceflight. Alterations in cytokine production profiles were observed during winterover that are distinct from those observed during spaceflight, but potentially consistent with those observed during persistent hypobaric hypoxia. The reactivation of latent herpesviruses was observed during overwinter/isolation, that is consistently associated with dysregulation in immune function.
Everett, Keith R.
2005-01-01
The purpose of this project is to investigate the feasibility of and methodology for the development of a set of environmental analogs of operational Undersea Warfare (USW) areas within fleet training areas. It is primarily a discussion of the identification of parameters that characterize the tactical USW environment, prioritization of these parameters, identification of existing databases that contain these parameters and an outline of the processes required to extract the desired data fro...
Analogy, Explanation, and Proof
John eHummel
2014-11-01
People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the (inductive) inference that the milk is sour have in common with the (deductive) proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This small-seeming difference poses a challenge to the task of marshaling our understanding of analogical reasoning in the service of understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.
Stojković, Nino
2006-01-01
In this paper the Asymmetric Digital Subscriber Line (ADSL) analog front end (AFE) designs are described and compared. AFE is the part of ADSL modems most responsible for quality signal transmission over phone wires. It can be divided into the transmitting path (TX) circuitry, the receiving path (RX) circuitry and the hybrid network and transformer. The operations and realizations of each functional block are presented. There are the D/A converter, the filter and the line driver in the TX pat...
Caloz, Christophe; Gupta, Shulabh; Zhang, Qingfeng; Nikfal, Babak
2013-01-01
Analog signal processing (ASP) is presented as a systematic approach to address future challenges in high speed and high frequency microwave applications. The general concept of ASP is explained with the help of examples emphasizing basic ASP effects, such as time spreading and compression, chirping and frequency discrimination. Phasers, which represent the core of ASP systems, are explained to be elements exhibiting a frequency-dependent group delay response, and hence a nonlinear phase resp...
Kipping, David M; Henze, Chris; Teachey, Alex; Isaacson, Howard T; Petigura, Erik A; Marcy, Geoffrey W; Buchhave, Lars A; Chen, Jingjing; Bryson, Steve T; Sandford, Emily
2016-01-01
Decadal-long radial velocity surveys have recently started to discover analogs to the most influential planet of our solar system, Jupiter. Detecting and characterizing these worlds is expected to shape our understanding of our uniqueness in the cosmos. Despite the great successes of recent transit surveys, Jupiter analogs represent a terra incognita, owing to the strong intrinsic bias of this method against long orbital periods. We here report on the first validated transiting Jupiter analog, Kepler-167e (KOI-490.02), discovered using Kepler archival photometry orbiting the K4-dwarf KIC-3239945. With a radius of $(0.91\pm0.02)$ $R_{\mathrm{Jup}}$, a low orbital eccentricity ($0.06_{-0.04}^{+0.10}$) and an equilibrium temperature of $(131\pm3)$ K, Kepler-167e bears many of the basic hallmarks of Jupiter. Kepler-167e is accompanied by three Super-Earths on compact orbits, which we also validate, leaving a large cavity of transiting worlds around the habitable zone. With two transits and continuous photometric ...
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Spiral structure in galaxies: analogies
Kirkpatrick, R.C.
1976-01-01
The vortex analogy to galactic spiral structures is considered. Caution against carrying the analogy past its region of applicability is noted, and some experiments with vorticities are mentioned. (JFP)
Mattiussi, Claudio; Swiss Federal Institute of Technology in Lausanne (EPFL); Marbach, Daniel; Swiss Federal Institute of Technology in Lausanne (EPFL); Dürr, Peter; Swiss Federal Institute of Technology in Lausanne (EPFL); Floreano, Dario; Swiss Federal Institute of Technology in Lausanne (EPFL)
2008-01-01
A large class of systems of biological and technological relevance can be described as analog networks, that is, collections of dynamical devices interconnected by links of varying strength. Some examples of analog networks are genetic regulatory networks, metabolic networks, neural networks, analog electronic circuits, and control systems. Analog networks are typically complex systems which include nonlinear feedback loops and possess temporal dynamics at different timescales. When tackled b...
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Lynnes, Chris
2014-01-01
Three current search engines are queried for ozone data at the GES DISC. The results range from sub-optimal to counter-intuitive. We propose a method to fix dataset search by implementing a robust relevancy ranking scheme. The relevancy ranking scheme is based on several heuristics culled from more than 20 years of helping users select datasets.
ESD analog circuits and design
Voldman, Steven H
2014-01-01
A comprehensive and in-depth review of analog circuit layout, schematic architecture, device, power network and ESD design This book will provide a balanced overview of analog circuit design layout, analog circuit schematic development, architecture of chips, and ESD design. It will start at an introductory level and will bring the reader right up to the state-of-the-art. Two critical design aspects for analog and power integrated circuits are combined. The first design aspect covers analog circuit design techniques to achieve the desired circuit performance. The second and main aspect pres
Izadi, F A; Bagirov, G
2009-01-01
With its origins stretching back several centuries, discrete calculus is now an increasingly central methodology for many problems related to discrete systems and algorithms. The topics covered here usually arise in many branches of science and technology, especially in discrete mathematics, numerical analysis, statistics and probability theory as well as in electrical engineering, but our viewpoint here is that these topics belong to a much more general realm of mathematics; namely calculus and differential equations because of the remarkable analogy of the subject to this branch of mathemati
Lyon, Richard F.; Mead, Carver
1988-01-01
An engineered system that hears, such as a speech recognizer, can be designed by modeling the cochlea, or inner ear, and higher levels of the auditory nervous system. To be useful in such a system, a model of the cochlea should incorporate a variety of known effects, such as an asymmetric low-pass/bandpass response at each output channel, a short ringing time, and active adaptation to a wide range of input signal levels. An analog electronic cochlea has been built in CMOS VLSI technolog...
HAPS, a Handy Analog Programming System
Højberg, Kristian Søe
1975-01-01
HAPS (Handy Analog Programming System) is an analog compiler that can be run on a minicomputer in an interactive mode. Essentially HAPS is written in FORTRAN. The equations to be programmed for an analog computer are read in by using a FORTRAN-like notation. The input must contain maximum … and minimum values for the variables. The output file includes potentiometer coefficients and static-test 'measuring values.' The file format is fitted to an automatic potentiometer-setting and static-test program. Patch instructions are printed by HAPS. The article describes the principles of HAPS … and emphasizes the limitations HAPS puts on equation structure, types of computing circuit, scaling, and static testing. …
Analog fault diagnosis by inverse problem technique
Ahmed, Rania F.
2011-12-01
A novel algorithm for detecting soft faults in linear analog circuits, based on the inverse problem concept, is proposed. The approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thus to detect and diagnose single faults in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show a fault-detection efficiency of 100% and a maximum error in the estimated parameter values of 0.7%. The technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
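The Mean Energy Model mentioned in this abstract has a compact numerical illustration: among all distributions with a prescribed mean energy, entropy is maximized by a Gibbs distribution $p_i \propto e^{-\beta E_i}$, with $\beta$ chosen to meet the constraint. A minimal sketch (the energy levels and target mean below are toy values of ours, not the paper's):

```python
import numpy as np

def gibbs_for_mean_energy(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Bisect on beta so that the Gibbs distribution p_i ∝ exp(-beta*E_i)
    attains the prescribed mean energy (assumed attainable)."""
    E = np.asarray(energies, dtype=float)

    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))  # shift exponent for numerical stability
        p = w / w.sum()
        return p @ E, p

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        m, p = mean_energy(mid)
        if m > target_mean:   # mean energy decreases as beta grows
            lo = mid
        else:
            hi = mid
    return p

E = [0.0, 1.0, 2.0, 3.0]            # toy energy levels (assumption)
p = gibbs_for_mean_energy(E, 1.2)   # toy mean-energy constraint (assumption)
entropy = -(p * np.log(p)).sum()
```

Any other distribution meeting the same moment constraint has lower entropy, which is exactly the sense in which the maximum-entropy strategy is optimal in the Code Length Game.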
The maximum number of minimal codewords in long codes
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.;
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...
Energy-Efficient Large-Scale Antenna Systems with Hybrid Digital-Analog Beamforming Structure
Shuangfeng Han; ChihLin I; Zhikun Xu; Qi Sun; Haibin Li
2015-01-01
A large-scale antenna system (LSAS) with digital beamforming is expected to significantly increase energy efficiency (EE) and spectral efficiency (SE) in a wireless communication system. However, there are many challenging issues related to calibration, energy consumption, and cost in implementing a digital beamforming structure in an LSAS. In a practical LSAS deployment, hybrid digital-analog beamforming structures with active antennas can be used. In this paper, we investigate the optimal antenna configuration in an N × M beamforming structure, where N is the number of transceivers and M is the number of active antennas per transceiver; analog beamforming is introduced within individual transceivers and digital beamforming across all N transceivers. We analyze the green point, which is the point of maximum EE on the EE-SE curve, and show that the log-scale EE scales linearly with SE along a slope of -lg2/N. We investigate the effect of M on EE for a given SE value in the case of fixed NM and in the case of independent N and M. In both cases, there is a unique optimal M that results in optimal EE. In the case of independent N and M, there is no optimal (N, M) combination for optimizing EE. The results of numerical simulations are provided, and these results support our analysis.
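The quoted slope can be made plausible with a simple Shannon-style rate-power sketch (an assumption of ours, not the paper's exact system model): splitting SE evenly over the N digital streams, the required per-stream SNR grows exponentially in SE/N, so

```latex
\mathrm{SE} = N\log_2(1+\gamma)
\;\Longrightarrow\;
P_{\mathrm{tx}} \propto \gamma = 2^{\mathrm{SE}/N}-1,
\qquad
\lg \mathrm{EE} = \lg\frac{\mathrm{SE}}{P_{\mathrm{tot}}}
\approx -\frac{\lg 2}{N}\,\mathrm{SE} + \lg \mathrm{SE} + \mathrm{const},
```

whose dominant linear term in SE carries the slope -lg2/N at high SE.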
Probable maximum flood control
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels among the training samples, because such loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
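The regularized MCC idea described here can be sketched with a toy linear predictor: maximize the Gaussian-kernel correntropy between predictions and labels, minus an L2 penalty, by gradient ascent. The kernel width, step size, data, and outlier setup are illustrative assumptions of ours, not the paper's experiments:

```python
import numpy as np

def regularized_mcc_fit(X, y, sigma=1.0, lam=0.01, lr=0.1, steps=500):
    """Learn w maximizing mean correntropy between X @ w and y, minus lam*||w||^2.
    The Gaussian kernel exponentially down-weights large residuals (outliers)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        e = y - X @ w                              # residuals
        k = np.exp(-e**2 / (2 * sigma**2))         # Gaussian kernel weights
        # gradient of (1/n) * sum_i k_i  -  lam * ||w||^2  with respect to w
        grad = (X.T @ (k * e)) / (sigma**2 * len(y)) - 2 * lam * w
        w += lr * grad                             # gradient ascent step
    return w

# toy regression: y = 2*x, with a few grossly mislabeled samples (assumed setup)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0]
y[:5] += 20.0                                      # label noise / outliers
w = regularized_mcc_fit(X, y)
```

Because the kernel weight of the five corrupted samples is essentially zero, the learned slope stays near 2, whereas an unweighted squared-loss fit would be pulled toward the outliers.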
Bieber, J W; Engel, R; Gaisser, T K; Roesler, S; Stanev, T; Bieber, John W.; Engel, Ralph; Gaisser, Thomas K.; Roesler, Stefan; Stanev, Todor
1999-01-01
New measurements with good statistics will make it possible to observe the time variation of cosmic antiprotons at 1 AU through the approaching peak of solar activity. We report a new computation of the interstellar antiproton spectrum expected from collisions between cosmic protons and the interstellar gas. This spectrum is then used as input to a steady-state drift model of solar modulation, in order to provide predictions for the antiproton spectrum as well as the antiproton/proton ratio at 1 AU. Our model predicts a surprisingly large, rapid increase in the antiproton/proton ratio through the next solar maximum, followed by a large excursion in the ratio during the following decade.
Biomedical sensor design using analog compressed sensing
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathic surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate equal to 29% of the Nyquist rate, with Percentage Residual Difference (PRD) = 26% and Root Mean Squared Error (RMSE) = 3%.
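The compress-then-reconstruct pipeline described above can be illustrated with a generic digital CS sketch: random measurements of a sparse signal, recovered by orthogonal matching pursuit. The dimensions and signal are toy assumptions of ours, standing in for the paper's analog front end and sEMG data:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best-matching column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 50, 4                        # signal length, measurements, sparsity (toy sizes)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix (the "compression" step)
y = A @ x_true                              # m << n compressed measurements
x_rec = omp(A, y, k)
```

With m well above the sparsity level, the greedy recovery is exact with high probability, which is the property the RIP-based reconstruction in the abstract relies on.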
K.A. Chernova
2010-06-01
One of the main socio-economic problems in Russia is the high cost and the poor condition of housing. Such goals as cost reduction, shorter installation times and a longer service life of structures are accomplished by creating new technologies for erecting buildings and developing methods of quick construction using different types of fixed formwork. One of them is textstone. Textstone is an artificial construction stone, containing on the outer surface a reinforcing fine-mesh shell with multifunctional properties, formed by the interwoven threads of a vigorous fixed textile formwork material (basalt, linen, silica and other glass yarns adhered by a binding material). The innovative construction technology for the production and installation of a new generation of textstone buildings has been registered as the brand TextStone. The fundamental difference between textstone and reinforced concrete and all known building materials is that the whole outer surface of the solidified light binder is protected by a strong, vigorous and fixed formwork made from inexpensive textile materials. Manufacturing the textile shell allows using it as an internal finishing material, reducing or eliminating the cost of finishing work. The use of fixed textile construction shuttering during the construction of buildings has obvious technical, economic, operational, sanitary and environmental benefits: short construction time (from 3 to 10 days), compact packaging and light weight of fabric shells, high fire resistance, frost resistance, ease of engineering services installation in the hollow communicating shells, and a minimal amount of finishing, roofing, heat and noise insulation works. Textstone is a durable solid monolithic construction that provides high viability and resistance to earthquakes, hurricane winds, solar heat and frost. The material complies with all sanitary and environmental requirements. Due to such physical, mechanical, operational, sanitary and ecological characteristics
Fixed field alternating gradient
Machida, Shinji
2013-01-01
The concept of a fixed field alternating gradient (FFAG) accelerator was invented in the 1950s. Although many studies were carried out up to the late 1960s, there was relatively little progress until recently, when it received widespread attention as a type of accelerator suitable for very fast acceleration and for generating high-power beams. In this paper, we describe the principles and design procedure of an FFAG accelerator.
K.A. Chernova; N.V. Paranicheva
2010-01-01
One of the main socio-economic problems in Russia is the high cost and the poor condition of housing. Such goals as cost reduction, shorter installation times and a longer service life of structures are accomplished by creating new technologies for erecting buildings and developing methods of quick construction using different types of fixed formwork. One of them is textstone. Textstone is an artificial construction stone, containing on the outer surface the reinforcing fine-mesh shell with mu...
Cropp, Bethan; Liberati, Stefano; Turcati, Rodrigo
2016-06-01
In the analog gravity framework, the acoustic disturbances in a moving fluid can be described by an equation of motion identical to that of a relativistic massless scalar field propagating in curved space-time. This description is possible only when the fluid under consideration is barotropic, inviscid, and irrotational. In this case, the propagation of the perturbations is governed by an acoustic metric that depends algebraically on the local speed of sound, the density, and the background flow velocity, the latter assumed to be vorticity-free. In this work we provide a straightforward extension in order to go beyond the irrotational constraint. Using a charged (relativistic and nonrelativistic) Bose–Einstein condensate as a physical system, we show that in the low-momentum limit, performing the eikonal approximation, we can derive a d'Alembertian equation of motion for the charged phonons, where the emergent acoustic metric depends on the flow velocity in the presence of vorticity.
Ochoa, Agustin
2016-01-01
This book describes a consistent and direct methodology to the analysis and design of analog circuits with particular application to circuits containing feedback. The analysis and design of circuits containing feedback is generally presented by either following a series of examples where each circuit is simplified through the use of insight or experience (someone else’s), or a complete nodal-matrix analysis generating lots of algebra. Neither of these approaches leads to gaining insight into the design process easily. The author develops a systematic approach to circuit analysis, the Driving Point Impedance and Signal Flow Graphs (DPI/SFG) method that does not require a-priori insight to the circuit being considered and results in factored analysis supporting the design function. This approach enables designers to account fully for loading and the bi-directional nature of elements both in the feedback path and in the amplifier itself, properties many times assumed negligible and ignored. Feedback circuits a...
Pires, Bernardo Esteves
2010-01-01
The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.
Analogy-Based Expectation Equilibrium
Jehiel, P
2001-01-01
It is assumed that players bundle nodes in which other players must move into analogy classes, and players only have expectations about the average behavior in every class. A solution concept is proposed for multi-stage games with perfect information: at every node players choose best-responses to their analogy-based expectations, and expectations are correct on average over those various nodes pooled together into the same analogy classes. The approach is applied to a variety of games. It is...
Beginning analog electronics through projects
Singmin, Andrew
2001-01-01
Analog electronics is the simplest way to start a fun, informative, learning program. Beginning Analog Electronics Through Projects, Second Edition was written with the needs of beginning hobbyists and students in mind. This revision of Andrew Singmin's popular Beginning Electronics Through Projects provides practical exercises, building techniques, and ideas for useful electronics projects. Additionally, it features new material on analog and digital electronics, and new projects for troubleshooting test equipment.Published in the tradition of Beginning Electronics Through Projects an
Analog and digital signal processing
Baher, H.
The techniques of signal processing in both the analog and digital domains are addressed in a fashion suitable for undergraduate courses in modern electrical engineering. The topics considered include: spectral analysis of continuous and discrete signals, analysis of continuous and discrete systems and networks using transform methods, design of analog and digital filters, digitization of analog signals, power spectrum estimation of stochastic signals, FFT algorithms, finite word-length effects in digital signal processes, linear estimation, and adaptive filtering.
Load Cell Response Correction Using Analog Adaptive Techniques
Jafaripanah, Mehdi; Al-Hashimi, Bashir; White, Neil M.
2003-01-01
Load cell response correction can be used to speed up the process of measurement. This paper investigates the application of analog adaptive techniques in load cell response correction. The load cell is a sensor with an oscillatory output in which the measurand contributes to response parameters. Thus, a compensation filter needs to track variation in measurand whereas a simple, fixed filter is only valid at one load value. To facilitate this investigation, computer models for the load cell a...
Fixed type incore instrumentation system
The present invention concerns a fixed type incore instrumentation system for use in a BWR type reactor. A sensitivity ratio of the axially disposed gamma thermometer detection portions is determined based on output signals when the heater incorporated in the gamma thermometer generates heat and when it does not, and the ratio of the gamma heat generation amounts of the detection portions is determined based on the above-mentioned sensitivity ratio. The absolute value of the gamma heat generation amount is determined so as to agree with the radial components of the power distribution of the reactor core obtained using a physical model of the reactor core, and also so as to correspond to the output distribution just after disposing the neutron detectors. It is monitored whether the change of the sensitivity ratio of the gamma thermometer detection portions is uniform for the detection portions in the axial direction or is increased only in a certain detection portion, to judge failure of the incorporated heater or of the sensors. The detection portion used as the standard for the sensitivity ratio is chosen to have the maximum output signal in the axial direction, so as to minimize the error of the output signals. (N.H.)
Isolated transfer of analog signals
Bezdek, T.
1974-01-01
The technique transfers analog signal levels across a high-isolation boundary without circuit performance being affected by magnetizing reactance or leakage inductance. Transfers of analog information across the isolated boundary are made by interrupting the signal flow with a switch, in such a manner as to produce an alternating signal which is applied to a transformer.
Drawing Analogies in Environmental Education
Affifi, Ramsey
2014-01-01
Reconsidering the origin, process, and outcomes of analogy-making suggests practices for environmental educators who strive to disengage humans from the isolating illusions of dichotomizing frameworks. We can view analogies as outcomes of developmental processes within which human subjectivity is but an element, threading our sense of self back…
Analog elements for transuranic chemistries
A chemical extraction technique for estimating the biologically available fraction of nonessential trace elements in soils has been developed. This procedure has been used in evaluating the uptake of naturally occurring transuranic analog elements from soils into several foodstuffs. The availability of the natural elements has been compared with the availability of their analog transuranics which have been derived from global fallout
Natural analog studies: Licensing perspective
Bradbury, J.W. [Nuclear Regulatory Commission, Washington, DC (United States)
1995-09-01
This report describes the licensing perspective on the term "natural analog studies" as used in 10 CFR Part 60. It describes the misunderstandings related to its definition which have become evident during discussions at U.S. Nuclear Regulatory Commission meetings and tries to clarify the appropriate applications of natural analog studies to aspects of repository site characterization.
Fixed-bed pyrolysis of rapeseed (Brassica napus L.)
Fixed-bed slow and fast pyrolysis experiments have been conducted on a sample of rapeseed. The experiments were performed in two different pyrolysis reactors, namely a fixed-bed Heinze retort and a well-swept fixed-bed tubular retort, to investigate the effects of heating rate, pyrolysis temperature, particle size and sweep gas velocity on the pyrolysis product yields and chemical compositions. The maximum oil yield of 51.7% was obtained in the Heinze reactor at 550 deg. C, with a particle size range of +0.6-1.8 mm (sweep gas 100 cm3 min-1 N2) at a heating rate of 30 deg. C min-1. In the well-swept fixed-bed reactor, the maximum oil yield of 68% was obtained at a heating rate of 300 deg. C min-1. Chromatographic and spectroscopic studies on the pyrolytic oil showed that the oil obtained from rapeseed could be used as a renewable fuel and chemical feedstock
A series of brief notes were included with this presentation which highlighted certain aspects of contract management. Several petroleum companies have realized the benefits of taking advantage of contract personnel to control fixed G and A, manage the impacts on their organization, contain costs, to manage termination costs, and to fill gaps in lean personnel rosters. An independent contractor was described as being someone who is self employed, often with a variety of work experiences. The tax benefits and flexibility of contractor personnel were also described. Some liability aspects of hiring an independent contractor were also reviewed. The courts have developed the following 4 tests to help determine whether an individual is an employee or an independent contractor: (1) the control test, (2) the business integration test, (3) specific result test, and (4) the economic reality test
Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio
2015-12-01
Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital in order to cover any significant geographic coverage. Hence many companies are forming partnerships and joint-ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bi-lateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.
Pelgrom, Marcel J M
2010-01-01
The design of an analog-to-digital converter or digital-to-analog converter is one of the most fascinating tasks in micro-electronics. In a converter the analog world with all its intricacies meets the realm of the formal digital abstraction. Both disciplines must be understood for an optimum conversion solution. In a converter also system challenges meet technology opportunities. Modern systems rely on analog-to-digital converters as an essential part of the complex chain to access the physical world. And processors need the ultimate performance of digital-to-analog converters to present the results of their complex algorithms. The same progress in CMOS technology that enables these VLSI digital systems creates new challenges for analog-to-digital converters: lower signal swings, less power and variability issues. Last but not least, the analog-to-digital converter must follow the cost reduction trend. These changing boundary conditions require micro-electronics engineers to consider their design choices for...
Molecular modeling of fentanyl analogs
LJILJANA DOSEN-MICOVIC
2004-11-01
Fentanyl is a highly potent and clinically widely used narcotic analgesic. A large number of its analogs have been synthesized, some of which (sufentanil and alfentanil) are also in clinical use. Theoretical studies in recent years have afforded a better understanding of the structure-activity relationships of this class of opiates and allowed insight into the molecular mechanism of the interactions of fentanyl analogs with their receptors. An overview of the current computational techniques for modeling fentanyl analogs, their receptors and ligand-receptor interactions is presented in this paper.
Analog Systems for Gravity Duals
Hossenfelder, S.
2014-01-01
We show that analog gravity systems exist for charged, planar black holes in asymptotic Anti-de Sitter space. These black holes have been employed to describe, via the gauge-gravity duality, strongly coupled condensed matter systems on the boundary of AdS-space. The analog gravity system is a different condensed matter system that, in a suitable limit, describes the same bulk physics as the theory on the AdS boundary. This combination of the gauge-gravity duality and analog gravity therefore ...
Analog filters in nanometer CMOS
Uhrmann, Heimo; Zimmermann, Horst
2014-01-01
Starting from the basics of analog filters and the poor transistor characteristics in nanometer CMOS, 10 high-performance analog filters developed by the authors in 120 nm and 65 nm CMOS are described extensively. Among them are gm-C filters, current-mode filters, and active filters for system-on-chip realization for Bluetooth, WCDMA, UWB, DVB-H, and LTE applications. For the active filters, several operational amplifier designs are described. The book furthermore contains a review of the newest state of research on low-voltage low-power analog filters. To cover the topic of the book comprehensively, linearization issues and measurement methods for the characterization of advanced analog filters are introduced in addition. Numerous elaborate illustrations promote easy comprehension. This book will be of value to engineers and researchers in industry as well as scientists and Ph.D. students at universities. The book is also recommended to graduate students specializing in nanoelectronics, microelectronics ...
Towards power centric analog design
Svensson, Christer
2015-01-01
Power consumption of analog systems is poorly understood today, in contrast to the very well developed analysis of digital power consumption. We show that there is a good opportunity to develop the understanding of analog power to a level similar to that of digital power. Such an understanding will have a large impact on the design of future electronic systems, where low power consumption will be crucial. Eventually we may reach a power-centric analog design methodology.
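As a concrete anchor for such an analysis, a frequently cited thermodynamic estimate (often attributed to Vittoz; my illustration, not from the paper) bounds the minimum power of an analog stage by P of about 8kT times bandwidth times linear SNR:

```python
# Back-of-the-envelope illustration of why analog power analysis matters:
# a commonly cited thermodynamic estimate puts the minimum power of an
# analog stage at P_min ~ 8*k*T*f*SNR, i.e. power grows linearly with
# both bandwidth and (linear) SNR.
k = 1.380649e-23      # Boltzmann constant, J/K

def analog_power_floor(bandwidth_hz, snr_db, temperature_k=300.0):
    snr_linear = 10 ** (snr_db / 10)
    return 8 * k * temperature_k * bandwidth_hz * snr_linear

# 1 MHz bandwidth at 60 dB SNR: tens of nanowatts as a hard floor.
p = analog_power_floor(1e6, 60.0)
print(f"{p:.2e} W")
```

Unlike digital power, which scales with switched capacitance and supply voltage squared, this floor rises tenfold for every 10 dB of SNR, which is why an analog power theory is needed at all.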
Enumeration of Maximum Acyclic Hypergraphs
Jian-fang Wang; Hai-zhu Li
2002-01-01
Acyclic hypergraphs are analogues of forests in graphs. They are very useful in the design of databases. In this article, the maximum size of an acyclic hypergraph is determined and the number of maximum r-uniform acyclic hypergraphs of order n is shown to be $\binom{n}{r-1}\bigl(n(r-1)-r^2+2r\bigr)^{n-r-1}$.
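The count quoted above has a convenient sanity check: at r = 2, maximum acyclic 2-uniform hypergraphs are labeled trees, and the formula collapses to Cayley's n^(n-2). A short sketch (my illustration, not from the article):

```python
from math import comb

def max_acyclic_count(n, r):
    """Number of maximum r-uniform acyclic hypergraphs of order n,
    per the formula quoted in the abstract:
    C(n, r-1) * (n*(r-1) - r^2 + 2r)^(n - r - 1)."""
    return comb(n, r - 1) * (n * (r - 1) - r * r + 2 * r) ** (n - r - 1)

# Sanity check: for r = 2 the formula reduces to n * n^(n-3) = n^(n-2),
# Cayley's count of labeled trees on n vertices.
print(max_acyclic_count(4, 2))   # 16 labeled trees on 4 vertices
```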
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
Fixed points of quantum gravity
Litim, D. F. (Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK)
2004-01-01
Euclidean quantum gravity is studied with renormalisation group methods. Analytical results for a non-trivial ultraviolet fixed point are found for arbitrary dimensions and gauge fixing parameter in the Einstein-Hilbert truncation. Implications for quantum gravity in four dimensions are discussed.
Gauge fixing and equivariant cohomology
Rogers, Alice [Department of Mathematics, King's College, Strand, London WC2R 2LS (United Kingdom)]
2005-10-07
The supersymmetric model developed by Witten (1982 J. Differ. Geom. 17 661-92) to study the equivariant cohomology of a manifold with an isometric circle action is derived from the BRST quantization of a simple classical model. The gauge-fixing process is carefully analysed, and demonstrates that different choices of gauge-fixing fermion can lead to different quantum theories.
Test signal generation for analog circuits
Burdiek, B.; Mathis, W.
2003-05-01
In this paper a new test signal generation approach for general analog circuits, based on the calculus of variations and modern control theory, is presented. The computed transient test signals, also called test stimuli, are optimal with respect to the detection of a given fault set by means of a predefined merit functional representing a fault detection criterion. The test signal generation problem of finding optimal test stimuli detecting all faults from the fault set is formulated as an optimal control problem. The solution of the optimal control problem, representing the test stimuli, is computed using an optimization procedure. The optimization procedure is based on the necessary conditions for optimality, such as Pontryagin's maximum principle and the adjoint circuit equations.
Test signal generation for analog circuits
B. Burdiek
2003-01-01
In this paper a new test signal generation approach for general analog circuits, based on the calculus of variations and modern control theory, is presented. The computed transient test signals, also called test stimuli, are optimal with respect to the detection of a given fault set by means of a predefined merit functional representing a fault detection criterion. The test signal generation problem of finding optimal test stimuli detecting all faults from the fault set is formulated as an optimal control problem. The solution of the optimal control problem, representing the test stimuli, is computed using an optimization procedure. The optimization procedure is based on the necessary conditions for optimality, such as Pontryagin's maximum principle and the adjoint circuit equations.
Crows spontaneously exhibit analogical reasoning.
Smirnova, Anna; Zorina, Zoya; Obozova, Tanya; Wasserman, Edward
2015-01-19
Analogical reasoning is vital to advanced cognition and behavioral adaptation. Many theorists deem analogical thinking to be uniquely human and to be foundational to categorization, creative problem solving, and scientific discovery. Comparative psychologists have long been interested in the species generality of analogical reasoning, but they initially found it difficult to obtain empirical support for such thinking in nonhuman animals (for pioneering efforts, see [2, 3]). Researchers have since mustered considerable evidence and argument that relational matching-to-sample (RMTS) effectively captures the essence of analogy, in which the relevant logical arguments are presented visually. In RMTS, choice of test pair BB would be correct if the sample pair were AA, whereas choice of test pair EF would be correct if the sample pair were CD. Critically, no items in the correct test pair physically match items in the sample pair, thus demanding that only relational sameness or differentness is available to support accurate choice responding. Initial evidence suggested that only humans and apes can successfully learn RMTS with pairs of sample and test items; however, monkeys have subsequently done so. Here, we report that crows too exhibit relational matching behavior. Even more importantly, crows spontaneously display relational responding without ever having been trained on RMTS; they had only been trained on identity matching-to-sample (IMTS). Such robust and uninstructed relational matching behavior represents the most convincing evidence yet of analogical reasoning in a nonprimate species, as apes alone have spontaneously exhibited RMTS behavior after only IMTS training. PMID:25532894
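The RMTS contingency described above is easy to state as code. The sketch below (helper names are mine, not from the paper) encodes the rule that the correct choice shares the sample pair's relation, not any of its items:

```python
def relation(pair):
    """'same' if the two items match physically, else 'different'."""
    a, b = pair
    return "same" if a == b else "different"

def rmts_choice(sample, test_pairs):
    """Relational matching-to-sample: pick the test pair whose internal
    relation (same/different) matches the sample pair's relation, even
    though no item in that pair physically matches the sample."""
    for pair in test_pairs:
        if relation(pair) == relation(sample):
            return pair
    return None

# As in the abstract: sample AA -> choose BB; sample CD -> choose EF.
print(rmts_choice(("A", "A"), [("B", "B"), ("E", "F")]))
print(rmts_choice(("C", "D"), [("B", "B"), ("E", "F")]))
```

The point of the task design is visible in the code: `relation` discards item identity entirely, so only second-order sameness/differentness can drive the choice.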
Analog electronics for radiation detection
2016-01-01
Analog Electronics for Radiation Detection showcases the latest advances in readout electronics for particle, or radiation, detectors. Featuring chapters written by international experts in their respective fields, this authoritative text: Defines the main design parameters of front-end circuitry developed in microelectronics technologies Explains the basis for the use of complementary metal oxide semiconductor (CMOS) image sensors for the detection of charged particles and other non-consumer applications Delivers an in-depth review of analog-to-digital converters (ADCs), evaluating the pros and cons of ADCs integrated at the pixel, column, and per-chip levels Describes incremental sigma delta ADCs, time-to-digital converter (TDC) architectures, and digital pulse-processing techniques complementary to analog processing Examines the fundamental parameters and front-end types associated with silicon photomultipliers used for single visible-light photon detection Discusses pixel sensors ...
Analog and digital simulation of the radiocardiogram
A mathematical model of the radiocardiogram has been developed to deal with the pulsatile component of the tracing. It is applicable to the bedside radiocardiogram, radionuclide angiocardiographic studies with the scintillation camera, or description of other tracer studies in the central circulation. The model consists of four heart chambers, each ejecting a fixed fraction of its contained tracer with each systole, and a lung delay function. Discrete-variable calculation of end-systolic and end-diastolic tracer content of the heart chambers and lung allowed development of simple, rapid programs for simulation by small digital computers. By this means, curve fitting and estimation of ejection fractions, end-diastolic volumes and mean lung delay may eventually be automated. A better understanding of the problems of extracting diagnostically useful information from such a multiparameter fit should result from study of these simulations. Families of characteristic curves were generated for several disorders where pattern recognition as well as parameter estimation is important. A small, light-weight, portable electronic analog simulator has been developed to permit the same simulation and trial-and-error parameter estimation at the bedside. It puts a real-time tracing onto the chart recorder used for actual radiocardiograms. Its design features are described. Some ideas unifying analog and digital modeling are expressed in differential equations. They provide a framework for future simulation of the radiocardiogram in the irregularly beating heart and an algorithm for potential extraction of detailed chamber-volume curves from the non-equilibrium portion of the radiocardiogram. (author)
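The per-beat bookkeeping of the model, in which each chamber ejects a fixed fraction of its contained tracer with each systole, can be sketched as a discrete-variable simulation (function and parameter names are illustrative, not from the paper):

```python
def simulate_two_chambers(ef1, ef2, beats, bolus=1.0):
    """Discrete per-beat tracer balance for two chambers in series:
    each systole, chamber i ejects a fixed fraction ef_i of its content;
    chamber 1's output becomes chamber 2's input.
    Returns the per-beat (chamber1, chamber2) tracer contents."""
    c1, c2 = bolus, 0.0
    history = []
    for _ in range(beats):
        out1 = ef1 * c1
        out2 = ef2 * c2
        c1 -= out1
        c2 += out1 - out2
        history.append((c1, c2))
    return history

hist = simulate_two_chambers(ef1=0.5, ef2=0.6, beats=20)
```

A single chamber washes out geometrically as (1 - EF)^n, which is the property curve fitting exploits to estimate ejection fractions from the recorded tracing; the lung delay function of the full model is omitted here.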
Multilateral Collaborations in Analog Research
Cromwell, R. l.
2016-01-01
International collaborations in studies utilizing ground-based space flight analogs are an effective means of answering research questions common to the participating agencies. These collaborations bring together worldwide experts to solve important space research questions. Through collaboration, unnecessary duplication of science is reduced and the efficiency of analog use is improved. These studies also share resources among agencies for cost-effective study implementation. Recently, NASA has engaged in collaborations with international partners at a variety of analog sites. The NASA Human Exploration Research Analog (HERA) is currently hosting investigator studies from NASA and from the German Space Agency (DLR). These isolation studies will answer questions in the areas of team cohesion, sleep and circadian rhythms, and neurobehavioral correlates of function. Planning for the next HERA campaign is underway as proposal selections are being made from the International Life Sciences Research Announcement (ILSRA). Studies selected from the ILSRA will be conducted across 4 HERA missions in 2017. NASA is planning collaborative studies with DLR at the :envihab facility in Cologne, Germany. Investigations were recently selected to study the effects of 0.5% CO2 exposure over 30 days of bed rest. These studies will help to determine the fidelity of this ground-based analog for studying the visual impairment and intracranial pressure syndrome. NASA is also planning a multilateral collaboration at :envihab with DLR and the European Space Agency (ESA) to examine artificial gravity as a countermeasure to mitigate the effects of 60 days of bed rest. NASA is also considering collaborations with the Russian Institute for Biomedical Problems (IBMP) in studies that will utilize their Ground-based Experimental Facility (NEK). The NEK is comprised of 4 interconnected modules and a Martian surface simulator. This isolation analog can support 3-10 crew members for long duration
Analogy between gambling and measurement-based work extraction
Vinkler, Dror A.; Permuter, Haim H.; Merhav, Neri
2016-04-01
In information theory, one area of interest is gambling, where mutual information characterizes the maximal gain in wealth growth rate due to knowledge of side information; the betting strategy that achieves this maximum is named the Kelly strategy. In the field of physics, it was recently shown that mutual information can characterize the maximal amount of work that can be extracted from a single heat bath using measurement-based control protocols, i.e. using ‘information engines’. However, to the best of our knowledge, no relation between gambling and information engines has been presented before. In this paper, we briefly review the two concepts and then demonstrate an analogy between gambling, where bits are converted into wealth, and information engines, where bits representing measurements are converted into energy. From this analogy follows an extension of gambling to the continuous-valued case, which is shown to be useful for investments in currency exchange rates or in the stock market using options. Moreover, the analogy enables us to use well-known methods and results from one field to solve problems in the other. We present three such cases: maximum work extraction when the probability distributions governing the system and measurements are unknown, work extraction when some energy is lost in each cycle, e.g. due to friction, and an analysis of systems with memory. In all three cases, the analogy enables us to use known results in order to obtain new ones.
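The central identity on the gambling side of the analogy, that side information raises the optimal (Kelly) growth rate by exactly the mutual information, can be verified numerically. The sketch below uses a binary race with fair 2-for-1 odds; the specific joint distribution is mine, not the paper's:

```python
from math import log2

# Joint distribution p(x, y) of outcome x and side information y.
# Rows: x in {0, 1}; columns: y in {0, 1}. y is a noisy observation of x.
joint = [[0.40, 0.10],
         [0.05, 0.45]]

px = [sum(row) for row in joint]          # marginal p(x)
py = [sum(col) for col in zip(*joint)]    # marginal p(y)

# Kelly growth rate with fair 2-for-1 odds: bet b(x) = p(x) without side
# information, b(x|y) = p(x|y) with it.  W = E[log2(2 * b)].
growth_no_info = sum(p * log2(2 * p) for p in px)
growth_with_info = sum(
    joint[x][y] * log2(2 * joint[x][y] / py[y])
    for x in range(2) for y in range(2)
)

# Mutual information I(X;Y) in bits.
mi = sum(
    joint[x][y] * log2(joint[x][y] / (px[x] * py[y]))
    for x in range(2) for y in range(2)
)

print(round(growth_with_info - growth_no_info, 6), round(mi, 6))
```

The gap between the two growth rates equals I(X;Y) exactly (Kelly's classic result), which is precisely the quantity the paper maps onto extractable work in the information-engine picture.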
Application handbook for analog IC
This book consists of ten chapters: a prologue on analog and digital ICs; the basic functions of op amps; DC and AC characteristics; op amps and linear circuits; nonlinear arithmetic circuits; filter circuits; oscillation circuits and V-F converters; D-A converters; A-D converters, introducing 8-bit and 12-bit types; and ICs for power supplies and switching regulators. Each chapter explains the specific functions of the circuits it covers, such as filter circuits and converters. This book thus serves as an application handbook for analog ICs.
Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. Delicate neutrino oscillation effects could become
International Alligator Rivers Analog Project
The Australian Nuclear Science and Technology Organization (ANSTO), the Japan Atomic Energy Research Institute, the Swedish Nuclear Power Inspectorate, the U.K. Department of the Environment, the US Nuclear Regulatory Commission (NRC), and the Power Reactor and Nuclear Fuel Development Corporation of Japan are participating under the aegis of the Nuclear Energy Agency in the International Alligator Rivers Analog Project. The project has a duration of 3 yr, starting in 1988. The project has grown out of a research program on uranium ore bodies as analogs of high-level waste (HLW) repositories undertaken by ANSTO supported by the NRC. A primary objective of the project is to develop an approach to radionuclide transport model validation that may be used by the participants to support assessments of the safety of radioactive waste repositories. The approach involves integrating mathematical and physical modeling with hydrological and geochemical field and laboratory investigations of the analog site. The Koongarra uranium ore body has been chosen as the analog site because it has a secondary ore body that has formed over the past million years as a result of leaching by groundwater flowing through fractures in the primary ore body
Analog Input Data Acquisition Software
Arens, Ellen
2009-01-01
DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.
Multichannel analog temperature sensing system
Gribble, R.
1985-08-01
A multichannel system that protects the numerous and costly water-cooled magnet coils on the translation section of the FRX-C/T magnetic fusion experiment is described. The system comprises a thermistor for each coil, a constant current circuit for each thermistor, and a multichannel analog-to-digital converter interfaced to the computer.
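The abstract does not give the resistance-to-temperature conversion, but a typical implementation for such a thermistor channel applies the Steinhart-Hart equation to R = V/I obtained from the constant-current readout. The coefficients below are illustrative values for a generic 10 kOhm NTC thermistor, not from the paper:

```python
from math import log

# Illustrative Steinhart-Hart coefficients for a generic 10 kOhm NTC
# thermistor (example values, not from the paper).
A, B, C = 1.009249522e-3, 2.378405444e-4, 2.019202697e-7

def thermistor_temp_c(v_measured, i_source):
    """Constant-current readout: the ADC digitizes the thermistor
    voltage, so R = V / I; Steinhart-Hart then maps resistance to
    temperature: 1/T = A + B*ln(R) + C*ln(R)^3."""
    r = v_measured / i_source
    ln_r = log(r)
    t_kelvin = 1.0 / (A + B * ln_r + C * ln_r ** 3)
    return t_kelvin - 273.15

# A 100 uA source across ~10 kOhm gives ~1 V near room temperature.
print(round(thermistor_temp_c(1.0, 100e-6), 1))
```

The constant-current drive makes the measured voltage directly proportional to resistance, which is why one such circuit per thermistor simplifies the multichannel digitization.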
The current situation of Heavy Flavor physics at fixed target experiments is reviewed. High statistics charm production and decay data are summarized and new results on beauty physics are presented. (author)
National Radiological Fixed Lab Data
U.S. Environmental Protection Agency — The National Radiological Fixed Laboratory Data Asset includes data produced in support of various clients such as other EPA offices, EPA Regional programs, DOE,...
Elevated Fixed Platform Test Facility
Federal Laboratory Consortium — The Elevated Fixed Platform (EFP) is a helicopter recovery test facility located at Lakehurst, NJ. It consists of a 60 by 85 foot steel and concrete deck built atop...
Fixed Points of abelian actions
Franks, John; Handel, Michael; Parwani, Kamlesh
2006-01-01
We prove that if $\mathcal{F}$ is an abelian group of $C^1$ diffeomorphisms isotopic to the identity of a closed surface $S$ of genus at least two, then there is a common fixed point for all elements of $\mathcal{F}$.
Epsilon Nielsen fixed point theory
Brown, Robert F.
2006-01-01
Let $f$ be a map of a compact, connected Riemannian manifold $X$, with or without boundary. For $\epsilon > 0$ sufficiently small, we introduce an $\epsilon$-Nielsen number $N_\epsilon(f)$ that is a lower bound for the number of fixed points of all self-maps of $X$ that are $\epsilon$-homotopic to $f$. We prove that there is always a map $g$ that is $\epsilon$-homotopic to $f$ such that $g$ has exactly $N_\epsilon(f)$ fixed points. We describe procedures for calculating $N_\epsilon(f)$ for maps of $1$-manifolds.
The infra-red fixed points are determined for the parameters of the MSSM. They dominate the renormalisation group running when the top-Yukawa is in the quasi-fixed point regime (i.e. large at the GUT scale). We examine this behaviour analytically, by solving the full set of one-loop renormalisation group equations in the approximation that the electroweak contributions are negligible, and also numerically. (author)
Quasi Contraction and Fixed Points
Mohsen Alimohammady
2012-10-01
In this note, we establish and improve some results on fixed point theory in topological vector spaces. As a generalization of contraction maps, the concept of quasi-contraction multivalued maps on a topological vector space is defined. Further, it is shown that a closed, quasi-contraction multivalued map on a topological vector space has a unique fixed point if it has bounded values.
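For the classical metric-space special case (Banach's contraction principle), the unique fixed point can be computed by simple iteration. A sketch of that special case (my illustration; the paper's topological-vector-space setting is more general):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=10_000):
    """Banach-style fixed-point iteration: for a contraction g, the
    iterates x, g(x), g(g(x)), ... converge to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence; is g really a contraction?")

# cos is a contraction on [0, 1] (|cos'| = |sin| < 1 there); its unique
# fixed point is the Dottie number, approximately 0.739085.
x_star = fixed_point(math.cos, 0.5)
print(round(x_star, 6))
```

The contraction constant controls the geometric convergence rate; the quasi-contraction results above relax exactly this Lipschitz-type hypothesis.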
Testing, monitoring, and dating structural changes in maximum likelihood models
Zeileis, Achim; Shah, Ajay; Patnaik, Ila
2008-01-01
A unified toolbox for testing, monitoring, and dating structural changes is provided for likelihood-based regression models. In particular, least-squares methods for dating breakpoints are extended to maximum likelihood estimation. The usefulness of all techniques is illustrated by assessing the stability of de facto exchange rate regimes. The toolbox is used for investigating the Chinese exchange rate regime after China gave up on a fixed exchange rate to the US dollar in 2005 and tracking t...
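The least-squares breakpoint dating that the toolbox extends to maximum likelihood can be sketched in a few lines: scan the candidate split points and keep the one minimizing the total residual sum of squares. A minimal single-break version (my illustration, not the toolbox's API):

```python
def date_breakpoint(x, min_seg=5):
    """Least-squares dating of a single mean-shift breakpoint: choose
    the split index k minimizing the RSS of segment-wise constant fits."""
    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best_k, best_rss = None, float("inf")
    for k in range(min_seg, len(x) - min_seg):
        total = rss(x[:k]) + rss(x[k:])
        if total < best_rss:
            best_k, best_rss = k, total
    return best_k

# Regime shift at index 120: mean 0 before, mean 1 after.
series = [0.0] * 120 + [1.0] * 80
print(date_breakpoint(series))  # 120
```

Replacing the per-segment RSS with a per-segment negative log-likelihood gives the maximum likelihood extension the paper describes, with the same scan over candidate break dates.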
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2013-01-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Potts-like Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities th...
Hamiltonian system for orthotropic plate bending based on analogy theory
[Anonymous]
2001-01-01
Based on the analogy between plane elasticity and plate bending, as well as variational principles of mixed energy, the Hamiltonian system is extended to orthotropic plate bending problems in this paper. Thus many effective methods of mathematical physics, such as separation of variables and eigenfunction expansion, can be employed in orthotropic plate bending problems just as they are in plane elasticity. Analytical solutions for rectangular plates are presented directly, which expands the range of available analytical solutions. There is an essential distinction between this method and the traditional semi-inverse method. Numerical results for an orthotropic plate with two lateral sides fixed are included to demonstrate the effectiveness and accuracy of the method.
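For reference, the governing equation of orthotropic Kirchhoff plate bending that such formulations start from is standard (the notation here is assumed, not taken from the paper):

```latex
D_x \frac{\partial^4 w}{\partial x^4}
  + 2H \frac{\partial^4 w}{\partial x^2 \partial y^2}
  + D_y \frac{\partial^4 w}{\partial y^4} = q(x, y),
\qquad H = D_1 + 2 D_{xy},
```

where $w$ is the transverse deflection, $q$ the distributed load, $D_x$ and $D_y$ the flexural rigidities along the two material axes, and $H$ the effective torsional rigidity. The Hamiltonian approach recasts this fourth-order problem in a mixed first-order form so that separation of variables applies without a semi-inverse ansatz.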
Funding human services: fixed utility versus fixed budget.
McCready, D J; Rahn, S L
1986-01-01
It is argued in this paper that government allocations for human services based on inputs rather than outcomes reduce efficiency in social and health service provision. An alternative system of budgeting or contracting on the basis of cost per closed case and case outcome is discussed. An interdependency between fixed budget and fixed utility models of allocation is affirmed. The locus of decision-making for operationalizing this interdependency is seen as the program and budget review panel to which operating agencies and government departments must submit financial and program accounting information from year to year. In isolation, the fixed budget approach degenerates into routine allocation or contract renewal with a focus on such input and output variables as volume of service and unit cost, and the fixed utility approach into political stalemate. Simulated examples are given to demonstrate how allocation on the basis of inputs and outputs alone provides an incentive to inefficiency, and a fixed utility orientation an incentive to efficiency. PMID:10311890
Analog circuit design art, science and personalities
Williams, Jim
1991-01-01
This book is far more than just another tutorial or reference guide - it's a tour through the world of analog design, combining theory and applications with the philosophies behind the design process. Readers will learn how leading analog circuit designers approach problems and how they think about solutions to those problems. They'll also learn about the `analog way' - a broad, flexible method of thinking about analog design tasks.A comprehensive and useful guide to analog theory and applications. Covers visualizing the operation of analog circuits. Looks at how to rap
Maximum entropy beam diagnostic tomography
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore
Decomposition using Maximum Autocorrelation Factors
Larsen, Rasmus
2002-01-01
…normally we have an ordering of landmarks (variables) along the contour of the objects. For the case with observation ordering, the maximum autocorrelation factor (MAF) transform was proposed for multivariate imagery in Switzer (1985). This corresponds to an R-mode analysis of the data…
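The MAF transform can be written as a generalized eigenproblem: minimize the variance of lag-one differences relative to total variance, so the eigenvector with the smallest eigenvalue gives the most autocorrelated (smoothest) component. A sketch under that formulation (my illustration of the construction attributed to Switzer, not code from the paper):

```python
import numpy as np

def maf_first_component(X):
    """First maximum autocorrelation factor of X (samples x variables):
    the direction w minimizing Var(diff(Xw)) / Var(Xw), i.e. the
    eigenvector of inv(S) @ S_d with the smallest eigenvalue, where S is
    the covariance and S_d the covariance of lag-1 differences."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S, Sd))
    w = eigvecs[:, np.argmin(eigvals.real)].real
    return Xc @ w

rng = np.random.RandomState(1)
t = np.arange(2000)
smooth = np.sin(2 * np.pi * t / 400)       # highly autocorrelated signal
noise = rng.normal(size=2000)              # white noise
X = np.column_stack([smooth + 0.7 * noise,  # two noisy mixtures
                     smooth - 0.7 * noise])

maf1 = maf_first_component(X)

def lag1_autocorr(v):
    v = v - v.mean()
    return np.dot(v[:-1], v[1:]) / np.dot(v, v)

print(round(lag1_autocorr(maf1), 3))
```

The recovered first factor is essentially the shared smooth signal: its lag-one autocorrelation is far higher than that of either noisy input channel, which is exactly the ordering-aware property that distinguishes MAF from plain principal components.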
Synaptic dynamics in analog VLSI.
Bartolozzi, Chiara; Indiveri, Giacomo
2007-10-01
Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role for learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The circuit proposed is based on a computational model that fits the real postsynaptic currents with exponentials. We present experimental data showing how the circuit exhibits realistic dynamics and show how it can be connected to additional modules for implementing a wide range of synaptic properties. PMID:17716003
Mechanical Analogies of Fractional Elements
HU Kai-Xin; ZHU Ke-Qin
2009-01-01
A fractional element model describes a special kind of viscoelastic material. Its stress is proportional to the fractional-order derivative of strain. Physically, the mechanical analogies of fractional elements can be represented by spring-dashpot fractal networks. We introduce a constitutive operator in the constitutive equations of viscoelastic materials. To derive constitutive operators for spring-dashpot fractal networks, we use Heaviside operational calculus, which provides explicit answers not otherwise easily obtainable. The series-parallel formulas for the constitutive operator are then derived. Using these formulas, a constitutive equation of the fractional element with 1/2-order derivative is obtained. Finally, we show how to derive the constitutive equations with other fractional-order derivatives and their mechanical analogies.
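The fractional-order derivative at the heart of the element's stress-strain law can be evaluated numerically with the Grünwald-Letnikov series; the sketch below (my illustration, not the paper's operational-calculus derivation) checks it against the known half-derivative of f(t) = t:

```python
from math import pi, sqrt

def gl_fractional_derivative(f, alpha, t, n_steps=4000):
    """Grunwald-Letnikov approximation of the alpha-order derivative of
    f at time t: h^(-alpha) * sum_k c_k * f(t - k*h), with the signed
    binomial weights generated by c_k = c_{k-1} * (1 - (alpha + 1)/k)."""
    h = t / n_steps
    c = 1.0
    total = f(t)          # k = 0 term, c_0 = 1
    for k in range(1, n_steps + 1):
        c *= 1.0 - (alpha + 1.0) / k
        total += c * f(t - k * h)
    return total / h ** alpha

# Known result: the half-derivative of f(t) = t is 2 * sqrt(t / pi).
approx = gl_fractional_derivative(lambda x: x, 0.5, 1.0)
exact = 2 * sqrt(1.0 / pi)
print(round(approx, 3), round(exact, 3))
```

With the strain history in place of f, the element's stress follows as sigma proportional to this alpha-order derivative, interpolating between a spring (alpha = 0) and a dashpot (alpha = 1).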
The newly observed Z+(4433) resonance by BELLE is believed to be a tetraquark bound state made up of (cu)(cd). We propose the bottomed analog of this bound state, namely, by replacing one of the charm quarks by a bottom quark, thus forming Z_bc^{0,±,±±}. One of the Z_bc states is doubly charged. The predicted mass of Z_bc is around 7.6 GeV. This doubly charged bound state can be detected by its decay into B_c^± π^±. Similarly, we can also replace both the charm quark and antiquark of the Z+(4433) by a bottom quark and antiquark, respectively, thus forming Z_bb, the bottomonium analog of Z+(4433). The predicted mass of Z_bb is about 10.7 GeV
Analog Nonvolatile Computer Memory Circuits
MacLeod, Todd
2007-01-01
In nonvolatile random-access memory (RAM) circuits of a proposed type, digital data would be stored in analog form in ferroelectric field-effect transistors (FFETs). This type of memory circuit would offer advantages over prior volatile and nonvolatile types: In a conventional complementary metal oxide/semiconductor static RAM, six transistors must be used to store one bit, and storage is volatile in that data are lost when power is turned off. In a conventional dynamic RAM, three transistors must be used to store one bit, and the stored bit must be refreshed every few milliseconds. In contrast, in a RAM according to the proposal, data would be retained when power was turned off, each memory cell would contain only two FFETs, and the cell could store multiple bits (the exact number of bits depending on the specific design). Conventional flash memory circuits afford nonvolatile storage, but they operate at reading and writing times of the order of thousands of conventional computer memory reading and writing times and, hence, are suitable for use only as off-line storage devices. In addition, flash memories cease to function after limited numbers of writing cycles. The proposed memory circuits would not be subject to either of these limitations. Prior developmental nonvolatile ferroelectric memories are limited to one bit per cell, whereas, as stated above, the proposed memories would not be so limited. The design of a memory circuit according to the proposal must reflect the fact that FFET storage is only partly nonvolatile, in that the signal stored in an FFET decays gradually over time. (Retention times of some advanced FFETs exceed ten years.) Instead of storing a single bit of data as either a positively or negatively saturated state in a ferroelectric device, each memory cell according to the proposal would store two values. The two FFETs in each cell would be denoted the storage FFET and the control FFET. The storage FFET would store an analog signal value
Maximum efficiency of low-dissipation heat engines at arbitrary power
Holubec, Viktor; Ryabov, Artem
2016-07-01
We investigate maximum efficiency at a given power for low-dissipation heat engines. Close to maximum power, the maximum gain in efficiency scales as the square root of the relative loss in power, and this scaling is universal for a broad class of systems. For low-dissipation engines, we calculate the maximum gain in efficiency for an arbitrary fixed power. We show that engines working close to maximum power can operate at considerably larger efficiency than the efficiency at maximum power. Furthermore, we introduce universal bounds on the maximum efficiency at a given power for low-dissipation heat engines. These bounds are a direct generalization of the bounds on efficiency at maximum power obtained by Esposito et al (2010 Phys. Rev. Lett. 105 150603). We derive the bounds analytically in the regime close to maximum power and for small power values. For the intermediate regime we present strong numerical evidence for the validity of the bounds.
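For reference, the Esposito et al bounds that the abstract generalizes can be stated compactly (standard notation, restated here rather than quoted from this paper's text):

```latex
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
\qquad \eta_C = 1 - \frac{T_c}{T_h},
```

where $\eta^{*}$ is the efficiency at maximum power and $\eta_C$ the Carnot efficiency; the square-root scaling near maximum power then takes the form $\Delta\eta_{\max} \propto \sqrt{\delta P / P_{\max}}$ for a small relative loss in power $\delta P / P_{\max}$.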
A Global Analog of Cheshire Charge
McGraw, Patrick
1994-01-01
It is shown that a model with a spontaneously broken global symmetry can support defects analogous to Alice strings, and a process analogous to Cheshire charge exchange can take place. A possible realization in superfluid He-3 is pointed out.
Atheism and Analogy: Aquinas Against the Atheists
Linford, Daniel J.
2014-01-01
In the 13th century, Thomas Aquinas developed two models for how humans may speak of God - either by the analogy of proportion or by the analogy of proportionality. Aquinas's doctrines initiated a theological debate concerning analogy that spanned several centuries. In the 18th century, there appeared two closely related arguments for atheism which both utilized analogy for their own purposes. In this thesis, I show that one argument, articulated by the French materialist Paul-Henri Thiry Bar...
Padgett, Wayne T
2009-01-01
This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught and the limited-precision implementation skills that are commonly required for fixed-point processors and field-programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory.
Fixed effects analysis of variance
Fisher, Lloyd; Birnbaum, Z W; Lukacs, E
1978-01-01
Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis
Fixed points and economic equilibria
Urai, Ken
2014-01-01
This book presents a systematic approach to problems in economic equilibrium based on fixed-point arguments and rigorous set-theoretical (axiomatic) methods. It describes the highest-level research on the classical theme, fixed points and economic equilibria, in the theory of mathematical economics, and also presents basic results in this area, especially in the general equilibrium theory and non-co-operative game theory. The arguments also contain distinguishable developments of the main theme in the homology theory for general topological spaces, in the model theory and mathematical logic, a
SYED SHAHNAWAZ ALI; JAINENDRA JAIN; ANIL RAJPUT
2013-01-01
The study of the theory of fuzzy sets was initiated by Zadeh in 1965. Since then many authors have extended and developed the theory of fuzzy sets in the fields of topology and analysis. The notion of fuzzy metric spaces has very important applications in quantum particle physics. As a result many authors have extended the Banach Contraction Principle to fuzzy metric spaces. Fixed point and common fixed point properties for mappings defined on fuzzy metric spaces have been studied by many author...
Maximum Power Point Regulator System
Simola, J.; Savela, K.; Stenberg, J.; Tonicello, F.
2011-10-01
The target of the study, done under ESA contract No. 17830/04/NL/EC (GSTP4) for the Maximum Power Point Regulator System (MPPRS), was to investigate, design and test a modular power system (a core PCU) fulfilling the requirement for maximum power transfer even after a single failure in the power system, by utilising a power concept without any potential and credible single-point failure. The studied MPPRS concept is of modular construction, able to track the MPP individually on each SA section, maintaining its functionality and full power capability after the loss of a complete MPPR module (by utilising an N+1 module). Various add-on DC/DC converter topology candidates were investigated, and redundancy, failure mechanisms and protection aspects were studied.
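The abstract does not specify the tracking algorithm used in the MPPRS modules. As a generic illustration of maximum power point tracking, here is a textbook perturb-and-observe loop on a hypothetical concave power curve; the curve and all parameters below are invented for this sketch and are not from the study:

```python
def perturb_and_observe(power, v0, step=0.2, iters=200):
    """Climb a power-vs-voltage curve by perturbing the operating voltage,
    reversing the perturbation direction whenever output power drops."""
    v, p = v0, power(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = power(v_next)
        if p_next < p:              # power fell: reverse the perturbation
            direction = -direction
        v, p = v_next, p_next
    return v

# Hypothetical PV-like curve with its maximum power point at 17 V.
curve = lambda v: 100.0 - (v - 17.0) ** 2
v_mpp = perturb_and_observe(curve, v0=12.0)
```

As is well known, the loop does not converge exactly but settles into a small oscillation around the maximum power point, which is the characteristic trade-off of perturb-and-observe tracking.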
Maximum matching on random graphs
Zhou, Haijun; Ou-Yang, Zhong-Can
2003-01-01
The maximum matching problem on random graphs is studied analytically by the cavity method of statistical physics. When the average vertex degree c is larger than 2.7183, groups of max-matching patterns which differ greatly from each other gradually emerge. An analytical expression for the max-matching size is also obtained, which agrees well with computer simulations. Discussion is made on this continuous glassy phase transition and the absence of such a glassy phase ...
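The cavity-method analysis itself is not reproduced here, but the quantity it predicts — the maximum matching size — can be checked by brute force on small random graphs. This exhaustive sketch is illustrative only (exponential in the number of edges), and the graph parameters are arbitrary choices, not taken from the paper:

```python
import itertools
import random

def max_matching_size(edges):
    """Largest number of pairwise vertex-disjoint edges, by exhaustive search."""
    for r in range(len(edges), 0, -1):
        for subset in itertools.combinations(edges, r):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # no vertex used twice
                return r
    return 0

# Erdos-Renyi graph G(n, p) with mean degree c = p * (n - 1).
rng = random.Random(0)
n, c = 8, 3.0
p = c / (n - 1)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]
m = max_matching_size(edges)
```

For example, a 4-cycle has maximum matching size 2, and a 3-vertex path has size 1; no matching can exceed n // 2 edges.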
Maximum-likelihood absorption tomography
Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
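The abstract does not fix notation, but in a generic transmission setup the statistics of the illuminating beam enter through a Poisson likelihood built on Beer–Lambert attenuation (standard form, assumed here for illustration):

```latex
N_i \sim \mathrm{Poisson}\!\left(b_i\, e^{-\sum_j a_{ij}\mu_j}\right),
\qquad
\ln L(\mu) = \sum_i \left[\, N_i \ln \bar N_i(\mu) - \bar N_i(\mu) \,\right] + \text{const},
```

where $b_i$ is the mean beam intensity on ray $i$, $a_{ij}$ the intersection length of ray $i$ with pixel $j$, $\mu_j$ the absorption coefficients, and $\bar N_i(\mu)$ the expected counts; the iterative algorithm ascends $\ln L$.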
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
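Written out, the characterization stated in the abstract is the following (with $M(f) = \max_{x \in K} f(x)$ on $C(K)$; this notation is assumed here, not quoted from the note):

```latex
\partial M(f) \;=\; \Bigl\{\, \mu \in \mathcal{P}(K) \;:\; \operatorname{supp}\mu \subseteq \arg\max_{x \in K} f(x) \,\Bigr\},
```

where $\mathcal{P}(K)$ denotes the Borel probability measures on the compact set $K$.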
Homogeneous determination of maximum magnitude
Meletti, C.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; D'Amico, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; Martinelli, F.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia
2010-01-01
This deliverable represents the result of the activities performed by a working group at INGV. The main objective of Task 3.5 is defined in the Description of Work. This task will produce a homogeneous assessment (possibly multiple models) of the distribution of the expected maximum magnitude for earthquakes expected in various tectonic provinces of Europe, to serve as input for the computation and validation of seismic hazard. This goal will be achieved by combining input from earthqu...
Indistinguishability, symmetrisation and maximum entropy
It is demonstrated that the distributions over single-particle states for Boltzmann, Bose-Einstein and Fermi-Dirac statistics describing N non-interacting identical particles follow directly from the principle of maximum entropy. It is seen that the notions of indistinguishability and coarse graining are secondary, if not irrelevant. A detailed examination of the structure of the Boltzmann limit is provided. (author)
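For context, the maximum-entropy extremization referred to above yields the familiar mean occupation numbers ($\beta$ the inverse temperature, $\mu$ the chemical potential; these are the standard results, restated here rather than quoted from the paper):

```latex
\bar n_i \;=\; \frac{1}{e^{\beta(\varepsilon_i - \mu)} \mp 1}
\quad \text{(upper sign: Bose--Einstein, lower sign: Fermi--Dirac)},
\qquad
\bar n_i \;=\; e^{-\beta(\varepsilon_i - \mu)} \quad \text{(Boltzmann limit)},
```

with the Boltzmann limit recovered when $e^{\beta(\varepsilon_i - \mu)} \gg 1$, i.e. at low occupation.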
Solar maximum: solar array degradation
The 5-year in-orbit power degradation of the silicon solar array aboard the Solar Maximum Satellite was evaluated. This was the first spacecraft to use Teflon® FEP as a coverglass adhesive, thus avoiding the necessity of an ultraviolet filter. The peak-power-tracking mode of the power regulator unit was employed to ensure consistent maximum power comparisons. Telemetry was normalized to account for the effects of illumination intensity, charged-particle irradiation dosage, and solar array temperature. Reference conditions of 1.0 solar constant at air mass zero and 301 K (28 C) were used as a basis for normalization. Beginning-of-life array power was 2230 watts. Currently, the array output is 1830 watts. This corresponds to a 16 percent loss in array performance over 5 years. Comparison of Solar Maximum telemetry and predicted power levels indicates that array output is 2 percent less than predictions based on an annual 1.0 MeV equivalent electron fluence of 2.34 × 10¹³ cm⁻² in the space environment
A misleading Wilsonian fixed point
Alexandre, Jean
2007-01-01
We exhibit here, for a scalar theory, an apparently non-trivial Wilsonian fixed point, which surprisingly describes a free theory. This modest note is an observation which can be of interest in the framework of functional methods in Quantum Field Theory.
Nielson, Hanne Riis; Nielson, Flemming
1992-01-01
In the context of abstract interpretation the authors study the number of times a functional needs to be unfolded in order to give the least fixed point. For the cases of total or monotone functions they obtain an exponential bound and in the case of strict and additive (or distributive) functions...
Analog circuit design art, science, and personalities
Williams, Jim
1991-01-01
Analog Circuit Design: Art, Science, and Personalities discusses the many approaches and styles in the practice of analog circuit design. The book is written in an informal yet informative manner, making it easily understandable to those new in the field. The selection covers the definition, history, current practice, and future direction of analog design; the practice proper; and the styles in analog circuit design. The book also includes the problems usually encountered in analog circuit design; approach to feedback loop design; and other different techniques and applications. The text is
Analog and mixed-signal electronics
Stephan, Karl
2015-01-01
A practical guide to analog and mixed-signal electronics, with an emphasis on design problems and applications This book provides an in-depth coverage of essential analog and mixed-signal topics such as power amplifiers, active filters, noise and dynamic range, analog-to-digital and digital-to-analog conversion techniques, phase-locked loops, and switching power supplies. Readers will learn the basics of linear systems, types of nonlinearities and their effects, op-amp circuits, the high-gain analog filter-amplifier, and signal generation. The author uses system design examples to motivate
Practical analog electronics for technicians
Kimber, W A
2013-01-01
'Practical Analog Electronics for Technicians' not only provides an accessible introduction to electronics, but also supplies all the problems and practical activities needed to gain hands-on knowledge and experience. This emphasis on practice is surprisingly unusual in electronics texts, and has already gained Will Kimber popularity through the companion volume, 'Practical Digital Electronics for Technicians'. Written to cover the Advanced GNVQ optional unit in electronics, this book is also ideal for BTEC National, A-level electronics and City & Guilds courses. Together with 'Practical Digit
Classical analogy of Fano resonances
We present an analogy between Fano resonances in quantum interference and classical resonances in a harmonic oscillator system. The resonances manifest as the coupled behaviour of two effective oscillators associated with propagating and evanescent waves. We illustrate this point by considering a classical system of two coupled oscillators and interfering electron waves in a quasi-one-dimensional narrow constriction with a quantum dot. Our approach provides a novel insight into Fano resonance physics and a helpful perspective for teaching Fano resonances
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based on...
Genetic and structural analysis of the Rhizobium meliloti fixA, fixB, fixC, and fixX genes.
Earl, C D; Ronson, C W; Ausubel, F M
1987-01-01
The fixA, fixB, fixC, and fixX genes of Rhizobium meliloti 1021 constitute an operon and are required for nitrogen fixation in alfalfa nodules. DNA homologous to the R. meliloti fixABC genes is present in all other Rhizobium and Bradyrhizobium species examined, but fixABC-homologous sequences were found in only one free-living diazotroph, Azotobacter vinelandii. To determine whether the fixABCX genes share sequence homology with any of the 17 Klebsiella pneumoniae nif genes, we determined the...
Using Analogical Problem Solving with Different Scaffolding Supports to Learn about Friction
Lin, Shih-Yin
2016-01-01
Prior research suggests that many students believe that the magnitude of the static frictional force is always equal to its maximum value. Here, we examine introductory students' ability to learn from analogical reasoning (with different scaffolding supports provided) between two problems that are similar in terms of the physics principle involved, but where one problem involves static friction, which often triggers the misleading notion. To help students work through the analogy deeply and contemplate whether the static frictional force was at its maximum value, students in different recitation classrooms received different scaffolding support. We discuss students' performance in the different groups.
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Approaches to synthetic platelet analogs.
Modery-Pawlowski, Christa L; Tian, Lewis L; Pan, Victor; McCrae, Keith R; Mitragotri, Samir; Sen Gupta, Anirban
2013-01-01
Platelet transfusion is routinely used for treating bleeding complications in patients with hematologic or oncologic clotting disorders, chemo/radiotherapy-induced myelosuppression, trauma and surgery. Currently, these transfusions mostly use allogeneic platelet concentrates, while products like lyophilized platelets, cold-stored platelets and infusible platelet membranes are under investigation. These natural platelet-based products pose considerable risks of contamination, resulting in a short shelf-life (3-5 days). Recent advances in pathogen reduction technologies have increased shelf-life to ~7 days. Furthermore, natural platelets are in short supply and also cause several biological side effects. Hence, there is significant clinical interest in platelet-mimetic synthetic analogs that allow a long storage life and minimal side effects. Accordingly, several designs have been studied which decorate synthetic particles with motifs that promote platelet-mimetic adhesion or aggregation. Recent refinement in this design involves combining the adhesion and aggregation functionalities on a single particle platform. Further refinement is being focused on constructing particles that also mimic the natural platelet's shape, size and elasticity, to influence margination and wall interaction. The optimum design of a synthetic platelet analog would require efficient integration of the platelet's physico-mechanical properties and biological functionalities. We present a comprehensive review of these approaches and provide our opinion regarding the future directions of this research. PMID:23092864
Pelgrom, Marcel J. M
2013-01-01
This textbook is appropriate for use in graduate-level curricula in analog-to-digital conversion, as well as for practicing engineers in need of a state-of-the-art reference on data converters. It discusses various analog-to-digital conversion principles, including sampling, quantization, reference generation, Nyquist architectures and sigma-delta modulation. This book presents an overview of the state of the art in this field and focuses on issues of optimizing accuracy and speed, while reducing the power level. This new, second edition emphasizes novel calibration concepts, the specific requirements of new systems, the consequences of 45-nm technology and the need for a more statistical approach to accuracy. Pedagogical enhancements to this edition include more than twice the exercises available in the first edition, solved examples to introduce all key, new concepts and warnings, remarks and hints, from a practitioner's perspective, wherever appropriate. Considerable background information and pr...
Fixed drug eruptions with modafinil
Loknath Ghoshal; Mausumi Sinha
2015-01-01
Modafinil is a psychostimulant drug, which has been approved by the US Food and Drug Administration for the treatment of narcolepsy associated excessive daytime sleepiness, sleep disorder related to shift work, and obstructive sleep apnea syndrome. However, presently it is being used as a lifestyle medicine; in India, it has been misused as an "over the counter" drug. Modafinil is known to have several cutaneous side effects. Fixed drug eruption (FDE) is a distinctive drug induced reaction pa...
Fixed target flammable gas upgrades
In the past, fixed target flammable gas systems were not supported in an organized fashion. The Research Division, Mechanical Support Department began to support these gas systems for the 1995 run. This technical memo describes the new approach being used to supply chamber gases to fixed target experiments at Fermilab. It describes the engineering design features, system safety, system documentation and performance results. Gas mixtures provide the medium for electron detection in proportional and drift chambers. Usually a mixture of a noble gas and a polyatomic quenching gas is used. Sometimes a small amount of electronegative gas is added as well. The mixture required is a function of the specific chamber design, including working voltage, gain requirements, high rate capability, aging and others. For the 1995 fixed target run all the experiments requested once-through gas systems. We obtained a summary of problems from the 1990 fixed target run and made a summary of the operations logbook entries from the 1991 run. These summaries primarily include problems involving flammable gas alarms, but also include incidents where Operations was involved or informed. Usually contamination issues were dealt with by the experimenters. The summaries are attached. We discussed past operational issues with the experimenters involved. There were numerous incidents of drift chamber failure where contaminated gas was suspect. However, analyses of the gas at the time usually did not show any particular problems. This could have been because the analysis did not look for the troublesome component, the contaminant was concentrated in the gas over the liquid and vented before the sample was taken, or because contaminants were drawn into the chambers directly through leaks or sub-atmospheric pressures. After some study we were unable to determine specific causes of past contamination problems, although in argon-ethane systems the problems were due to the ethane only.
Shapiro, Joel H
2016-01-01
This text provides an introduction to some of the best-known fixed-point theorems, with an emphasis on their interactions with topics in analysis. The level of exposition increases gradually throughout the book, building from a basic requirement of undergraduate proficiency to graduate-level sophistication. Appendices provide an introduction to (or refresher on) some of the prerequisite material and exercises are integrated into the text, contributing to the volume’s ability to be used as a self-contained text. Readers will find the presentation especially useful for independent study or as a supplement to a graduate course in fixed-point theory. The material is split into four parts: the first introduces the Banach Contraction-Mapping Principle and the Brouwer Fixed-Point Theorem, along with a selection of interesting applications; the second focuses on Brouwer’s theorem and its application to John Nash’s work; the third applies Brouwer’s theorem to spaces of infinite dimension; and the fourth rests ...
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
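The regularizer described above is the mutual information I(Y; Ŷ) between true labels and classification responses. For discrete labels it reduces to plug-in entropy estimates from counts; this minimal sketch illustrates that quantity only, not the authors' gradient-based optimizer:

```python
import math
from collections import Counter

def entropy(labels):
    """Plug-in Shannon entropy (in nats) of a discrete label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_information(y_true, y_pred):
    """I(Y; Yhat) = H(Y) + H(Yhat) - H(Y, Yhat), estimated from counts."""
    joint = list(zip(y_true, y_pred))
    return entropy(y_true) + entropy(y_pred) - entropy(joint)
```

A perfectly informative response recovers the full label entropy (here ln 2 for balanced binary labels), while a response independent of the labels gives zero mutual information.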
Scintillation counter, maximum gamma aspect
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Automatic activation of categorical and abstract analogical relations in analogical reasoning.
Green, Adam E; Fugelsang, Jonathan A; Dunbar, Kevin N
2006-10-01
We examined activation of concepts during analogical reasoning. Subjects made either analogical judgments or categorical judgments about four-word sets. After each four-word set, they named the ink color of a single word in a modified Stroop task. Words that referred to category relations were primed (as indicated by longer response times on Stroop color naming) subsequent to analogical judgments and categorical judgments. This finding suggests that activation of category concepts plays a fundamental role in analogical thinking. When colored words referred to analogical relations, priming occurred subsequent to analogical judgments, but not to categorical judgments, even though identical four-word stimuli were used for both types of judgments. This finding lends empirical support to the hypothesis that, when people comprehend the analogy between two items, they activate an abstract analogical relation that is distinct from the specific content items that compose the analogy. PMID:17263066
64 Gbit/s Transmission over 850 m Fixed Wireless Link at 240 GHz Carrier Frequency
Kallfass, Ingmar; Boes, Florian; Messinger, Tobias; Antes, Jochen; Inam, Anns; Lewark, Ulrich; Tessmann, Axel; Henneberger, Ralf
2015-02-01
A directive fixed wireless link operating at a center frequency of 240 GHz achieves a data rate of 64 Gbit/s over a transmission distance of 850 m using QPSK and 8PSK modulation, in a single-channel approach without the use of spatial diversity concepts. The analog transmit and receive frontend consists of active monolithic integrated circuits including broadband RF amplification and quadrature subharmonic mixer channels. The analog frontend is addressed by 64 GSa/s ADC and DAC boards, which are amenable to real-time data transmission. A link budget calculation allows for the estimation of the performance under adverse weather conditions.
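The link budget calculation mentioned at the end follows the standard dB-domain form (generic notation assumed here, not the authors' exact figures):

```latex
P_{\mathrm{rx}} = P_{\mathrm{tx}} + G_{\mathrm{tx}} + G_{\mathrm{rx}} - L_{\mathrm{FS}} - L_{\mathrm{atm}},
\qquad
L_{\mathrm{FS}} = 20\log_{10}\!\left(\frac{4\pi d f}{c}\right),
```

which for $f = 240\,\mathrm{GHz}$ and $d = 850\,\mathrm{m}$ gives a free-space path loss of roughly 139 dB, before any atmospheric or rain margin $L_{\mathrm{atm}}$ is added.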
QCD analogy for quantum gravity
Holdom, Bob; Ren, Jing
2016-06-01
Quadratic gravity presents us with a renormalizable, asymptotically free theory of quantum gravity. When its couplings grow strong at some scale, as in QCD, then this strong scale sets the Planck mass. QCD has a gluon that does not appear in the physical spectrum. Quadratic gravity has a spin-2 ghost that we conjecture does not appear in the physical spectrum. We discuss how the QCD analogy leads to this conjecture and to the possible emergence of general relativity. Certain aspects of the QCD path integral and its measure are also similar for quadratic gravity. With the addition of the Einstein-Hilbert term, quadratic gravity has a dimensionful parameter that seems to control a quantum phase transition and the size of a mass gap in the strong phase.
Analysis of analogies used by science teachers
Dagher, Zoubeida R.
Science teachers use analogies that display a rich variety of form and content. An account of science teacher analogies that relies solely on systems of analysis imported from other fields of inquiry tends to obscure the unique features of these analogies as they operate within classroom discourse. This study examines teachers' analogies in context and highlights some of their special characteristics. The purpose of this analysis is to increase our understanding of how analogies operate in naturalistic instructional settings and to generate new research questions about science teaching and learning in view of the broader dimensions of the curriculum.Science isa very human activity. It involves human actors and judgements, rivalries and antagonisms, mysteries and surprises, the creative use of metaphor and analogy. It is fallible, often uncertain, and sometimes creatively ambiguous [Lemke, 1990, p. 134].Received: 1 June 1993; Revised: 29 November 1993;
Priming analogical reasoning with false memories.
Howe, Mark L; Garner, Sarah R; Threadgold, Emma; Ball, Linden J
2015-08-01
Like true memories, false memories are capable of priming answers to insight-based problems. Recent research has attempted to extend this paradigm to more advanced problem-solving tasks, including those involving verbal analogical reasoning. However, these experiments are constrained inasmuch as problem solutions could be generated via spreading activation mechanisms (much like false memories themselves) rather than using complex reasoning processes. In three experiments we examined false memory priming of complex analogical reasoning tasks in the absence of simple semantic associations. In Experiment 1, we demonstrated the robustness of false memory priming in analogical reasoning when backward associative strength among the problem terms was eliminated. In Experiments 2a and 2b, we extended these findings by demonstrating priming on newly created homonym analogies that can only be solved by inhibiting semantic associations within the analogy. Overall, the findings of the present experiments provide evidence that the efficacy of false memory priming extends to complex analogical reasoning problems. PMID:25784574
General maximum entropy principle for self-gravitating perfect fluid
We consider a self-gravitating system consisting of a perfect fluid with spherical symmetry. Using the general expression for the entropy density, we extremize the total entropy S under the constraint that the total number of particles is fixed. We show that the extrema of S coincide precisely with solutions of the relativistic Tolman-Oppenheimer-Volkoff equation of hydrostatic equilibrium. Furthermore, we apply the maximum entropy principle to a charged perfect fluid and derive the generalized Tolman-Oppenheimer-Volkoff equation. Our work provides strong evidence for the fundamental relationship between general relativity and ordinary thermodynamics.
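For reference, the Tolman-Oppenheimer-Volkoff equation that the entropy extrema reproduce has the standard textbook form (in geometric units G = c = 1; this is the conventional expression, not copied from the paper):

```latex
\frac{dp}{dr} = -\,\frac{\left(\rho + p\right)\left(m(r) + 4\pi r^{3} p\right)}{r\left(r - 2m(r)\right)},
\qquad
m(r) = \int_{0}^{r} 4\pi r'^{2}\,\rho(r')\,dr' .
```

Here p is the pressure, ρ the energy density, and m(r) the mass enclosed within radius r.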
Analog circuit design scalable analog circuit design, high speed D/A converters, RF power amplifiers
Huijsing, Johan
2007-01-01
Preface. Part I: Scalable Analog Circuit Design. Introduction. Scalable High-Speed Analog Circuit Design; M. Vertregt, P. Scholtens. Scalable High Resolution Mixed Mode Circuit Design; R.J. Brewer. Scalable 'High Voltages' Integrated Circuit Design for XDSL Type of Applications; D. Rossi. Scalability of Wire-Line Analog Front-Ends; K. Bult. Reusable IP Analog Circuit Design; J. Hauptmann, A. Wiesbauer, H. Weinberger. Process Migration Tools for Analog and Digital Circuits; K. Francken, G. Gielen. Part II: High-Speed D/A Converters. Introduction. Introduction to High-Speed Digital-to-Analog Con
Xampling: Compressed Sensing of Analog Signals
Mishali, Moshe; Eldar, Yonina C.
2011-01-01
Xampling generalizes compressed sensing (CS) to reduced-rate sampling of analog signals. A unified framework is introduced for low rate sampling and processing of signals lying in a union of subspaces. Xampling consists of two main blocks: Analog compression that narrows down the input bandwidth prior to sampling with commercial devices followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A variety of analog CS applications are reviewed wi...
Analog to Digital Conversion in Physical Measurements
Kapitaniak, T.; Zyczkowski, K.; Feudel, U.; Grebogi, C.
1999-01-01
There exist measuring devices where an analog input is converted into a digital output. Such converters can have a nonlinear internal dynamics. We show how measurements with such converting devices can be understood using concepts from symbolic dynamics. Our approach is based on a nonlinear one-to-one mapping between the analog input and the digital output of the device. We analyze the Bernoulli shift and the tent map which are realized in specific analog/digital converters. Furthermore, we d...
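The Bernoulli shift mentioned above acts directly as an analog-to-digital converter: iterating x → 2x mod 1 and recording which half of the unit interval the state occupies reads out the binary expansion of the analog input. A minimal sketch of this symbolic-dynamics view:

```python
def bernoulli_digits(x, n):
    """Iterate the Bernoulli shift x -> 2x mod 1 and record the symbol
    s_k = 0 if x < 0.5 else 1 at each step.  The symbol sequence is the
    binary expansion of the initial analog input x in [0, 1)."""
    bits = []
    for _ in range(n):
        bits.append(0 if x < 0.5 else 1)
        x = (2.0 * x) % 1.0
    return bits

# 0.625 = 0.101 in binary
print(bernoulli_digits(0.625, 3))  # -> [1, 0, 1]
```

The tent map admits an analogous symbolic readout, but its symbol sequence corresponds to a Gray-code rather than a plain binary expansion.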
Robust hyperchaotic synchronization via analog transmission line
Sadoudi, S.; Tanougast, C.
2016-02-01
In this paper, a novel experimental chaotic synchronization technique via analog transmission is discussed. We demonstrate, through a Field-Programmable Gate Array (FPGA) implementation, the robust synchronization of two embedded hyperchaotic Lorenz generators interconnected with an analog transmission line. The basic idea of this work is to combine numerical generation of chaos with transmission of the resulting signal in analog form. The numerical chaos allows one to overcome the parameter mismatch problem, while the analog transmission offers robust data security. As an application, this technique can be applied to all families of chaotic systems, including time-delayed chaotic systems.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from the more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower-mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars, while higher-mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
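As a quick arithmetic check of the quoted figure (the solar mass value is a standard constant assumed here, not taken from the paper):

```python
M_CORE = 2.69e30   # maximum iron core mass quoted in the abstract, kg
M_SUN = 1.989e30   # standard solar mass, kg (assumed constant)

ratio = M_CORE / M_SUN
print(round(ratio, 2))  # -> 1.35, consistent with "1.35 solar masses"
```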
The maximum drag reduction asymptote
Choueiri, George H.; Hof, Bjorn
2015-11-01
Addition of long chain polymers is one of the most efficient ways to reduce the drag of turbulent flows. Already a very low concentration of polymers can lead to a substantial drag reduction, and upon further increase of the concentration the drag decreases until it reaches an empirically found limit, the so-called maximum drag reduction (MDR) asymptote, which is independent of the type of polymer used. We here carry out a detailed experimental study of the approach to this asymptote for pipe flow. Particular attention is paid to the recently observed state of elasto-inertial turbulence (EIT), which has been reported to occur in polymer solutions at sufficiently high shear. Our results show that upon the approach to MDR, Newtonian turbulence becomes marginalized (hibernation) and eventually disappears completely and is replaced by EIT. In particular, spectra of high Reynolds number MDR flows are compared to flows at high shear rates in small diameter tubes where EIT is found. This work was supported by the Marie Curie Actions of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n° 291734.
Vestbo J
2012-09-01
Jørgen Vestbo, University of Manchester, Manchester, UK. I read with interest the paper entitled "Diagnosis of airway obstruction in the elderly: contribution of the SARA study" by Sorino et al in a recent issue of this journal.1 Being involved in the Global Initiative for Chronic Obstructive Lung Disease (GOLD), it is nice to see the interest sparked by the GOLD strategy document. However, in the paper by Sorino et al, there are a few misunderstandings around GOLD and the fixed ratio (forced expiratory volume in 1 second/forced vital capacity < 0.70) that need clarification. View original paper by Sorino and colleagues.
BRST gauge fixing and regularization
In the presence of consistent regulators, the standard procedure of BRST gauge fixing (or moving from one gauge to another) can require non-trivial modifications. These modifications occur at the quantum level, and gauges exist which are only well-defined when quantum mechanical modifications are correctly taken into account. We illustrate how this phenomenon manifests itself in the solvable case of two-dimensional bosonization in the path-integral formalism. As a by-product, we show how to derive smooth bosonization in Batalin-Vilkovisky Lagrangian BRST quantization. (orig.)
A maximum entropy model for opinions in social groups
Davis, Sergio; Gutiérrez, Gonzalo
2013-01-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Potts-like Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the degree of consistency between personal beliefs and externally expressed opinions, and with the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio b...
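A minimal sketch of the Metropolis sampling mentioned above, for a toy mean-field Potts model of opinions. This is only an illustrative stand-in: the paper's model adds free spatial positions, and every parameter and name below is an assumption, not taken from the paper.

```python
import math
import random

def metropolis_potts(n_agents=50, q=3, J=1.0, beta=0.5, steps=5000, seed=0):
    """Metropolis sampler for a mean-field Potts model of opinions:
    every pair of agents interacts, H = -J * (number of agreeing pairs).
    Returns the final count of agents holding each of the q opinions."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n_agents)]
    counts = [s.count(k) for k in range(q)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        old, new = s[i], rng.randrange(q)
        # energy change = -J * (agreeing pairs gained - agreeing pairs lost)
        dE = -J * ((counts[new] - (1 if new == old else 0)) - (counts[old] - 1))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            counts[old] -= 1
            counts[new] += 1
            s[i] = new
    return counts

counts = metropolis_potts()
print(counts, sum(counts))  # agent total is conserved
```

At large beta*J one opinion dominates (an ordered, "consensus" phase); at small beta the counts stay roughly balanced.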
Maximum-likelihood algorithm for quantum tomography
Optical homodyne tomography is discussed in the context of classical image processing. Analogies between these two fields are traced and used to formulate an iterative numerical algorithm for reconstructing the Wigner function from homodyne statistics. (Author)
TRANSVERSAL SPACES AND FIXED POINT THEOREMS
Sinia N. Ješić; Milan R. Tasković; Nataša Babačev
2007-01-01
In this paper we define Transversal functional probabilistic spaces (upper and lower) as a natural extension of Metric spaces, Probabilistic metric spaces and Fuzzy metric spaces. Also, we formulate and prove some fixed and common fixed point theorems.
Maximum entropy principle for transportation
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
Forecasting Maximum Demand And Loadshedding
Dhabai Poonam. B
2014-05-01
The intention of this paper is to estimate the maximum demand (MD) in advance during the running slots. Forecasting the MD helps to save on extra billed charges. The MD is calculated by two basic methods: graphically and mathematically. It helps to control the total demand and reduce the effective cost. With the help of forecast MD, we can even perform load shedding if the MD would exceed the contract demand (CD). Load shedding is performed as per the load requirement. After load shedding, the MD can be brought under control, and hence we can avoid the extra charges which are to be paid when the MD exceeds the CD. This scheme is being implemented in various industries. For forecasting the MD, various areas have to be considered: load flow analysis, relay safe operating area (SOA), ratings of the installed equipment, etc. The estimation of MD and load shedding (LS) can also be done through an automated process, such as programming in PLCs. An automated system is very much required in industrial zones, as it saves valuable time as well as the labor required. PLC and SCADA software help a lot in automation. To calculate the MD, the rating of every piece of equipment installed on the premises is considered. The MD estimation and LS program will keep industries from paying huge penalties to the electricity companies. This leads to a bright future scope for this concept in the rapidly industrializing and energy sectors.
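The shedding rule the abstract describes (shed load until the forecast MD falls back under the CD) can be sketched in a few lines; the load names, kW figures, and priority ordering below are illustrative assumptions, not from the paper:

```python
def load_shedding_decision(forecast_md_kw, contract_demand_kw, loads):
    """If the forecast maximum demand exceeds the contract demand, shed
    loads (lowest priority first) until the forecast falls under the limit.
    `loads` is a list of (name, kW) ordered from most to least sheddable."""
    shed = []
    md = forecast_md_kw
    for name, kw in loads:
        if md <= contract_demand_kw:
            break
        shed.append(name)
        md -= kw
    return md, shed

# hypothetical plant: forecast MD 120 kW against a 100 kW contract demand
md, shed = load_shedding_decision(120.0, 100.0, [("hvac", 15.0), ("pumps", 10.0)])
print(md, shed)  # -> 95.0 ['hvac', 'pumps']
```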
Fixed-Target Electron Accelerators
A tremendous amount of scientific insight has been garnered over the past half-century by using particle accelerators to study physical systems of sub-atomic dimensions. These giant instruments begin with particles at rest, then greatly increase their energy of motion, forming a narrow trajectory or beam of particles. In fixed-target accelerators, the particle beam impacts upon a stationary sample or target which contains or produces the sub-atomic system being studied. This is in distinction to colliders, where two beams are produced and are steered into each other so that their constituent particles can collide. The acceleration process always relies on the particle being accelerated having an electric charge; however, both the details of producing the beam and the classes of scientific investigations possible vary widely with the specific type of particle being accelerated. This article discusses fixed-target accelerators which produce beams of electrons, the lightest charged particle. As detailed in the report, the beam energy has a close connection with the size of the physical system studied. Here a useful unit of energy is a GeV, i.e., a giga electron-volt. (ne GeV, the energy an electron would have if accelerated through a billion volts, is equal to 1.6 x 10-10 joules.) To study systems on a distance scale much smaller than an atomic nucleus requires beam energies ranging from a few GeV up to hundreds of GeV and more
Fixed point theory and trace for bicategories
Ponto, Kate
2008-01-01
The Lefschetz fixed point theorem follows easily from the identification of the Lefschetz number with the fixed point index. This identification is a consequence of the functoriality of the trace in symmetric monoidal categories. There are refinements of the Lefschetz number and the fixed point index that give a converse to the Lefschetz fixed point theorem. An important part of this theorem is the identification of these different invariants. We define a generalization of the trace in symmet...
Fixed Point Actions for Lattice Fermions
Bietenholz, W.; Wiese, U. -J.
1993-01-01
The fixed point actions for Wilson and staggered lattice fermions are determined by iterating renormalization group transformations. In both cases a line of fixed points is found. Some points have very local fixed point actions. They can be used to construct perfect lattice actions for asymptotically free fermionic theories like QCD or the Gross-Neveu model. The local fixed point actions for Wilson fermions break chiral symmetry, while in the staggered case the remnant $U(1)_e \\otimes U(1)_o$...
Novel Analog For Muscle Deconditioning
Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob
2011-01-01
Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and postflight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength has promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.
An Analog Computer for Electronic Engineering Education
Fitch, A. L.; Iu, H. H. C.; Lu, D. D. C.
2011-01-01
This paper describes a compact analog computer and proposes its use in electronic engineering teaching laboratories to develop student understanding of applications in analog electronics, electronic components, engineering mathematics, control engineering, safe laboratory and workshop practices, circuit construction, testing, and maintenance. The…
Analogical Processes and College Developmental Reading
Paulson, Eric J.
2014-01-01
Although a solid body of research concerning the role of analogies in reading processes has emerged at a variety of age groups and reading proficiencies, few of those studies have focused on analogy use by readers enrolled in college developmental reading courses. The current study explores whether 232 students enrolled in mandatory (by placement…
Several Forms of Fuzzy Analogical Reasoning
Bouchon-Meunier, B; Delechamp, J.; Marsala, C.; Rifqi, M.
1997-01-01
We present a general framework representing analogy, on the basis of a link between variables and measures of comparison between values of variables. This analogical scheme is proven to represent a common description of several forms of reasoning used in fuzzy control or in the management of knowledge-based systems, such as deductive reasoning, inductive reasoning, prototypical reasoning, and gradual reasoning.
A physical analogy to fuzzy clustering
Jantzen, Jan
2004-01-01
This tutorial paper provides an interpretation of the membership assignment in the fuzzy clustering algorithm fuzzy c-means. The membership of a data point in several clusters is shown to be analogous to the gravitational forces between bodies of mass. This provides an alternative way to explain the algorithm to students. The analogy suggests a possible extension of the fuzzy membership assignment equation.
Novel Gemini vitamin D3 analogs
Okamoto, Ryoko; Gery, Sigal; Kuwayama, Yoshio;
2014-01-01
We have synthesized 39 1,25-dihydroxyvitamin D3 [1,25(OH)2D3] analogs having two side chains attached to carbon-20 (Gemini) with various modifications and compared their anticancer activities. Five structure-function rules emerged to identify analogs with enhanced anticancer activity. One of thes...
Analog simulation of the Josephson effects
Analog circuit techniques can be used to advantage to simulate the Josephson effects in a superconductor-insulator-superconductor tunnel junction. Details of an electronic Josephson simulator are presented, and the advantages of analog techniques over their digital counterparts for this application are discussed. The simulation of a Josephson microwave mixer is used as an example
Young Children's Analogical Reasoning in Science Domains
Haglund, Jesper; Jeppsson, Fredrik; Andersson, Johanna
2012-01-01
This exploratory study in a classroom setting investigates first graders' (age 7-8 years, N = 25) ability to perform analogical reasoning and create their own analogies for two irreversible natural phenomena: mixing and heat transfer. We found that the children who contributed actively to a full-class discussion were consistently successful at…
Analogies in high school Brazilian chemistry textbooks
Rosária Justi
2000-05-01
This paper presents and discusses an analysis of the analogies presented by Brazilian high school chemistry textbooks. The main aim of the analysis is to discuss whether such analogies can be considered good teaching models. From the results, some aspects concerning the teachers' role are discussed. Finally, some new research questions are emphasised.
Play with Language: Overextensions as Analogies.
Hudson, Judith; Nelson, Katherine
1984-01-01
Defines criteria to identify children's language overextensions and investigates how young children in the early stages of language acquisition rename objects analogically during a standardized play situation. Results indicate that analogic extensions are well within the capabilities of children from one year, eight months to two years, four…
Maximum likelihood decay curve fits by the simplex method
A multicomponent decay curve analysis technique has been developed and incorporated into the decay curve fitting computer code, MLDS (maximum likelihood decay by the simplex method). The fitting criteria are based on the maximum likelihood technique for decay curves made up of time binned events. The probabilities used in the likelihood functions are based on the Poisson distribution, so decay curves constructed from a small number of events are treated correctly. A simple utility is included which allows the use of discrete event times, rather than time-binned data, to make maximum use of the decay information. The search for the maximum in the multidimensional likelihood surface for multi-component fits is performed by the simplex method, which makes the success of the iterative fits extremely insensitive to the initial values of the fit parameters and eliminates the problems of divergence. The simplex method also avoids the problem of programming the partial derivatives of the decay curves with respect to all the variable parameters, which makes the implementation of new types of decay curves straightforward. Any of the decay curve parameters can be fixed or allowed to vary. Asymmetric error limits for each of the free parameters, which do not consider the covariance of the other free parameters, are determined. A procedure is presented for determining the error limits which contain the associated covariances. The curve fitting procedure in MLDS can easily be adapted for fits to other curves with any functional form. (orig.)
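The fitting criterion described above can be sketched as a Poisson negative log-likelihood for a one-component, time-binned decay curve, minimized with the simplex (Nelder-Mead) method. This is an illustrative stand-in using SciPy's general-purpose simplex implementation, not the MLDS code itself; all names and numerical values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t_edges, counts):
    """Poisson negative log-likelihood for a one-component decay curve.
    Expected counts in bin [t0, t1): mu = A*(exp(-lam*t0) - exp(-lam*t1))/lam,
    where A is the initial activity; constant log(n!) terms are dropped."""
    A, lam = params
    if A <= 0 or lam <= 0:
        return np.inf  # keep the simplex out of unphysical territory
    mu = A * (np.exp(-lam * t_edges[:-1]) - np.exp(-lam * t_edges[1:])) / lam
    return np.sum(mu - counts * np.log(mu))

# synthetic time-binned decay data: A = 100 /s, lambda = 0.5 /s
rng = np.random.default_rng(1)
edges = np.linspace(0.0, 10.0, 21)
A_true, lam_true = 100.0, 0.5
mu_true = A_true * (np.exp(-lam_true * edges[:-1]) - np.exp(-lam_true * edges[1:])) / lam_true
counts = rng.poisson(mu_true)

# simplex (Nelder-Mead) search over the likelihood surface,
# started well away from the true values
res = minimize(neg_log_likelihood, x0=[50.0, 1.0], args=(edges, counts),
               method="Nelder-Mead")
A_fit, lam_fit = res.x
print(round(lam_fit, 2))
```

As the abstract notes, the simplex search needs no derivatives of the decay curve with respect to the fit parameters, which is what makes adding new curve shapes straightforward.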
Computational approaches to analogical reasoning current trends
Richard, Gilles
2014-01-01
Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...
Enhancement of naked FIX minigene expression by chloroquine in mice
Hong-yan CHEN; Huan-zhang ZHU; Bin LU; Xuan XU; Ji-hua YAO; Qi SHEN; Jing-lun XUE
2004-01-01
AIM: To study the effect of chloroquine on the expression of human clotting factor IX (hFIX) in mice. METHODS: Hydrodynamics-based naked DNA plasmid administration was performed by tail vein injection of 10 μg of pCMVhFIX and chloroquine (0, 100, 200, and 500 μmol/L) in 2.2 mL of Ringer's solution within 6-7 s; the level and stability of hFIX expression, liver damage, and toxicity were then examined. RESULTS: The maximum hFIX expression level was 4.4±1.8 mg/L at 8 h after injection; a level of 9.7±1.6 mg/L at 24 h was seen only in the 200 μmol/L chloroquine-treated animals, which is 3-4-fold higher than that of the control (P<0.01). No significant difference was observed among the treated groups 3 d later. Transaminase levels and liver histology showed that the liver damage was not related to chloroquine (P>0.05). CONCLUSION: Chloroquine can enhance and sustain exogenous gene expression in vivo without side effects under our experimental conditions.
Iterative approximation of fixed points
Berinde, Vasile
2007-01-01
The aim of this monograph is to give a unified introductory treatment of the most important iterative methods for constructing fixed points of nonlinear contractive type mappings. It summarizes the most significant contributions in the area by presenting, for each iterative method considered (Picard iteration, Krasnoselskij iteration, Mann iteration, Ishikawa iteration etc.), some of the most relevant, interesting, representative and actual convergence theorems. Applications to the solution of nonlinear operator equations as well as the appropriate error analysis of the main iterative methods, are also presented. Due to the explosive number of research papers on the topic (in the last 15 years only, more than one thousand articles related to the subject were published), it was felt that such a monograph was imperatively necessary. The volume is useful for authors, editors, and reviewers. It introduces concrete criteria for evaluating and judging the plethora of published papers.
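The Picard iteration mentioned above can be sketched in a few lines; this is a standard illustrative example (the fixed point of cos), not taken from the monograph:

```python
import math

def picard(f, x0, tol=1e-10, max_iter=1000):
    """Picard iteration x_{n+1} = f(x_n); by the Banach fixed point
    theorem it converges whenever f is a contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("Picard iteration did not converge")

# fixed point of cos: the solution of x = cos(x), the "Dottie number"
x_star = picard(math.cos, 1.0)
print(round(x_star, 6))  # -> 0.739085
```

The other schemes the monograph covers (Krasnoselskij, Mann, Ishikawa) replace the plain update with averaged variants that converge for wider classes of mappings.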
Analog regulation of metabolic demand
Muskhelishvili Georgi
2011-03-01
Abstract. Background: The 3D structure of the chromosome of the model organism Escherichia coli is one key component of its gene regulatory machinery. This type of regulation, mediated by topological transitions of the chromosomal DNA, can be thought of as an analog control, complementing the digital control, i.e. the network of regulation mediated by dedicated transcription factors. It is known that alterations in the superhelical density of chromosomal DNA lead to a rich pattern of differentially expressed genes. Using a network approach, we analyze these expression changes for wild type E. coli and mutants lacking nucleoid associated proteins (NAPs) from a metabolic and transcriptional regulatory network perspective. Results: We find a significantly higher correspondence between gene expression and metabolism for the wild type expression changes compared to mutants in NAPs, indicating that supercoiling induces meaningful metabolic adjustments. As soon as the underlying regulatory machinery is impeded (as for the NAP mutants), this coherence between expression changes and the metabolic network is substantially reduced. This effect is even more pronounced when we compute a wild type metabolic flux distribution using flux balance analysis and restrict our analysis to active reactions. Furthermore, we are able to show that the regulatory control exhibited by DNA supercoiling is not mediated by the transcriptional regulatory network (TRN), as the consistency of the expression changes with the TRN logic of activation and suppression is strongly reduced in the wild type in comparison to the mutants. Conclusions: So far, the rich patterns of gene expression changes induced by alterations of the superhelical density of chromosomal DNA have been difficult to interpret. Here we characterize the effective networks formed by supercoiling-induced gene expression changes mapped onto reconstructions of E. coli's metabolic and transcriptional regulatory network. Our
Advances in Analog Circuit Design 2015
Baschirotto, Andrea; Harpe, Pieter
2016-01-01
This book is based on the 18 tutorials presented during the 24th workshop on Advances in Analog Circuit Design. Expert designers present readers with information about a variety of topics at the frontier of analog circuit design, including low-power and energy-efficient analog electronics, with specific contributions focusing on the design of efficient sensor interfaces and low-power RF systems. This book serves as a valuable reference to the state-of-the-art, for anyone involved in analog circuit research and development. · Provides a state-of-the-art reference in analog circuit design, written by experts from industry and academia; · Presents material in a tutorial-based format; · Includes coverage of high-performance analog-to-digital and digital to analog converters, integrated circuit design in scaled technologies, and time-domain signal processing.
Fostering Multilateral Involvement in Analog Research
Cromwell, Ronita L.
2015-01-01
International collaboration in space flight research is an effective means for conducting investigations and utilizing limited resources to the fullest extent. Through these multilateral collaborations, mutual research questions can be investigated and resources contributed by each international partner to maximize the scientific benefits to all parties. Recently the international partners embraced this approach to initiate collaborations in ground-based space flight analog environments. In 2011, the International Analog Research Working Group was established, and later named the International Human Space Flight Analog Research Coordination Group (HANA). Among the goals of this working group are to 1) establish a framework to coordinate research campaigns, as appropriate, to minimize duplication of effort and enhance synergy; 2) define which analogs are best to use for collaborative interests; and 3) facilitate interaction between discipline experts in order to have the full benefit of international expertise. To accomplish these goals, HANA is currently engaged in developing international research campaigns in ground-based analogs. Plans are being made for an international solicitation for proposals to address research of common interest to all international partners. This solicitation will identify an analog environment that will best accommodate the types of investigations requested. Once selected, studies will be integrated into a campaign and implemented at the analog site. Through these combined efforts, research beneficial to all partners will be conducted efficiently to further address human risks of space exploration.
Analogies: Explanatory Tools in Web-Based Science Instruction
Glynn, Shawn M.; Taasoobshirazi, Gita; Fowler, Shawn
2007-01-01
This article helps designers of Web-based science instruction construct analogies that are as effective as those used in classrooms by exemplary science teachers. First, the authors explain what analogies are, how analogies foster learning, and what form analogies should take. Second, they discuss science teachers' use of analogies. Third, they…
Kleinfelder, Stuart; Hottes, Alison; Pease, R. Fabian W.
2000-07-01
A pixel array readout integrated circuit (ROIC) containing per-pixel analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC) for infrared detectors is presented with design and test result details. Fabricated in a standard 0.35 micron, 3.3 volt CMOS technology, the prototype consists of a linear array of 64 pixels, containing over 100 transistors per 30 by 30 micron pixel. The 8-bit per-pixel ADC is a Nyquist-rate single-slope design consisting of a three-stage comparator and an 8-bit memory. This fully pixel-parallel ADC architecture operates in full-frame 'snapshot' mode and can reach over 1,000 frames per second. Each pixel also contains a cascoded current source, globally biased to subtract an identical, fixed amount of current from each pixel in order to remove a common background signal by 'charge skimming.' It operates over more than 3 decades of current cancellation (approximately 10 pA to > 10 nA). As well, each pixel contains a 4 to 6+ bit current-mode DAC, intended to trim out pixel-to-pixel variations in background current. It consists of 16 unit cells of switched cascoded current sources per pixel, organized as two separately biased weights and controlled by a 16-bit per-pixel memory. The DAC operates over more than 4 decades of current cancellation (< 10 pA to approximately 100 nA per least significant bit).
Photonic analog computing with integrated silicon waveguides
Dong, Jianji; Zheng, Aoleng; Zhang, Xinliang
2014-01-01
The spectra of silicon integrated waveguides are tailored to perform analog computing (i.e., differentiation and integration) in the optical domain with huge bandwidth. Using the theory of signals and systems, we design silicon integrated devices that implement a photonic differentiator and an optical differential equation solver. The basic principle is to tailor the spectra of silicon integrated waveguides to meet the requirements of analog computing circuits. These analog photonic integrated circuits are very promising for future computing systems with high speed, low cost, and compact size. We also plan to employ these basic computing units in more complex computing modules.
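The core idea, shaping a waveguide's spectral response into the differentiator transfer function H(omega) = j*omega, can be mimicked numerically. The discrete model below is a toy illustration (not the authors' device): it applies j*omega to the DFT spectrum of a sampled periodic signal, which is the spectral-domain analog of what the tailored waveguide does optically:

```python
import cmath
import math

def spectral_derivative(x, dt):
    """Differentiate a periodic sampled signal by multiplying its DFT
    spectrum by j*omega (the H(omega) = j*omega transfer function of an
    ideal differentiator). Assumes negligible energy at the Nyquist bin."""
    n = len(x)
    # forward DFT (O(n^2), fine for a small demonstration)
    X = [sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
         for m in range(n)]
    # multiply each bin by j*omega, using signed frequencies
    for m in range(n):
        f = m if m <= n // 2 else m - n
        X[m] *= 2j * math.pi * f / (n * dt)
    # inverse DFT, keeping the real part
    return [sum(X[m] * cmath.exp(2j * math.pi * m * k / n)
                for m in range(n)).real / n for k in range(n)]
```

For a sampled sine wave this recovers the cosine derivative to numerical precision, which is the behavior a photonic differentiator produces directly on the optical field.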
Fixed bed nuclear reactor concept
The fixed bed nuclear reactor (FBNR) is essentially a pressurized light water reactor (PWR) with spherical fuel elements constituting a suspended reactor core at its lowest bed porosity. The core is movable; thus, under any adverse condition, the fuel elements can leave the reactor core naturally under the force of gravity and fall into the passively cooled fuel chamber, or leave the reactor altogether and enter the spent fuel pool. It is a small, modular reactor of simple design. Its spent fuel is in such a convenient form and size that it may be utilized directly as a source for irradiation applications in agriculture and industry, a feature with a positive impact on waste management and environmental protection. The principal features of the proposed reactor are that the concept is polyvalent and simple in design, may operate as either a fixed or a fluidized bed, and has a suspended core contributing to inherent safety and passive cooling. The reactor is modular and has an integrated primary system utilizing water, supercritical steam, or helium gas as its coolant. Among its advantages are modularity, low environmental impact, exclusion of severe accidents, a short construction period, flexible adaptation to demand, excellent load-following characteristics, and competitive economics. The characteristics of the FBNR concept may be analyzed in light of the requirements set for Generation IV nuclear reactors. It is shown that the FBNR meets the goals of (1) providing sustainable energy generation that meets clean air objectives and promotes long-term availability of systems and effective fuel utilization for worldwide energy production, (2) minimizing and managing its nuclear waste and notably reducing the long-term stewardship burden, thereby improving protection of public health and the environment, and (3) excelling in safety and reliability
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
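The differentiation step described above, solving dP/dV = 0 for P(V) = V * I(V), can be sketched with a single-diode panel model. The model form and its parameter values below are illustrative assumptions, not values from the project:

```python
import math

# Illustrative single-diode PV model (assumed parameters):
#   I(V) = I_sc - I_0 * (exp(V / V_t) - 1)
I_SC = 5.0    # short-circuit current, A
I_0 = 1e-9    # diode saturation current, A
V_T = 0.7     # thermal-voltage scale for the whole panel, V

def power(v):
    """Panel output power P(V) = V * I(V)."""
    return v * (I_SC - I_0 * (math.exp(v / V_T) - 1.0))

def d_power(v, h=1e-6):
    """Numerical derivative dP/dV (central difference)."""
    return (power(v + h) - power(v - h)) / (2.0 * h)

def maximum_power_point(v_lo=0.0, v_hi=16.0, tol=1e-9):
    """Solve dP/dV = 0 by bisection: the derivative is positive at low
    voltage and negative near open circuit, so the root is bracketed."""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if d_power(v_mid) > 0.0:
            v_lo = v_mid
        else:
            v_hi = v_mid
    return v_mid, power(v_mid)

v_mp, p_mp = maximum_power_point()
```

Repeating this calculation with irradiance- and temperature-dependent parameters for each time of day yields the curves of voltage, current, and power at maximum power that the project plots.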
Split-panel jackknife estimation of fixed-effect models
DHAENE, Geert; Jochmans, Koen
2010-01-01
We propose a jackknife for reducing the order of the bias of maximum likelihood estimates of nonlinear dynamic fixed-effect panel models. In its simplest form, the half-panel jackknife, the estimator is just $2\\hat{\\theta} - \\bar{\\theta}_{1/2}$, where $\\hat{\\theta}$ is the MLE from the full panel and $\\bar{\\theta}_{1/2}$ is the average of the two half-panel MLEs, each using T/2 time periods and all N cross-sectional units. This estimator eliminates the first-order bias of $\\hat{\\theta}$. The o...
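The half-panel construction, twice the full-sample estimate minus the average of the two half-sample estimates, can be demonstrated on a toy estimator. The Monte Carlo below uses the ML variance estimator (bias -sigma^2/T), for which the jackknife removes the first-order bias; it is an illustration of the bias-correction mechanics, not the paper's dynamic panel setting:

```python
import random

def mle_variance(x):
    """ML variance estimator (divides by T): biased downward by sigma^2/T."""
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

def half_panel_jackknife(x, estimator):
    """2 * theta_hat - average of the two half-sample estimates,
    which cancels the O(1/T) term of the bias."""
    T = len(x)
    theta_full = estimator(x)
    theta_1 = estimator(x[: T // 2])
    theta_2 = estimator(x[T // 2 :])
    return 2.0 * theta_full - 0.5 * (theta_1 + theta_2)

# Monte Carlo: average both estimators over many N(0, 1) samples of size T.
random.seed(0)
reps, T = 4000, 10
samples = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(reps)]
avg_mle = sum(mle_variance(s) for s in samples) / reps
avg_jack = sum(half_panel_jackknife(s, mle_variance) for s in samples) / reps
```

Here the raw ML estimator averages near 0.9 (its expectation is (T-1)/T), while the jackknifed version centers on the true variance of 1, at the cost of somewhat higher sampling variance.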
In situ measurement of fixed charge evolution at silicon surfaces during atomic layer deposition
Ju, Ling; Watt, Morgan R.; Strandwitz, Nicholas C., E-mail: strand@lehigh.edu [Department of Materials Science and Engineering and Center for Advanced Materials and Nanotechnology, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)
2015-02-09
Interfacial fixed charge or interfacial dipoles are present at many semiconductor-dielectric interfaces and have important effects upon device behavior, yet the chemical origins of these electrostatic phenomena are not fully understood. We report the measurement of changes in Si channel conduction in situ during atomic layer deposition (ALD) of aluminum oxide using trimethylaluminum and water to probe changes in surface electrostatics. Current-voltage data were acquired continually before, during, and after the self-limiting chemical reactions that result in film growth. Our measurements indicated an increase in conductance on p-type samples with p+ ohmic contacts and a decrease in conductance on analogous n-type samples. Further, p+ contacted samples with n-type channels exhibited an increase in measured current and n+ contacted p-type samples exhibited a decrease in current under applied voltage. Device physics simulations, where a fixed surface charge was parameterized on the channel surface, connect the surface charge to changes in current-voltage behavior. The simulations and analogous analytical relationships for near-surface conductance were used to explain the experimental results. Specifically, the changes in current-voltage behavior can be attributed to the formation of a fixed negative charge or the modification of a surface dipole upon chemisorption of trimethylaluminum. These measurements allow for the observation of fixed charge or dipole formation during ALD and provide further insight into the electrostatic behavior at semiconductor-dielectric interfaces during film nucleation.
Fixed drug eruptions with modafinil
Loknath Ghoshal
2015-01-01
Modafinil is a psychostimulant drug approved by the US Food and Drug Administration for the treatment of narcolepsy-associated excessive daytime sleepiness, shift-work sleep disorder, and obstructive sleep apnea syndrome. However, it is presently being used as a lifestyle medicine; in India, it has been misused as an "over the counter" drug. Modafinil is known to have several cutaneous side effects. Fixed drug eruption (FDE) is a distinctive drug-induced reaction pattern characterized by recurrence of the eruption at the same site of the skin or mucous membrane with repeated systemic administration. Only two case reports describing modafinil-induced FDE exist in the literature to date. Here, we report two similar cases. The increasing use of this class of drug among medical personnel might pose a threat to proper use and encourage subsequent abuse. There might be a considerable population using these drugs unaware of the possible adverse effects. Authorities should be more alert regarding the sale and distribution of such medicines.
Fixed telephony evolution at CERN
CERN. Geneva
2015-01-01
The heart of CERN’s telephony infrastructure consists of the Alcatel IP-PBX that links CERN’s fixed-line phones, Lync softphones and CERN’s GSM subscribers to low-cost local and international telephony services. The PABX infrastructure also supports the emergency “red telephones” in the LHC tunnel and provides vital services for the Fire and Rescue Service and the CERN Control Centre. Although still reliable, the Alcatel hardware is costly to maintain and looks increasingly outmoded in a market where open-source solutions are becoming dominant. After presenting an overview of the Alcatel PABX and the services it provides, including innovative solutions such as the Closed User Group for our mobile telephony services, we present a possible architecture for a software-based system designed to meet tomorrow’s communication needs and describe how the introduction of open-source call routers based on the SIP protocol and Session Border Controllers (SBC) could foster the introduction...
Dr. Weinberg discusses some of the history of various aspects of the nuclear industry in light of the accident at Three Mile Island. The siting of commercial nuclear power plants near population centers is a result of the opening up of nuclear operations to the private sector. Early reactors established as part of the Manhattan Project were all remote from unknowledgeable populations. When the utilities began to construct nuclear plants, their siting tended to conform to the practices already established for generating plants. With the nearness of nuclear power plants to people, and with media coverage of accidents so widespread, the public perception of risk associated with nuclear energy deserves attention. The primary issue concerning nuclear power plants is, according to Dr. Weinberg, the 15 billion curies in an operating reactor and the possibility of their release. He identifies six characteristics necessary for an acceptable nuclear energy system: technical fixes; physical isolation; separation of generation and distribution; professionalization of the nuclear cadre; heightened security; and, perhaps most difficult, public education about the hazards of radiation. The major alternatives to fission - geothermal, fusion, fossil, and the various forms of solar energy - are discussed briefly
Photoresistance analog multiplier has wide range
Hartenstein, R. G.
1965-01-01
Photoactivated bridge facilitates equal performance of analog multipliers over a wide frequency range. The multiplier operates from direct current to an upper frequency limited by either the light source or the closed-loop amplifier.
The Analog (Computer) As a Physiology Adjunct.
Stewart, Peter A.
1979-01-01
Defines and discusses the analog computer and its use in a physiology laboratory. Includes two examples: (1) The Respiratory Control Function and (2) CO-Two Control in the Respiratory System. Presents diagrams and mathematical models. (MA)
An Electrical Analog Computer for Poets
Bruels, Mark C.
1972-01-01
Nonphysics majors are presented with a direct-current experiment beyond Ohm's law and the series and parallel laws. This involves construction of an analog computer from common rheostats and student-assembled voltmeters. (Author/TS)
Optical analog-to-digital converter
Vawter, G. Allen; Raring, James; Skogen, Erik J.
2009-07-21
An optical analog-to-digital converter (ADC) is disclosed which converts an input optical analog signal to an output optical digital signal at a sampling rate defined by a sampling optical signal. Each bit of the digital representation is separately determined using an optical waveguide interferometer and an optical thresholding element. The interferometer uses the optical analog signal and the sampling optical signal to generate a sinusoidally-varying output signal using cross-phase-modulation (XPM) or a photocurrent generated from the optical analog signal. The sinusoidally-varying output signal is then digitized by the thresholding element, which includes a saturable absorber or at least one semiconductor optical amplifier, to form the optical digital signal which can be output either in parallel or serially.
Analog optical transmission of fast photomultiplier pulses over distances of 2 km
Karle, A; Cichos, S; Hundertmark, S; Pandel, D; Spiering, C; Streicher, O; Thon, T; Wiebusch, C; Wischnewski, R
1997-01-01
New LED-transmitters have been used to develop a new method of fast analog transmission of PMT pulses over large distances. The transmitters, consisting basically of InGaAsP LEDs with the maximum emission of light at 1300 nm, allow the transmission of fast photomultiplier pulses over distances of more than 2 km. The shape of the photomultiplier pulses is maintained, with an attenuation less than 1 dB/km. Typical applications of analog optical signal transmission are surface air shower detectors and underwater/ice neutrino experiments, which measure fast Cherenkov or scintillator pulses at large detector distances to the central DAQ system.
Fixed Point Theory and the Ulam Stability
Brzdęk, Janusz; Cădariu, Liviu; Ciepliński, Krzysztof
2014-01-01
The fixed point method was first applied to proving stability results for functional equations by Baker (1991), who used a variant of Banach's fixed point theorem to obtain the stability of a functional equation in a single variable. However, most authors follow approaches involving a theorem of Diaz and Margolis. The main aim of this survey is to present applications of different fixed point theorems to the theory of stability of functional equations, motivated by ...
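Banach's theorem, on which Baker's variant rests, guarantees that iterating a contraction converges to its unique fixed point. A minimal numerical illustration (not tied to any particular functional equation from the survey) uses the classic contraction x -> cos(x):

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=10000):
    """Banach fixed-point iteration: for a contraction f on a complete
    metric space, x_{n+1} = f(x_n) converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction near its fixed point (|cos'| = |sin| < 1 there),
# so the iteration converges to the Dottie number, approx 0.739085.
fixed = banach_iterate(math.cos, 1.0)
```

The stability arguments surveyed here use the same mechanism abstractly: an approximate solution of a functional equation is shown to lie close to the fixed point of a suitable contraction, and that fixed point is the exact solution.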
Ryoko Oki
2015-01-01
This study considers the anti-competitive effect of fixed-fee pricing, such as that seen in a recent antitrust case in Japan. We show that fixed-fee pricing has a stronger exclusionary effect than per-use pricing. However, restricting the use of fixed-fee pricing may decrease welfare, although the restriction promotes entry.
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent int...
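The min-cut/max-flow duality the abstract relies on (the value of a maximum s-t flow equals the capacity of a minimum s-t cut) can be checked with a generic augmenting-path method. The sketch below is plain Edmonds-Karp on an adjacency matrix, not the O(n log log n) planar algorithms or the dynamic data structure from the paper:

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max flow between s and t. Each undirected edge (u, v, c)
    is modeled as capacity c in both directions, which gives the correct
    undirected max-flow (and hence min-cut) value."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path: flow equals the min-cut value
        # bottleneck capacity along the path
        b, v = float("inf"), t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        # augment along the path
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b
```

Edmonds-Karp runs in O(VE^2), which is exactly the kind of bound the paper's planar-specific algorithms improve upon by exploiting the planar embedding.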
Implementing neural architectures using analog VLSI circuits
Maher, Mary Ann C.; DeWeerth, Stephen P.; Mahowald, Misha A.; Mead, Carver A.
1989-01-01
Analog very large-scale integrated (VLSI) technology can be used not only to study and simulate biological systems, but also to emulate them in designing artificial sensory systems. A methodology for building these systems in CMOS VLSI technology has been developed using analog micropower circuit elements that can be hierarchically combined. Using this methodology, experimental VLSI chips of visual and motor subsystems have been designed and fabricated. These chips exhibit behavior similar to...
Pigeons, Rats, and Humans Show Analogous Misinformation
Garry, Maryanne; Harper, David N
2009-01-01
In three experiments, we show that pigeons, rats, and humans can be influenced by misleading postevent information in ways analogous to findings in the human memory-distortion literature. We used a delayed matching-to-sample analog of the eyewitness testimony procedure from Loftus et al. (1978), and varied the length of the delay between the event and exposure to postevent information (PEI). We also varied the nature of the PEI so that it was consistent with the event information, inconsistent, or neut...
Analog VLSI model of binaural hearing
Mead, Carver A.; Arreguit, Xavier; Lazzaro, John
1991-01-01
The stereausis model of biological auditory processing was proposed as a representation that encodes both binaural and spectral information in a unified framework. We describe a working analog VLSI chip that implements this model of early auditory processing in the brain. The chip is a 100 000-transistor integrated circuit that computes the stereausis representation in real time, using continuous-time analog processing. The chip receives two audio inputs, representing sou...