Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit;
2014-01-01
to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary...
Takeda, K.; Ochiai, H.; Takeuchi, S.
1985-01-01
Maximum snow water equivalence and snowcover distribution are estimated using LANDSAT data acquired during the snowmelt season over a four-year period. The test site is the Okutadami-gawa Basin, located in the central part of the Tohoku-Kanto-Chubu District. The year-to-year normalization for snowmelt volume computation on the snow line is conducted by year-to-year correction of degree days, using the snowcover percentage within the test basin obtained from LANDSAT data. The maximum snow water equivalent map of the test basin is generated from the normalized snowmelt volume on the snow line extracted from four LANDSAT scenes taken in different years. The snowcover distribution on an arbitrary day during the 1982 snowmelt season is estimated from the maximum snow water equivalent map. The estimated snowcover is compared with the snowcover area extracted from NOAA-AVHRR data taken on the same day. The applicability of snow estimation using LANDSAT data is discussed.
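The degree-day correction alluded to above can be illustrated with a minimal sketch; the degree-day factor and temperatures below are illustrative values, not figures from the study:

```python
def snowmelt_mm(degree_day_factor, daily_mean_temps):
    """Total snowmelt (mm water equivalent): melt on each day is the
    degree-day factor times the positive part of the mean temperature."""
    return sum(degree_day_factor * max(t, 0.0) for t in daily_mean_temps)

temps = [2.0, 5.5, -1.0, 4.0]  # daily mean air temperature in deg C (made up)
ddf = 4.0                      # mm w.e. per deg C per day (assumed value)
total_melt = snowmelt_mm(ddf, temps)  # 4.0 * (2.0 + 5.5 + 0.0 + 4.0) = 46.0 mm
```

Correcting the accumulated degree days year by year, as the study does, amounts to rescaling the input to this sum so that melt volumes on the snow line become comparable across years.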
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)
2014-08-15
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
Hodder, Joanne N; Keir, Peter J
2013-10-01
Muscle specific maximal voluntary isometric contractions (MVIC) are commonly used to elicit reference amplitudes to normalize electromyographic signals (EMG). It has been questioned whether this is appropriate for normalizing EMG from dynamic contractions. This study compares EMG amplitude when shoulder muscle activity from dynamic contractions is normalized to isometric and isokinetic maximal excitation as well as a hybrid approach currently used in our laboratory. Anterior, middle and posterior deltoid, upper and lower trapezius, pectoralis major, latissimus dorsi and infraspinatus were monitored during (1) manually resisted MVICs, and (2) maximum voluntary dynamic concentric contractions (MVDC) on an isokinetic dynamometer. Dynamic contractions were performed (a) at 30°/s about the longitudinal, frontal and sagittal axes of the shoulder, and (b) during manual bi-rotation of a tilted wheel at 120°/s. EMG from the wheel task was normalized to the maximum excitation from (i) the muscle specific MVIC, (ii) from any MVIC (MVICALL), (iii) from any MVDC, (iv) from any exertion (maximum experimental excitation, MEE). Mean EMG from the wheel task was up to 45% greater when normalized to muscle specific isometric contractions (method i) than when normalized to MEE (method iv). Seventy-five percent of MEEs occurred during MVDCs. This study presents a useful and effective process for obtaining the greatest excitation from the shoulder muscles when normalizing dynamic efforts.
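The effect of the normalization reference on reported EMG amplitude comes down to a simple division; a minimal sketch, where all amplitudes are made-up illustrative values rather than data from the study:

```python
def normalize_emg(signal, reference_max):
    """Express EMG samples as a percentage of a reference maximum excitation."""
    return [100.0 * s / reference_max for s in signal]

task_emg = [0.20, 0.35, 0.50]  # task EMG in arbitrary units (illustrative)
mvic_max = 0.60                # muscle-specific isometric maximum (assumed)
mee_max = 0.90                 # maximum experimental excitation (assumed)
by_mvic = normalize_emg(task_emg, mvic_max)
by_mee = normalize_emg(task_emg, mee_max)
# The identical task EMG reads 50% higher against the smaller MVIC reference.
```

Because the MVIC reference is smaller than the true maximum excitation, normalizing to it inflates every percentage, which is the bias the study quantifies.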
The use of response surface analysis in obtaining maximum profit in oil palm industry
Ahmad Tarmizi Mohammed
2006-03-01
This study was conducted to show how to use Response Surface Analysis to obtain the optimum fertilizer levels needed by oil palm. Ridge analysis was proposed to overcome the saddle-point problem. Data from the Malaysian Palm Oil Board database were analyzed. The fertilizers considered were N, P, K and Mg. The ridge analysis provided several alternative fertilizer combinations. Profit analysis was then applied to determine the best combination of fertilizers needed by the oil palm in order to generate maximum profit. N and K fertilizers were found to be the most important fertilizers required by the oil palm. The N and K nutrient concentrations in the foliar nutrient composition were also found to be higher than those of other nutrients. Three different stations were considered, and it was found that the fertilizers needed by the oil palm and the foliar nutrient composition differed across the different soil series.
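The response-surface idea can be sketched in one dimension: fit a quadratic yield curve by least squares and locate its stationary point. The doses and yields below are illustrative values, not MPOB data, and the full study works with four nutrients rather than one:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x**2 via the normal equations,
    solved with a tiny Gauss-Jordan elimination (a stand-in for full
    response-surface software)."""
    n = len(xs)
    sx = sum(xs)
    sx2 = sum(x ** 2 for x in xs)
    sx3 = sum(x ** 3 for x in xs)
    sx4 = sum(x ** 4 for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    A = [[n, sx, sx2, sy],
         [sx, sx2, sx3, sxy],
         [sx2, sx3, sx4, sx2y]]
    for i in range(3):  # eliminate column i in the augmented system
        piv = A[i][i]
        A[i] = [a / piv for a in A[i]]
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [a - f * b for a, b in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]  # b0, b1, b2

# Illustrative single-nutrient dose-response data (not MPOB figures)
dose = [0.0, 1.0, 2.0, 3.0, 4.0]         # e.g. kg N per palm per year
yield_ = [20.0, 26.0, 29.0, 29.0, 26.0]  # e.g. t FFB per ha
b0, b1, b2 = fit_quadratic(dose, yield_)
optimum_dose = -b1 / (2.0 * b2)  # stationary point of the fitted parabola
```

With several nutrients, the stationary point of the fitted surface can be a saddle rather than a maximum, which is why the study turns to ridge analysis instead of reading the optimum directly off the stationary point.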
Berkeley Supernova Ia Program II: Initial Analysis of Spectra Obtained Near Maximum Brightness
Silverman, Jeffrey M; Filippenko, Alexei V
2012-01-01
In this second paper in a series we present measurements of spectral features of 432 low-redshift (z < 0.1) optical spectra of 261 Type Ia supernovae (SNe Ia) within 20 d of maximum brightness. The data were obtained from 1989 through the end of 2008 as part of the Berkeley SN Ia Program (BSNIP) and are presented in BSNIP I (Silverman et al., submitted). We describe in detail our method of automated, robust spectral feature definition and measurement which expands upon similar previous studies. Using this procedure, we attempt to measure expansion velocities, pseudo-equivalent widths (pEW), spectral feature depths, and fluxes at the centre and endpoints of each of nine major spectral feature complexes. A sanity check of the consistency of our measurements is performed using our data (as well as a separate spectral dataset). We investigate how velocity and pEW evolve with time and how they correlate with each other. Various spectral classification schemes are employed and quantitative spectral differences a...
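A pseudo-equivalent width is conventionally measured against a linear pseudo-continuum drawn between the feature's endpoints; a minimal sketch of that convention follows (the BSNIP pipeline's exact implementation may differ):

```python
def pseudo_equivalent_width(wavelengths, fluxes):
    """pEW of an absorption feature, measured against a straight-line
    pseudo-continuum drawn between the feature's endpoints (a common
    convention; the BSNIP pipeline's details may differ)."""
    w0, w1 = wavelengths[0], wavelengths[-1]
    f0, f1 = fluxes[0], fluxes[-1]
    pew = 0.0
    for i in range(len(wavelengths) - 1):
        dw = wavelengths[i + 1] - wavelengths[i]
        cont = f0 + (f1 - f0) * (wavelengths[i] - w0) / (w1 - w0)
        pew += (1.0 - fluxes[i] / cont) * dw  # depth below the continuum
    return pew

# A feature dipping to half the continuum over half the window has pEW 0.5
pew = pseudo_equivalent_width([0.0, 1.0, 2.0], [1.0, 0.5, 1.0])
```

The pEW so defined has units of wavelength and is insensitive to an overall flux scaling, which is what makes it a robust spectral-feature measure across heterogeneously calibrated spectra.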
Voltolin, Tatiana Aparecida; Laudicina, Alejandro; Senhorini, José Augusto; Bortolozzi, Jehud; Oliveira, Cláudio; Foresti, Fausto; Porto-Foresti, Fábio
2010-12-01
In Prochilodus lineatus, B-chromosomes are visualized as extra elements of reduced size, identified as microchromosomes, and are variable in morphology and number. We describe a specific total probe (B-chromosome probe) for P. lineatus obtained by chromosome microdissection, and a whole genomic probe (genomic probe) from an individual without B-chromosomes. The specific B-chromosome was scraped and processed to obtain DNA amplified by DOP-PCR, as was the genomic probe DNA. Fluorescence in situ hybridization using the B-chromosome probe labeled with dUTP-Tetramethyl-rhodamine and the genomic probe labeled with digoxigenin-FITC established that, in this species, the supernumerary chromosomes of varying number and morphology have a chromatin structure different from that of the regular chromosomes (A complement), since only these extra elements were labeled in the metaphases. The present findings suggest that modifications in the chromatin structure of B-chromosomes, differentiating them from the A chromosomes, could have occurred along their dispersion in the individuals of the population.
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
The use of response surface analysis in obtaining maximum profit in oil palm industry
Ahmad Tarmizi Mohammed; Khalid Haron; Zuhaimy Ismail; Azme Khamis
2006-01-01
This study was conducted to show how to use Response Surface Analysis in obtaining the optimum level of fertilizer needs by oil palm. The ridge analysis was proposed to overcome the saddle point problem. Data from Malaysian Palm Oil Board database was analyzed. The fertilizers considered are N, P, K and Mg. The results from ridge analysis provided several alternatives of the fertilizer combination. Profit analysis was then applied to determine the best combination of fertilizers needed by the...
Im, Jooeun; Kim, Mihyun; Choi, Ki-Sun; Hwang, Tae-Kyung; Kwon, Il-Bum
2014-06-10
In this paper, new fiber Bragg grating (FBG) sensor probes are designed to intermittently detect the maximum tensile strain of composite materials, so as to evaluate their structural health status. The probe is fabricated from two thin Al films bonded to an FBG optical fiber and two supporting brackets, which are fixed on the surface of the composite material. The residual strain of the Al-packaged FBG sensor probe is induced by the strain of the composite material, and this residual strain can indicate the composite's maximum strain. Two types of sensor probes are prepared, one with 18 μm thick Al films and the other with 36 μm thick Al films, to compare the effect of film thickness on detection sensitivity. These sensor probes are bonded on the surfaces of carbon fiber reinforced plastic composite specimens. In order to determine the strain sensitivity relating the residual strain of the FBG sensor probe to the maximum strain of the composite specimen, tensile tests are performed with a universal testing machine under loading-unloading conditions. The strain sensitivities of the probes with Al thicknesses of 18 and 36 μm are determined as 0.13 and 0.23, respectively.
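The reported strain sensitivities are slopes of residual strain versus applied maximum strain; a minimal sketch of extracting such a slope, where the loading data are hypothetical values constructed to give the paper's 0.23 figure:

```python
def strain_sensitivity(max_strains, residual_strains):
    """Least-squares slope of residual strain vs. applied maximum strain,
    i.e. the probe's strain sensitivity."""
    n = len(max_strains)
    mx = sum(max_strains) / n
    my = sum(residual_strains) / n
    num = sum((x - mx) * (y - my) for x, y in zip(max_strains, residual_strains))
    den = sum((x - mx) ** 2 for x in max_strains)
    return num / den

# Hypothetical loading-unloading data for the 36 um probe (slope 0.23)
applied = [1000.0, 2000.0, 3000.0, 4000.0]  # applied maximum microstrain
residual = [230.0, 460.0, 690.0, 920.0]     # residual microstrain in the FBG
sensitivity = strain_sensitivity(applied, residual)
```

A higher slope means a larger fraction of the peak strain is retained as residual strain, i.e. a more sensitive maximum-strain memory, consistent with the thicker 36 μm films outperforming the 18 μm ones.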
Combining ligation reaction and capillary gel electrophoresis to obtain reliable long DNA probes.
García-Cañas, Virginia; Mondello, Monica; Cifuentes, Alejandro
2011-05-01
New DNA amplification methods are continuously developed for sensitive detection and quantification of specific DNA target sequences for, e.g. clinical, environmental or food applications. These new applications often require the use of long DNA oligonucleotides as probes for target sequence hybridization. Depending on the molecular technique, the length of DNA probes ranges from 40 to 450 nucleotides, solid-phase chemical synthesis being the strategy generally used for their production. However, the fidelity of chemical synthesis of DNA decreases for longer DNA probes. Defects in the oligonucleotide sequence result in the loss of hybridization efficiency, affecting the sensitivity and selectivity of the amplification method. In this work, an enzymatic procedure has been developed as an alternative to solid-phase chemical synthesis for the production of long oligonucleotides. The enzymatic procedure for probe production was based on ligation of short DNA sequences. Long DNA probes were obtained from smaller oligonucleotides together with a short sequence that acts as a bridge stabilizing the molecular complex for DNA ligation. The ligation reactions were monitored by capillary gel electrophoresis with laser-induced fluorescence detection (CGE-LIF) using a bare fused-silica capillary. The CGE-LIF method proved very useful and informative for the characterization of the ligation reaction, providing important information about the nature of some impurities, as well as for the fine optimization of the ligation conditions (i.e. ligation cycles, oligonucleotide and enzyme concentration). As a result, the yield and quality of the ligation product were highly improved. The in-lab prepared DNA probes were used in a novel multiplex ligation-dependent genome amplification (MLGA) method for the detection of genetically modified maize in samples. The great possibilities of the whole approach were demonstrated by the specific and sensitive
Hernandez, P. [Lawrence Berkeley Lab., CA (United States)
1995-02-01
This paper is an expansion of engineering notes prepared in 1961 to address the question of how to wind circular coils so as to obtain the maximum axial field with the minimum volume of conductor. At the time this was a germane question because of the advent of superconducting wires, which were in very limited supply, and the rapid push toward the generation of very high fields with little concern for uniformity.
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
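The procedure described, the EM fixed-point map relaxed by a step size between 0 and 2, can be sketched for a one-dimensional normal mixture. This is an illustrative reimplementation under that generic formulation, not the authors' code:

```python
import math

def em_step(data, weights, means, variances, omega=1.0):
    """One successive-approximations update for a 1-D normal mixture.
    omega = 1 gives the classic EM map Phi; the paper's result is that
    theta + omega * (Phi(theta) - theta) converges locally for 0 < omega < 2.
    (Illustrative sketch, not the authors' code.)"""
    n = len(data)
    K = len(means)
    # E-step: posterior responsibility of each component for each point
    resp = []
    for x in data:
        p = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
             for w, m, v in zip(weights, means, variances)]
        s = sum(p)
        resp.append([pi / s for pi in p])
    # M-step: the EM fixed-point map Phi(theta)
    new_w, new_m, new_v = [], [], []
    for k in range(K):
        rk = sum(r[k] for r in resp)
        mk = sum(r[k] * x for r, x in zip(resp, data)) / rk
        vk = sum(r[k] * (x - mk) ** 2 for r, x in zip(resp, data)) / rk
        new_w.append(rk / n)
        new_m.append(mk)
        new_v.append(vk)
    # Deflected-gradient relaxation with step size omega
    relax = lambda old, new: [o + omega * (nw - o) for o, nw in zip(old, new)]
    return relax(weights, new_w), relax(means, new_m), relax(variances, new_v)

# Two well-separated clusters; plain EM (omega = 1) recovers their centers.
data = [-2.2, -2.0, -1.8, 1.8, 2.0, 2.2]
weights, means, variances = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]
for _ in range(100):
    weights, means, variances = em_step(data, weights, means, variances, omega=1.0)
```

The paper's point is that because well-separated components make the EM map contract strongly, a step size above 1 (over-relaxation) can accelerate local convergence without losing it, as long as omega stays below 2.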
Maximum-Likelihood Sequence Detector for Dynamic Mode High Density Probe Storage
Kumar, Naveen; Ramamoorthy, Aditya; Salapaka, Murti
2009-01-01
There is an ever-increasing need for storing data in smaller and smaller form factors, driven by the ubiquitous use and increased demands of consumer electronics. A new approach for achieving areal densities of a few Tb/in² utilizes a cantilever probe with a sharp tip that can be used to deform and assess the topography of the material. The information may be encoded by means of topographic profiles on a polymer medium. The prevalent mode of using the cantilever probe is the static mode, which is known to be harsh on the probe and the media. In this paper, the high quality factor dynamic mode operation, which is known to be less harsh on the media and the probe, is analyzed for probe-based high density data storage purposes. It is demonstrated that an appropriate level of abstraction is possible that obviates the need for an involved physical model. The read operation is modeled as a communication channel which incorporates the inherent system memory due to the intersymbol interference and the cantilever state ...
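Maximum-likelihood sequence detection over a channel with intersymbol interference is typically carried out with the Viterbi algorithm; a minimal sketch for a generic 2-tap ISI channel with squared-error metric (the paper's channel also includes cantilever-state memory, which is omitted here):

```python
def viterbi_isi(received, h=(1.0, 0.5), symbols=(0.0, 1.0)):
    """ML sequence detection for the noisy 2-tap ISI channel
    y_k = h[0]*x_k + h[1]*x_{k-1}, assuming the previous symbol before the
    block is symbols[0]. Generic textbook formulation, not the paper's model."""
    states = symbols  # trellis state = the previous symbol
    cost = {s: (0.0 if s == symbols[0] else float("inf")) for s in states}
    path = {s: [] for s in states}
    for y in received:
        new_cost, new_path = {}, {}
        for s_new in states:          # candidate current symbol
            best_c, best_s = None, None
            for s_old in states:      # candidate previous symbol
                c = cost[s_old] + (y - (h[0] * s_new + h[1] * s_old)) ** 2
                if best_c is None or c < best_c:
                    best_c, best_s = c, s_old
            new_cost[s_new] = best_c
            new_path[s_new] = path[best_s] + [s_new]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]

decoded = viterbi_isi([1.0, 0.5, 1.0, 1.5])  # recovers [1.0, 0.0, 1.0, 1.0]
```

The trellis keeps one surviving path per channel state, so the search cost grows linearly in sequence length rather than exponentially, which is what makes ML sequence detection practical for read channels with memory.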
Esfandiar, Habib; Korayem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)]
2015-09-15
In this study, we examine the nonlinear dynamic analysis and determination of the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure is presented based on a mixed finite element formulation; in the proposed method, the shear deformation is free from shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small- and large-deformation models using the extended Hamilton method. The system equations of motion are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a prescribed path, subject to constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulator. Simulation studies are conducted to evaluate the efficiency of the proposed method for a two-link flexible manipulator with a fixed base following linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and good performance of the proposed method.
Probing Ionic Liquid Aqueous Solutions Using Temperature of Maximum Density Isotope Effects
Mohammad Tariq
2013-03-01
This work is a new development of an extensive research program that is investigating, for the first time, shifts in the temperature of maximum density (TMD) of aqueous solutions caused by ionic liquid solutes. In the present case, we have compared the shifts caused by three ionic liquid solutes with a common cation (1-ethyl-3-methylimidazolium) coupled with acetate, ethylsulfate and tetracyanoborate anions, in normal and deuterated water solutions. The observed differences are discussed in terms of the nature of the corresponding anion-water interactions.
Maximum probing depth of low-energy photoelectrons in an amorphous organic semiconductor film
Ozawa, Yusuke [Graduate School of Advanced Integration Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522 (Japan); Nakayama, Yasuo, E-mail: nkym@restaff.chiba-u.jp [Graduate School of Advanced Integration Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522 (Japan); Machida, Shin’ichi; Kinjo, Hiroumi [Graduate School of Advanced Integration Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522 (Japan); Ishii, Hisao [Graduate School of Advanced Integration Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522 (Japan); Center for Frontier Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522 (Japan)
2014-12-15
Highlights: Photoelectron attenuation lengths (AL) through amorphous organic films were examined. In the energy range below 9 eV, the AL fluctuates, unlike the prediction of the universal curve. The AL for photoelectron yield spectroscopy (PYS) measurements was found to be ∼3.6 nm. PYS signals still survived through an 18 nm thick film despite this moderate AL, indicating that buried interfaces in practical organic devices can be accessed by PYS.
Abstract: The attenuation length (AL) of low-energy photoelectrons inside a thin film of a π-conjugated organic semiconductor material, 2,2′,2″-(1,3,5-benzinetriyl)-tris(1-phenyl-1-H-benzimidazole), was investigated using ultraviolet photoelectron spectroscopy (UPS) and photoelectron yield spectroscopy (PYS) to assess their probing depth in amorphous organic thin films. The present UPS results indicate that the AL is 2–3 nm in the electron energy range of 6.3–8.3 eV with respect to the Fermi level, while the PYS measurements, which collected the excited electrons in a range of 4.5–6 eV, exhibited a longer AL of 3.6 nm. Although this AL is still short in comparison to the typical thickness range of electronic devices (a few tens of nm), the photoemission signal penetrating through an even thicker (18 nm) organic film was successfully detected by PYS. This fact suggests that the electronic structures of “buried interfaces” inside practical organic devices are accessible using this rather simple measurement technique.
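The role of the attenuation length follows from simple exponential attenuation, I = I0·exp(-d/AL); a minimal sketch using the paper's PYS numbers:

```python
import math

def surviving_fraction(thickness_nm, attenuation_length_nm):
    """Fraction of photoelectrons escaping through an overlayer, assuming
    simple exponential attenuation I = I0 * exp(-d / AL)."""
    return math.exp(-thickness_nm / attenuation_length_nm)

# With AL = 3.6 nm (the PYS value), an 18 nm film transmits exp(-5),
# about 0.7% of the signal -- small, but evidently still detectable.
fraction = surviving_fraction(18.0, 3.6)
```

This back-of-the-envelope number makes the paper's point concrete: even at five attenuation lengths of overlayer, a sub-percent surviving signal can suffice for PYS to probe a buried interface.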
Nitroreductase-triggered activation of a novel caged fluorescent probe obtained from methylene blue.
Bae, Jungeun; McNamara, Louis E; Nael, Manal A; Mahdi, Fakhri; Doerksen, Robert J; Bidwell, Gene L; Hammer, Nathan I; Jo, Seongbong
2015-08-18
A near-infrared fluorescent probe based on methylene blue (p-NBMB) was developed for the detection of nitroreductase. Conjugating methylene blue with a p-nitrobenzyl moiety enables it to be activated by nitroreductase-catalyzed 1,6-elimination, resulting in the release of an active methylene blue fluorophore.
Capó-Lugo, Carmen E; Mullens, Christopher H; Brown, David A
2012-10-11
Previous studies demonstrated that stroke survivors have a limited capacity to increase their walking speeds beyond their self-selected maximum walking speed (SMWS). The purpose of this study was to determine the capacity of stroke survivors to reach faster speeds than their SMWS while walking on a treadmill belt or while being pushed by a robotic system (i.e. "push mode"). Eighteen chronic stroke survivors with hemiplegia were involved in the study. We calculated their self-selected comfortable walking speed (SCWS) and SMWS overground using a 5-meter walk test (5-MWT). Then, they were exposed to walking at increased speeds, on a treadmill and while in "push mode" in an overground robotic device, the KineAssist, until they were tested at a speed that they could not sustain without losing balance. We recorded the time and number of steps during each trial and calculated gait speed, average cadence and average step length. Maximum walking speed in the "push mode" was 13% higher than the maximum walking speed on the treadmill, and both were higher ("push mode": 61%; treadmill: 40%) than the maximum walking speed overground. Subjects achieved these faster speeds by initially increasing both step length and cadence and, once individuals stopped increasing their step length, by only increasing cadence. With post-stroke hemiplegia, individuals are able to walk at faster speeds than their SMWS overground when provided with a safe environment that applies external forces requiring them to maintain dynamic stability at higher gait speeds. Therefore, this study suggests the possibility that, given the appropriate conditions, people post-stroke can be trained at higher speeds than previously attempted.
Capó-Lugo Carmen E
2012-10-01
Abstract Background: Previous studies demonstrated that stroke survivors have a limited capacity to increase their walking speeds beyond their self-selected maximum walking speed (SMWS). The purpose of this study was to determine the capacity of stroke survivors to reach faster speeds than their SMWS while walking on a treadmill belt or while being pushed by a robotic system (i.e. “push mode”). Methods: Eighteen chronic stroke survivors with hemiplegia were involved in the study. We calculated their self-selected comfortable walking speed (SCWS) and SMWS overground using a 5-meter walk test (5-MWT). Then, they were exposed to walking at increased speeds, on a treadmill and while in “push mode” in an overground robotic device, the KineAssist, until they were tested at a speed that they could not sustain without losing balance. We recorded the time and number of steps during each trial and calculated gait speed, average cadence and average step length. Results: Maximum walking speed in the “push mode” was 13% higher than the maximum walking speed on the treadmill, and both were higher (“push mode”: 61%; treadmill: 40%) than the maximum walking speed overground. Subjects achieved these faster speeds by initially increasing both step length and cadence and, once individuals stopped increasing their step length, by only increasing cadence. Conclusions: With post-stroke hemiplegia, individuals are able to walk at faster speeds than their SMWS overground when provided with a safe environment that applies external forces requiring them to maintain dynamic stability at higher gait speeds. Therefore, this study suggests the possibility that, given the appropriate conditions, people post-stroke can be trained at higher speeds than previously attempted.
Maria Magdalena Dresler
2017-06-01
Introduction: Implants used to treat patients with urogynecological conditions are clearly visible on ultrasound (US) examination. The position of the suburethral tape (sling) is determined in relation to the urethra or the pubic symphysis. Aim of the study: The study was aimed at assessing the accuracy of measurements determining suburethral tape location obtained in pelvic US examination performed with a transvaginal probe. Material and methods: The analysis covered the results of sonographic measurements obtained according to a standardized technique in women referred for urogynecological diagnostics. Data from a total of 68 patients were used to analyse the repeatability and reproducibility of results obtained on the same day. Results: The intraclass correlation coefficient for the repeatability and reproducibility of the sonographic measurements of suburethral tape location obtained with a transvaginal probe ranged from 0.6665 to 0.9911. The analysis confirmed the consistency of the measurements to be excellent or good. Conclusions: The excellent and good repeatability and reproducibility of the measurements of suburethral tape location obtained in a pelvic ultrasound performed with a transvaginal probe confirm the test’s validity and usefulness for clinical and academic purposes.
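Repeatability and reproducibility here are quantified with intraclass correlation coefficients; a minimal sketch of a one-way random-effects ICC(1,1), noting that the abstract does not specify which ICC model the study used:

```python
def icc_oneway(measurements):
    """One-way random-effects ICC(1,1), a common repeatability index.
    `measurements` is a list of rows, one row of repeated ratings per
    subject. (A sketch; the study's exact ICC model is not stated.)"""
    n = len(measurements)
    k = len(measurements[0])
    grand = sum(sum(row) for row in measurements) / (n * k)
    subj_means = [sum(row) / k for row in measurements]
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(measurements, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly repeated measurements give ICC = 1.0
icc = icc_oneway([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

Values approaching 1 mean between-subject variance dominates measurement noise, which is why the study's range of 0.6665 to 0.9911 is read as good to excellent consistency.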
Ivan B. Brukner
2010-04-01
This paper describes the technical and analytical performance of a novel set of hybridization probes for the four GARDASIL® vaccine-relevant HPV types (6, 11, 16 and 18). These probes were obtained through in vitro selection from a pool of random oligonucleotides, rather than by the traditional “rational design” approach typically used as the initial step in assay development. The type-specific segment of the HPV genome was amplified using a GP5+/6+ PCR protocol, with 39 synthetic oligonucleotide templates derived from each of the HPV types as PCR targets. The robust performance of the four selected hybridization probes was demonstrated by monitoring the preservation of the specificity and sensitivity of the typing assay over all 39 HPV types, using a spectrum of HPV (10³–10⁹ genome equivalents) and human DNA concentrations (10–100 ng) as well as variations in temperature and buffer composition. To the authors' knowledge, this is a unique hybridization-based multiplex typing assay: it performs at ambient temperature, does not require strict temperature control of hybridization conditions, and is functional with a number of different non-denaturing buffers, thereby offering downstream compatibility with a variety of detection methods. Studies aimed at demonstrating clinical performance are needed to validate the applicability of this strategy.
Taki, Masumi; Inoue, Hiroaki; Mochizuki, Kazuto; Yang, Jay; Ito, Yuji
2016-01-19
To obtain a molecular probe for specific protein detection, we synthesized a fluorogenic probe library of vast diversity on bacteriophage T7 via gp10-based thioetherification (10BASE(d)-T). A remarkable color-changing, turn-on probe was selected from the library, and its physicochemical properties upon target-specific binding were determined. Combined analyses of fluorescence emission titration, isothermal titration calorimetry (ITC) and quantitative saturation-transfer difference (STD) NMR measurements, followed by in silico docking simulation, rationalized the most plausible geometry of the ligand-protein interaction.
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Moschetta, M.; Stabile Ianora, A.A.; Anglani, A.; Scardapane, A.; Angelelli, G. [University of Bari Medical School, Department of Radiology, Bari (Italy); Marzullo, A. [University of Bari Medical School, Department of Pathological Anatomy, Bari (Italy)
2010-01-15
This study aims to evaluate the diagnostic accuracy of 16-row multidetector CT (MDCT) and vessel probe (VP) reconstructions in the T staging of gastric carcinoma. Fifty-three patients (39 men, 14 women, mean age 57.5) with an endoscopic diagnosis of gastric adenocarcinoma underwent CT examination. A hypotonic drug was administered, and the gastric walls were distended by the ingestion of 400-600 ml of water. A biphasic technique with 40-s and 70-s delays was used after intravenous contrast material injection. All patients underwent surgery, and preoperative and histological stagings were compared. The diagnostic accuracy of T staging was 68% for axial images and 94% for VP reconstructions. In the T1, T2, T3 and T4 parameter evaluation, diagnostic accuracy values were 87%, 73.5%, 81% and 96%, respectively, for axial images and 96%, 96%, 98% and 100%, respectively, for VP reconstructions. MDCT is an accurate technique for the preoperative staging of gastric cancer. The VP reconstructions obtained from isotropic data allow evaluation of the T parameter with higher accuracy.
Hobo, Fumio; Takahashi, Masato; Saito, Yuta; Sato, Naoki; Takao, Tomoaki; Koshiba, Seizo; Maeda, Hideaki
2010-05-01
³³S nuclear magnetic resonance (NMR) spectroscopy is limited by inherently low NMR sensitivity because of the quadrupolar moment and low gyromagnetic ratio of the ³³S nucleus. We have developed a 10 mm ³³S cryogenic NMR probe, which is operated at 9–26 K with a cold preamplifier and a cold rf switch operated at 60 K. The ³³S NMR sensitivity of the cryogenic probe is as large as 9.8 times that of a conventional 5 mm broadband NMR probe. The ³³S cryogenic probe was applied to biological samples such as human urine, bile, chondroitin sulfate, and scallop tissue. We demonstrated that the system can detect and determine sulfur compounds having SO₄²⁻ anions and -SO₃⁻ groups using the ³³S cryogenic probe, as the ³³S nuclei in these groups are in highly symmetric environments. The NMR signals for other common sulfur compounds such as cysteine are still undetectable by the ³³S cryogenic probe, as the ³³S nuclei in these compounds are in asymmetric environments. If we shorten the rf pulse width or decrease the rf coil diameter, we should be able to detect the NMR signals for these compounds.
Beenakker, EAC; van der Hoeven, JH; Fock, JM; Maurits, NM
2001-01-01
Since muscle force and functional ability are not linearly related, maximum force can be reduced while functional ability is still maintained. For diagnostic and therapeutic reasons, loss of muscle force should be detected as early and as accurately as possible. Because of growth factors, maximum muscle
Piao, Daqing; Holyoak, G Reed; Patel, Sanjay
2016-01-01
We demonstrate a laparoscopic applicator probe, and an associated method, for real-time en-face topographic mapping of near-surface heterogeneity, for potential use in intraoperative margin assessment during minimally invasive oncological procedures. The probe fits in a 12 mm port and houses up to 128 copper-coated 750 µm fibers that form radially alternating illumination (70 fibers) and detection (58 fibers) channels. By simultaneously illuminating the 70 source channels of the probe in contact with a scattering medium and concurrently measuring the light diffusely propagated to the 58 detector channels, the presence of near-surface optical heterogeneities can be resolved in an en-face 9.5 mm field of view in real time. Visualization of a subsurface margin of strong attenuation contrast at a depth of up to 3 mm is demonstrated at one wavelength at a frame rate of 1.25 Hz.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Berretta, Ana Lucia Olmedo
1999-07-01
Hydraulic conductivity is one of the most important parameters for understanding the movement of water in the unsaturated zone. Reliable estimates are difficult to obtain, since hydraulic conductivity is highly variable. This study was carried out at the 'Escola Superior de Agricultura Luiz de Queiroz', Universidade de Sao Paulo, in a Kandiudalfic Eutrudox soil. The hydraulic conductivity was determined by a direct and an indirect method. The instantaneous profile method was used, and the hydraulic conductivity as a function of soil water content was determined by solving the Richards equation. In the direct method, tensiometers were used to estimate the total soil water potential, and a neutron probe and the soil water retention curve were used to estimate soil water content. The neutron probe proved not to be adequately sensitive to changes of soil water content in this soil. Although the soil retention curve provided the best correlation values for soil water content as a function of water redistribution time, the soil water content did not vary much down to a depth of 50 cm, reflecting the influence of the presence of a Bt horizon. The soil retention curve was well fitted by the van Genuchten model, used as the indirect method. The van Genuchten values and the experimental relative hydraulic conductivity obtained by the instantaneous profile method showed a good correlation. However, the values estimated by the model were always lower than those obtained experimentally. (author)
Van Der Pol, Barbara; Williams, James A; Taylor, Stephanie N; Cammarata, Catherine L; Rivers, Charles A; Body, Barbara A; Nye, Melinda; Fuller, Deanna; Schwebke, Jane R; Barnes, Mathilda; Gaydos, Charlotte A
2014-03-01
Trichomonas vaginalis is the most prevalent nonviral sexually transmitted infection worldwide, and improved diagnostic methods are critical for controlling this pathogen. Diagnostic assays that can be used in conjunction with routine chlamydia/gonorrhea nucleic acid-based screening are likely to have the most impact on disease control. Here we describe the performance of the new BD T. vaginalis Qx (TVQ) amplified DNA assay, which can be performed on the automated BD Viper system. We focus on data from vaginal swab samples, since this is the specimen type routinely used for traditional trichomonas testing and the recommended specimen type for chlamydia/gonorrhea screening. Vaginal swabs were obtained from women attending sexually transmitted disease or family planning clinics at 7 sites. Patient-collected vaginal swabs were tested by the TVQ assay, and the Aptima T. vaginalis (ATV) assay was performed using clinician-collected vaginal swabs. Additional clinician-collected vaginal swabs were used for the wet mount and culture methods. Analyses included comparisons versus the patient infection status (PIS) defined by positive results with the wet mount method or culture, direct comparisons assessed with κ scores, and latent class analysis (LCA) as an unbiased estimator of test accuracy. Data from 838 women, 116 of whom were infected with T. vaginalis, were analyzed. The TVQ assay sensitivity and specificity estimates based on the PIS were 98.3% and 99.0%, respectively. The TVQ assay was similar to the ATV assay (κ = 0.938) in direct analysis. LCA estimated the TVQ sensitivity and specificity as 98.3% and 99.6%, respectively. The TVQ assay performed well using self-collected vaginal swabs, the optimal sample type, as recommended by the CDC for chlamydia/gonorrhea screening among women.
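The agreement statistics quoted above can be reproduced from confusion-matrix counts. The 2×2 counts below are hypothetical, chosen only to be consistent with 116 infected among 838 women; the paper's raw counts are not restated here.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for agreement between two tests:
    a = both positive, b = only test 1 positive,
    c = only test 2 positive, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts consistent with 116 infected among 838 women
sens, spec = sens_spec(tp=114, fn=2, tn=715, fp=7)
```

With these counts, 114/116 and 715/722 round to the reported 98.3% and 99.0%.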
Takahashi, Tsuyoshi; Kawano, Yoichi; Makiyama, Kozo; Shiba, Shoichi; Sato, Masaru; Nakasha, Yasuhiro; Hara, Naoki
2017-02-01
A maximum frequency of oscillation (f_max) of 1.3 THz was achieved using an extended drain-side recess structure in InAlAs/InGaAs high-electron-mobility transistors (HEMTs), although the gate length was relatively long at 75 nm. The high f_max resulted from reducing the drain output conductance (g_d). The use of an asymmetric gate recess structure and double-side doping above and below the channel region were effective in reducing g_d. Further improvements in transconductance (g_m) and g_d were achieved by reducing the distance between the source and gate electrodes.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments, it is desirable to record the full 3D photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Luciana da Silva Barberena
2008-06-01
PURPOSE: To analyze the generalization based on the implicational relationships obtained by the "ABAB Withdrawal and Multiple Probes Model" in children with different severity degrees of phonological deviation. METHODS: Speech data of eight subjects with phonological deviations were analyzed. Evaluations of receptive and expressive language, the oral sensory-motor system, psychomotricity, hearing discrimination and phonology were carried out, in addition to audiological, otorhinolaryngological and neurological evaluations. Next, the severity degree of the phonological deviation was determined, and treatment was initiated using the "ABAB Withdrawal and Multiple Probes Model". RESULTS: Generalizations based on implicational relationships were observed. CONCLUSIONS: The "ABAB Withdrawal and Multiple Probes Model" was effective in the treatment of subjects with phonological deviation. The generalization based on implicational relationships agreed, in part, with the Implicational Model of Feature Complexity (MICT).
2013-01-01
Mobile probing is a method developed for learning about digital work situations, as an approach to discovering new grounds. The method can be used when there is a need to know more about users and their work with certain tasks, but where users at the same time are distributed in time and space. Mobile probing was inspired by the cultural probe method, and was influenced by qualitative interview and inquiry approaches. The method has been used in two subsequent projects, involving school children (young adults at 15-17 years old) and employees (adults) in a consultancy company. Findings point to mobile probing being a flexible method for uncovering the unknowns, as a way of getting rich data for the analysis and design phases. On the other hand, it is difficult to engage users to give in-depth explanations, which seems easier in synchronous dialogs (whether online or face2face). The development...
2012-01-01
Mobile probing is a method which has been developed for learning about digital work situations, as an approach to discovering new grounds. The method can be used when there is a need to know more about users and their work with certain tasks, but where users at the same time are distributed in time and space. Mobile probing was inspired by the cultural probe method, and was influenced by qualitative interview and inquiry approaches. The method has been used in two subsequent projects, involving school children (young adults at 15-17 years old) and employees (adults) in a consultancy company. Findings point to mobile probing being a flexible method for uncovering the unknowns, as a way of getting rich data for the analysis and design phases. On the other hand, it is difficult to engage users to give in-depth explanations, which seems easier in synchronous dialogs (whether online or face2face)...
A. Z. Destefani
2011-12-01
The use of industrial aggregates has grown over the years to meet the great demand of civil construction due to the country's economic growth. The aim of this work was to use the Simplex Lattice experimental design to evaluate the effect of the addition of ornamental rock waste as filler in the composition of ternary mixtures (crushed rock 0, stone powder, rock waste), leading to maximum compaction (maximum apparent dry density). Sixteen experimental points were taken, whose contents of the used materials ranged from 0 to 100%. The complete cubic simplex model showed the best fit to the experimental results, yielding statistically more adequate responses for the studied compositions. The generated response surface indicated that the maximum apparent dry density of 2.0 g/cm³ was obtained for the ternary composition of 63% crushed rock 0, 17% stone powder and 20% ornamental rock waste. Therefore, the use of ornamental rock waste as filler in aggregates for civil construction can be a viable, environmentally sound alternative for the final disposal of this abundant waste.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
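A minimal sketch of the MAF step for a regularly sampled multivariate series (the paper handles irregular spatial sampling, which this sketch does not): factors come from the generalized eigenproblem of the difference covariance against the data covariance. Function names and the simulated data are ours.

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors of an (n_samples, n_vars) array X.
    Solves the generalized eigenproblem S_d w = lam * S w, where S is the
    covariance of X and S_d the covariance of one-step differences;
    small eigenvalues correspond to high autocorrelation."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)
    L = np.linalg.cholesky(S)            # whiten with S = L L^T
    A = np.linalg.solve(L, Sd)           # L^-1 S_d
    M = np.linalg.solve(L, A.T)          # L^-1 S_d L^-T (symmetric)
    lam, V = np.linalg.eigh(M)           # eigenvalues in ascending order
    W = np.linalg.solve(L.T, V)          # back-transform factor loadings
    return lam, W                        # first column: max autocorrelation

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.standard_normal(500))   # spatially coherent signal
noise = rng.standard_normal(500)
X = np.column_stack([smooth + noise, smooth - noise,
                     rng.standard_normal(500)])
lam, W = maf(X)
```

The factor autocorrelation is 1 - lam/2, which is why the smallest eigenvalue yields the most autocorrelated (most "spatially structured") factor.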
Gemelli, Marcellino; Abelmann, Leon; Engelen, Johan B.C.; Khatib, Mohammed G.; Koelmans, Wabe W.; Zaboronski, Oleg; Campardo, Giovanni; Tiziani, Federico; Iaculo, Massimo
2011-01-01
This chapter gives an overview of probe-based data storage research over the last three decades, encompassing all aspects of a probe recording system. Following the division found in all mechanically addressed storage systems, the different subsystems (media, read/write heads, positioning, data channel) are covered.
Madsen, Jacob Østergaard
2016-01-01
The aim of this study was thus to explore cultural probes (Gaver, Boucher et al. 2004) as a possible methodical approach supporting knowledge production on situated and contextual aspects of occupation.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
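For uniformly random stored binary patterns observed through bit erasures, the maximum likelihood retrieval rule reduces to returning the stored pattern that agrees best on the observed positions. A toy sketch under that assumption (names and data are ours, not the paper's network construction):

```python
def ml_retrieve(memory, partial):
    """Maximum likelihood retrieval from a toy associative memory.
    partial is a tuple of 0/1/None, where None marks an erased bit;
    ML picks the stored pattern with the fewest mismatches on the
    observed (non-erased) positions."""
    def mismatches(pattern):
        return sum(1 for p, q in zip(pattern, partial)
                   if q is not None and p != q)
    return min(memory, key=mismatches)

memory = [(0, 1, 1, 0, 1), (1, 1, 0, 0, 0), (0, 0, 1, 1, 1)]
probe = (0, 1, None, None, 1)        # only three bits observed
best = ml_retrieve(memory, probe)
```

Brute-force matching like this is exponential in memory size; the point of the structures studied in the paper is to approach ML accuracy at far lower retrieval cost.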
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
João Alfredo Braida
2006-08-01
The susceptibility of soils to compaction, measured by the Proctor test, decreases with increasing soil organic matter (SOM) content. For a given energy level, with increasing SOM contents the maximum obtained density decreases and the corresponding critical moisture content increases. Due to its low density, elasticity and deformation susceptibility, straw is potentially able to dissipate applied loads. This study was conducted to evaluate the SOM effect on the soil compaction curve and to evaluate the ability of mulch to absorb compactive energy in the Proctor test. The compaction test was carried out using soil surface samples (0 to 0.05 m) of a Hapludalf with sandy loam texture at the soil surface, and an Oxisol with clayey texture at the soil surface, both with variations in SOM content. The maximum density, the critical moisture content, the liquid and plastic limits, and the soil organic carbon content were determined. A second test was performed to evaluate the ability of mulch to absorb compactive energy, by compacting Hapludalf samples with a straw layer on the soil surface, inside a Proctor cylinder, at amounts corresponding to 2, 4, 8 and 12 Mg ha-1. SOM accumulation reduced the maximum density and increased the critical moisture content, suggesting an increased resistance to soil compaction. In the Proctor test the straw on the soil surface dissipated up to 30% of the compactive energy and reduced the bulk density, confirming the hypothesis that mulch can absorb part of the compactive energy caused by machine traffic and by animals.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
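The Mean Energy Model instance mentioned above can be sketched numerically for a finite state space: the entropy maximizer under a mean-energy constraint has Gibbs form, and the Lagrange multiplier can be found by bisection. Function and parameter names below are ours, for illustration only.

```python
import math

def maxent_mean_energy(E, target, lo=-50.0, hi=50.0, iters=200):
    """Maximum entropy distribution over finite states with energies E,
    subject to the moment constraint sum_i p_i * E_i = target.
    The maximizer has the Gibbs form p_i ~ exp(-beta * E_i); beta is
    found by bisection, since mean energy decreases monotonically in beta."""
    def mean(beta):
        w = [math.exp(-beta * e) for e in E]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, E)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target:
            lo = mid          # mean too high: increase beta
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_mean_energy([0.0, 1.0, 2.0], target=0.5)
```

With the target equal to the unconstrained mean (here 1.0), the solver returns the uniform distribution, the global entropy maximum, as expected.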
2016-01-01
A project investigating the effectiveness of a collection of online resources for teachers' professional development used mobile probes as a data collection method. Teachers received questions and tasks on their mobiles in a dialogic manner while in their everyday context, as opposed to in an interview. This method provided valuable insight into contextual use, i.e. how the online resource transferred to the work practice. However, the research team also found that mobile probes may provide the scaffolding necessary for individual and peer learning at a very local (intra-school) community level. This paper is an initial investigation of how the mobile probes process proved to engage teachers in their efforts to improve teaching. It also highlights some of the barriers emerging when applying mobile probes as a scaffold for learning.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
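The alternating optimization described above can be illustrated with a toy half-quadratic sketch for a scalar linear predictor: each round fixes Gaussian-kernel sample weights, then solves a weighted ridge problem, so outliers get near-zero weight. This is our illustrative reduction, not the paper's multi-class algorithm; all names and parameter values are hypothetical.

```python
import math

def mcc_fit(xs, ys, sigma=1.0, lam=1e-3, iters=30):
    """Half-quadratic sketch of correntropy maximization for y ~ w*x + b.
    Alternates: (1) fix sample weights theta_i = exp(-r_i^2 / (2 sigma^2))
    from the residuals r_i; (2) solve the weighted ridge problem for (w, b).
    Outlying samples receive near-zero weight and barely affect the fit."""
    w = b = 0.0
    for _ in range(iters):
        th = [math.exp(-((y - w * x - b) ** 2) / (2 * sigma ** 2))
              for x, y in zip(xs, ys)]
        s = sum(th)
        sx = sum(t * x for t, x in zip(th, xs))
        sxx = sum(t * x * x for t, x in zip(th, xs))
        sy = sum(t * y for t, y in zip(th, ys))
        sxy = sum(t * x * y for t, x, y in zip(th, xs, ys))
        # Normal equations of the weighted ridge problem (2x2 solve)
        det = (sxx + lam) * (s + lam) - sx * sx
        w = ((s + lam) * sxy - sx * sy) / det
        b = ((sxx + lam) * sy - sx * sxy) / det
    return w, b

# Line y = 2x with one gross outlier; plain least squares would be pulled away
xs = [i / 10 for i in range(10)]
ys = [2 * x for x in xs]
ys[5] = 10.0
w, b = mcc_fit(xs, ys)
```

The outlier's residual is large, so its Gaussian weight is essentially zero and the recovered slope stays close to 2.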
2008-01-01
The Thermal and Electrical Conductivity Probe (TECP) for NASA's Phoenix Mars Lander took measurements in Martian soil and in the air. The needles on the end of the instrument were inserted into the Martian soil, allowing TECP to measure the propagation of both thermal and electrical energy. TECP also measured the humidity in the surrounding air. The needles on the probe are 15 millimeters (0.6 inch) long. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
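The maximum likelihood benchmark that near-ML detectors approximate can be sketched by brute force for a short block and a hypothetical two-tap ISI channel (channel taps and data below are ours, for illustration only):

```python
from itertools import product

def ml_detect(y, h):
    """Brute-force maximum likelihood sequence detection for a short
    block over an ISI channel y_k = sum_j h_j * x_{k-j} + noise, with
    symbols x in {-1, +1}: pick the sequence minimizing squared error.
    Cost is exponential in block length; near-ML detectors trade a
    little accuracy for a tractable search."""
    n = len(y)
    best, best_cost = None, float("inf")
    for x in product((-1, 1), repeat=n):
        cost = 0.0
        for k in range(n):
            s = sum(h[j] * x[k - j] for j in range(len(h)) if k - j >= 0)
            cost += (y[k] - s) ** 2
        if cost < best_cost:
            best, best_cost = x, cost
    return best

h = [1.0, 0.5]                       # hypothetical two-tap ISI channel
x_true = (1, -1, -1, 1, 1, -1)
y = [sum(h[j] * x_true[k - j] for j in range(len(h)) if k - j >= 0)
     for k in range(len(x_true))]
detected = ml_detect(y, h)
```

In the noiseless case the search recovers the transmitted sequence exactly, since a causal channel with a nonzero leading tap is invertible.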
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
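The idea of pushing constraint uncertainty through the MaxEnt map can be sketched in the simplest case: a binary variable with constraint E[X] = m, where classic MaxEnt returns the point probability p(X=1) = m because the constraint fully determines the distribution. The Monte Carlo below is our illustration, not the paper's explicit calculation, and all values are made up.

```python
import random
import statistics

# Binary X with uncertain constraint E[X] = m. Classic MaxEnt maps m to
# the point probability p(X=1) = m; if m is only known up to Gaussian
# uncertainty, pushing samples of m through the MaxEnt map yields a
# density over p(X=1) instead of a single point value.
random.seed(0)
m0, s = 0.3, 0.05                     # hypothetical estimate and its spread
m_samples = [min(1.0, max(0.0, random.gauss(m0, s))) for _ in range(10000)]
p1_samples = m_samples                # the MaxEnt map is the identity here
center = statistics.mean(p1_samples)
spread = statistics.stdev(p1_samples)
```

For richer state spaces the MaxEnt map is nonlinear in the constraint values, so the induced density over the probabilities is generally not Gaussian even when the constraint uncertainty is.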
Chant, Donald A.
This book is written as a statement of concern about pollution by members of Pollution Probe, a citizens' anti-pollution group in Canada. Its purpose is to create public awareness and pressure for the eventual solution to pollution problems. The need for effective government policies to control the population explosion, conserve natural resources,…
Martyn, Michael; O'Shea, Tuathan; Harris, Emma; Bamber, Jeffrey; Gilroy, Stephen; Foley, Mark J.
2016-04-01
The aim of this study was to quantify the dosimetric effect of the Autoscan™ ultrasound probe, a 3D transperineal probe used for real-time tissue tracking during the delivery of radiotherapy. CT images of an anthropomorphic phantom, with and without the probe placed in contact with its surface, were obtained (0.75 mm slice width, 140 kVp). The CT datasets were used for relative dose calculation in Monte Carlo simulations of a 7-field plan delivered to the phantom. The Monte Carlo software packages BEAMnrc and DOSXYZnrc were used for this purpose. A number of simulations, which varied the distance of the radiation field edge from the probe face (0 mm to 5 mm), were performed. Perineal surface doses as a function of distance from the radiation field edge, with and without the probe in place, were compared. The presence of the probe was found to result in an increase in perineal surface dose, relative to the maximum dose. The maximum increase in surface dose was 18.15%, at a probe face to field edge distance of 0 mm. However, increases in surface dose fall off rapidly as this distance increases, with doses agreeing within Monte Carlo simulation uncertainty at distances ≥ 5 mm. Using data from three patient volunteers, a typical probe face to field edge distance was calculated to be ≈20 mm. Our results therefore indicate that the presence of the probe is unlikely to adversely affect a typical patient treatment, since the dosimetric effect of the probe is minimal at these distances.
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.
Characterization of near-field optical probes
Vohnsen, Brian; Bozhevolnyi, Sergey I.
1999-01-01
Radiation and collection characteristics of four different near-field optical-fiber probes, namely three uncoated probes and an aluminium-coated small-aperture probe, are investigated and compared. Their radiation properties are characterized by observation of light-induced topography changes in a photo-sensitive film illuminated with the probes, and it is confirmed that the radiated optical field is unambiguously confined only for the coated probe. Near-field optical imaging of a standing evanescent-wave pattern is used to compare the detection characteristics of the probes, and it is concluded that, for the imaging of optical-field intensity distributions containing predominantly evanescent-wave components, a sharp uncoated tip is the probe of choice. Complementary results obtained with optical phase-conjugation experiments with the uncoated probes are discussed in relation to the probe...
Zhang, Xiaorong [Institute of Optics and Electronics, Chinese Academy of Sciences and Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Li, Bincheng [Institute of Optics and Electronics, Chinese Academy of Sciences and Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209 (China)
2015-02-15
Surface thermal lens is a highly sensitive photothermal technique to measure low absorption losses of various solid materials. In such applications, the sensitivity of surface thermal lens is a key parameter for measuring extremely low absorption. In this paper, we experimentally investigated the influence of probe beam wavelength on the sensitivity of surface thermal lens for measuring the low absorptance of optical laser components. Three probe lasers with wavelength 375 nm, 633 nm, and 1570 nm were used, respectively, to detect the surface thermal lens amplitude of a highly reflective coating sample excited by a cw modulated Gaussian beam at 1064 nm. The experimental results showed that the maximum amplitude of surface thermal lens signal obtained at corresponding optimized detection distance was inversely proportional to the wavelength of the probe beam, as predicted by previous theoretical model. The sensitivity of surface thermal lens could, therefore, be improved by detecting surface thermal lens signal with a short-wavelength probe beam.
Sexual dimorphism of maximum femoral length
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
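The demarking-point analysis above amounts to a simple threshold rule. A minimal sketch for the right femur, using the study's reported cut-offs; the function name and return labels are illustrative, not from the paper:

```python
def classify_right_femur(max_length: float) -> str:
    """Classify a right femur by maximum length using the reported
    demarking points: > 476.70 definitely male, < 379.99 definitely female;
    values in between cannot be assigned with certainty."""
    if max_length > 476.70:
        return "male"
    if max_length < 379.99:
        return "female"
    return "indeterminate"

print(classify_right_femur(480.0))  # prints "male"
print(classify_right_femur(375.0))  # prints "female"
print(classify_right_femur(450.0))  # prints "indeterminate"
```

Note that most bones fall in the indeterminate band, which is why the study reports identification rates of only 4-13%.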
Soil Properties from Low-Velocity Probe Penetration
Jerome B. Johnson
2008-01-01
Full Text Available A physical model of low-velocity probe penetration is developed to characterize soil by type, strength, maximum compaction, and initial density, using Newton's second law to describe the processes controlling probe momentum loss. The probe loses momentum by causing soil failure (strength), by accelerating and compacting soil around the probe (inertia), and through frictional sliding at the probe/soil interface (friction). The influences of probe geometry, mass, and impact velocity are incorporated into the model. Model predictions of probe deceleration history and depth of penetration agree well with experiments, without the need for free variables or complex numerical simulations.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), whereas the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Khusenov, Arslonnazar; Rakhmanberdiev, Gappar; Rakhimov, Dilshod; Khalikov, Muzaffar
2014-01-01
In this article, the inulin ester inulin acetate was obtained for the first time by esterification of inulin with acetic anhydride. The obtained product was studied using elemental analysis and IR spectroscopy.
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Larsen, Jakob Eg; Sørensen, Lene Tolstrup; Sørensen, J.K.
2007-01-01
Mobile Probing Kit is a low tech and low cost methodology for obtaining inspiration and insights into user needs, requirements and ideas in the early phases of a system's development process. The methodology is developed to identify user needs, requirements and ideas among knowledge workers characterized as being highly nomadic and thus potential users of mobile and ubiquitous technologies. The methodology has been applied in the IST MAGNET Beyond project in order to obtain user needs and requirements in the process of developing pilot services. We report on the initial findings from applying...
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Productivity response of calcareous nannoplankton to Eocene Thermal Maximum 2 (ETM2)
M. Dedert
2012-05-01
Full Text Available The Early Eocene Thermal Maximum 2 (ETM2) at ~53.7 Ma is one of multiple hyperthermal events that followed the Paleocene-Eocene Thermal Maximum (PETM, ~56 Ma). The negative carbon excursion and deep ocean carbonate dissolution which occurred during the event imply that a substantial amount (10^3 Gt) of carbon (C) was added to the ocean-atmosphere system, consequently increasing atmospheric CO2 (pCO2). This makes the event relevant to the current scenario of anthropogenic CO2 additions and global change. Resulting changes in ocean stratification and pH, as well as changes in exogenic cycles which supply nutrients to the ocean, may have affected the productivity of marine phytoplankton, especially calcifying phytoplankton. Changes in productivity, in turn, may affect the rate of sequestration of excess CO2 in the deep ocean and sediments. In order to reconstruct the productivity response of calcareous nannoplankton to ETM2 in the South Atlantic (Site 1265) and North Pacific (Site 1209), we employ the coccolith Sr/Ca productivity proxy, with analysis of well-preserved picked monogeneric populations by ion probe, supplemented by analysis of various size fractions of nannofossil sediments by ICP-AES. The former technique of measuring Sr/Ca in selected nannofossil populations using the ion probe circumvents possible contamination with secondary calcite. Avoiding such contamination is important for an accurate interpretation of the nannoplankton productivity record, since diagenetic processes can bias the productivity signal, as we demonstrate for Sr/Ca measurements in the fine (<20 μm) and other size fractions obtained from bulk sediments from Site 1265. At this site, the paleoproductivity signal as reconstructed from the Sr/Ca appears to be governed by cyclic changes, possibly orbital forcing, resulting in a 20–30% variability in Sr/Ca in dominant genera as obtained by ion probe. The ~13 to 21
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Federal Laboratory Consortium — The Proximal Probes Facility consists of laboratories for microscopy, spectroscopy, and probing of nanostructured materials and their functional properties. At the...
Laget, M
2007-10-15
While the existence of an island of stability beyond Z=110 is theoretically established, the location of this island ranges from Z=114 to Z=126 depending on the model. In this work, the stability of super-heavy nuclei is probed through the study of their fission time. The chosen experimental method, the crystal blocking method, is sensitive to the presence of possible long-time components in the fission time distribution, which indicate a fission mechanism occurring after the formation of a compound nucleus. Blocking dips were therefore constituted for the various products of the reaction 238U + Ni (6.6 MeV/A) → 120, the experimental set-up allowing us to clearly identify and select the reaction mechanisms. The comparison of the blocking dip constituted for quasi-elastic scattering events with the one obtained for the fission fragments of a Z=120 nucleus, combined with the study of the kinematical properties of these fission fragments, gives evidence of the existence of very long fission times (> 10^-18 s), only compatible with a fusion-fission mechanism implying a non-vanishing fission barrier height for Z=120. The second part outlines microscopic calculations of fission barrier heights, carried out in the framework of the finite temperature Hartree-Fock-Bogoliubov (HFB) theory. Because of the progressive vanishing of the pairing correlations with T, which happens differently at the ground state and at the top of the barrier, B_f first grows until T ≈ 0.8 MeV before dropping with T owing to the damping of shell effects with temperature. (author)
[Erythromycin ethylsuccinate obtaining possibilities].
Stan, Cătălina Daniela; Stefanache, Alina; Tântaru, Gladiola; Poiată, Antonia; Dumitrache, M; Diaconu, D E; Profire, Lenuţa
2008-01-01
In this study we tried to improve the obtaining of erythromycin ethylsuccinate, with the aim of separating the erythromycin ester by crystallization in water. The erythromycin acylation and the erythromycin ethylsuccinate crystallization were realized following these steps: 1. acylation of erythromycin with a methylene chloride solution of monoethylsuccinyl chloride, at 25-28 degrees C for 3 hours in the presence of NaHCO3; 2. transfer of the erythromycin ethylsuccinate from methylene chloride solution to acetone solution by distillation of the methylene chloride:acetone 1:1 mixture at 25-28 degrees C; 3. separation of erythromycin ethylsuccinate by crystallization in water at pH = 8-8.5 and 5 degrees C for 90 minutes. The quality control of the erythromycin ester was performed according to the Xth edition of the Romanian Pharmacopoeia standards, using the national standard for erythromycin ethylsuccinate and the national standard for erythromycin with an activity of 1:937 U and 2.02% humidity. Micrococcus luteus ATCC 9341 was used as the test microorganism and thin layer chromatography was performed for qualitative control. 13.1 g of erythromycin ethylsuccinate were obtained, with a process yield of 82.02%. Using water for the separation of erythromycin ethylsuccinate, the yield of the process is greater (82.02%) than when using petroleum ether (74.14%) or hexane (80.25%). Thin layer chromatography revealed an Rf = 0.56, and the microbiological activity of the erythromycin ethylsuccinate was 98.7% compared with the standard. Using water instead of hexane or petroleum ether is advantageous for the separation of erythromycin ethylsuccinate from the reaction medium. The obtained erythromycin ethylsuccinate corresponds to the Xth edition of the Romanian Pharmacopoeia standards. Thus, the raw materials consumption is decreased, the costs are cut down, the purity of the obtained product is high and the yield of the process is greater.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Multi-point probe for testing electrical properties and a method of producing a multi-point probe
2011-01-01
A multi-point probe for testing the electrical properties of a number of specific locations on a test sample comprises a supporting body defining a first surface and a first multitude of conductive probe arms (101-101'''), each probe arm defining a proximal end and a distal end. The probe arms are connected to the supporting body (105) at the proximal ends, and the distal ends extend freely from the supporting body, giving individually flexible motion to the probe arms. Each of the probe arms defines a maximum width perpendicular to its perpendicular bisector and parallel with its line of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...
Schmitz, Roger William; Oh, Yunje
2016-10-25
A heating assembly configured for use in mechanical testing at a scale of microns or less. The heating assembly includes a probe tip assembly configured for coupling with a transducer of the mechanical testing system. The probe tip assembly includes a probe tip heater system having a heating element, a probe tip coupled with the probe tip heater system, and a heater socket assembly. The heater socket assembly, in one example, includes a yoke and a heater interface that form a socket within the heater socket assembly. The probe tip heater system, coupled with the probe tip, is slidably received and clamped within the socket.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
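The single-constraint argument can be written out explicitly. The following is a standard Lagrange-multiplier sketch of maximizing the Shannon entropy subject to normalization and a fixed mean logarithm, not a reproduction of the paper's own derivation:

```latex
% Maximize S[p] = -\sum_x p(x)\ln p(x)
% subject to \sum_x p(x) = 1 and \sum_x p(x)\ln x = \chi.
\mathcal{L}[p] = -\sum_x p(x)\ln p(x)
  + \mu\Big(\sum_x p(x)-1\Big)
  + \nu\Big(\sum_x p(x)\ln x - \chi\Big)
% Stationarity in each p(x):
%   -\ln p(x) - 1 + \mu + \nu\ln x = 0
% hence the power law
p(x) = Z^{-1}\, x^{-\alpha}, \qquad \alpha = -\nu,\quad Z = e^{1-\mu}.
```

The exponent α is fixed by the constraint value χ, so a single constraint on ⟨ln x⟩ suffices to produce Zipf-like behavior.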
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Scanning microscopic four-point conductivity probes
Petersen, Christian Leth; Hansen, Torben Mikael; Bøggild, Peter
2002-01-01
A method for fabricating microscopic four-point probes is presented. The method uses silicon-based microfabrication technology involving only two patterning steps. The last step in the fabrication process is an unmasked deposition of the conducting probe material, and it is thus possible to select the conducting material either for a silicon wafer or a single probe unit. Using shadow masking photolithography, an electrode spacing (pitch) down to 1.1 μm was obtained, with cantilever separation down to 200 nm. Characterisation measurements have shown the microscopic probes to be mechanically very flexible...
The theory of Langmuir probes in strong electrostatic potential structures
Borovsky, J. E.
1986-01-01
The operation of collecting and emitting Langmuir probes and double probes within time-stationary strong electrostatic potential structures is analyzed. The cross sections of spherical and cylindrical probes to charged particles within the structures are presented and used to obtain the current-voltage characteristics of idealized probes. The acquisition of plasma parameters from these characteristics is outlined, and the operation of idealized floating double-probe systems is analyzed. Probe surface effects are added to the idealized theory, and some surface effects pertinent to spacecraft probes are quantified. Magnetic field effects on idealized probes are examined, and the time required for floating probes to change their potentials by collecting charge and by emitting photoelectrons is discussed. Calculations on the space-charge effects of probe-perturbed beams and on the space-charge limiting of electron emission are given in an appendix.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for 3-regular graphs, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Radiation damping in microcoil NMR probes.
Krishnan, V V
2006-04-01
Radiation damping arises from the field induced in the receiver coil by a large bulk magnetization and tends to selectively drive this magnetization back to equilibrium much faster than relaxation processes do. The demand for increased sensitivity in mass-limited samples has led to the development of microcoil NMR probes that are capable of obtaining high quality NMR spectra with small sample volumes (nL-μL). Microcoil probes are optimized to increase sensitivity by increasing either the sample-to-coil ratio (filling factor) of the probe or the quality factor of the detection coil. Though radiation damping effects have been studied in standard NMR probes, these effects have not been measured in microcoil probes. Here a systematic evaluation of radiation damping effects in a microcoil NMR probe is presented and the results are compared with similar measurements in conventional large-volume samples. These results show that radiation-damping effects in microcoil probes are much more pronounced than in 5 mm probes, and that it is critically important to optimize NMR experiments to minimize these effects. As microcoil probes provide better control of the bulk magnetization, with good RF and B0 homogeneity, in addition to negligible dipolar field effects due to nearly spherical sample volumes, these probes can be used exclusively to study the complex behavior of radiation damping.
Thurnheer Thomas
2011-01-01
Full Text Available Abstract Background The purpose of this study was to design and evaluate fluorescent in situ hybridization (FISH) probes for the single-cell detection and enumeration of lactic acid bacteria, in particular organisms belonging to the major phylogenetic groups and species of oral lactobacilli and to Abiotrophia/Granulicatella. Results As lactobacilli are known for their notorious resistance to probe penetration, probe-specific assay protocols were experimentally developed to provide maximum cell wall permeability, probe accessibility, hybridization stringency, and fluorescence intensity. The new assays were then applied in a pilot study to three biofilm samples harvested from variably demineralized bovine enamel discs that had been carried in situ for 10 days by different volunteers. Best probe penetration and fluorescent labeling of reference strains were obtained after combined lysozyme and achromopeptidase treatment followed by exposure to lipase. Hybridization stringency had to be established strictly for each probe. Thereafter all probes showed the expected specificity with reference strains and labeled the anticipated morphotypes in dental plaques. Applied to in situ grown biofilms, the set of probes detected only Lactobacillus fermentum and bacteria of the Lactobacillus casei group. The most cariogenic biofilm contained two orders of magnitude higher L. fermentum cell numbers than the other biofilms. Abiotrophia/Granulicatella and streptococci from the mitis group were found in all samples at high levels, whereas Streptococcus mutans was detected in only one sample in very low numbers. Conclusions Application of these new group- and species-specific FISH probes to oral biofilm-forming lactic acid bacteria will allow a clearer understanding of the supragingival biome, its spatial architecture and of structure-function relationships implicated during plaque homeostasis and caries development. The probes should prove of value far beyond the field of
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers, fitting line models using the maximum likelihood technique with a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability that a model of the emission line spectrum represents the measured spectrum. The likelihood function is used as the criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived that provides an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
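The Poisson-likelihood approach described above can be sketched in a few lines. This is not the CORA implementation: it uses a generic bounded optimizer instead of CORA's fixed-point equation, all numbers (channel count, line center, background level) are made-up, and the background is assumed known rather than fitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def poisson_loglike(counts, model):
    # log L = sum(n_i * log(m_i) - m_i), dropping the constant log(n_i!) term
    model = np.clip(model, 1e-12, None)
    return np.sum(counts * np.log(model) - model)

def fit_line_flux(counts, channels, center, sigma, background):
    """Fit the total counts of a Gaussian emission line on a flat background
    by maximizing the Poisson likelihood (C-statistic style)."""
    profile = np.exp(-0.5 * ((channels - center) / sigma) ** 2)
    profile = profile / profile.sum()  # unit-normalized line profile per channel

    def neg_loglike(amplitude):
        return -poisson_loglike(counts, background + amplitude * profile)

    res = minimize_scalar(neg_loglike, bounds=(0.0, 2.0 * counts.sum() + 10.0),
                          method="bounded")
    return res.x  # best-fit line flux in counts

# Simulated low-count spectrum: 3 counts/channel background plus a 40-count line
rng = np.random.default_rng(0)
channels = np.arange(50)
line_shape = np.exp(-0.5 * ((channels - 25) / 2.0) ** 2)
true_model = 3.0 + 40.0 * line_shape / line_shape.sum()
counts = rng.poisson(true_model)
flux = fit_line_flux(counts, channels, center=25, sigma=2.0, background=3.0)
print(round(flux, 1))
```

At these count levels the fitted flux scatters around the true 40 counts with a Poisson-dominated uncertainty, which is exactly the regime where a Gaussian (chi-square) fit would be biased.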
Probing of Nascent Riboswitch Transcripts.
Chauvier, Adrien; Lafontaine, Daniel A
2015-01-01
The study of biologically significant and native structures is vital to characterize RNA-based regulatory mechanisms. Riboswitches are cis-acting RNA molecules that are involved in the biosynthesis and transport of cellular metabolites. Because riboswitches regulate gene expression by modulating their structure, it is vital to employ native probing assays to determine how native riboswitch structures perform highly efficient and specific ligand recognition. By employing RNase H probing, it is possible to determine the accessibility of specific RNA domains in various structural contexts. Herein, we describe how to employ RNase H probing to characterize nascent mRNA riboswitch molecules as a way to obtain information regarding the riboswitch regulation control mechanism.
Borup Lynggaard, Aviaja
2006-01-01
This paper will examine how probes can be useful for game designers in the preliminary phases of a design process. The work is based upon a case study concerning pervasive mobile phone games, where Mobile Game Probes have emerged from the project. The new probes are aimed towards a specific target group, and the goal is to specify the probes so they will cover the most relevant areas for our project. The Mobile Game Probes generated many interesting results and new issues occurred, since the probes came to be dynamic and favorable for the process in new ways.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C.
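The prediction equation is a one-line calculation. The sketch below uses the fitted coefficient 0.59 from the abstract, but the inoculum concentration, yield, and MIC values are purely illustrative, not data from the study.

```python
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predicted maximum biomass from the relation X_max - X_0 = k * Y_X/P * C,
    where k = 0.59 +/- 0.02 is the coefficient fitted in the study,
    Y_X/P is biomass yield per unit lactate, and C is the MIC of lactate."""
    return x0 + k * y_xp * mic_lactate

# Hypothetical inputs: inoculum 0.5 g/L, yield 0.1 g biomass per g lactate,
# MIC of lactate 200 g/L (illustrative numbers only)
x_max = predict_max_biomass(0.5, 0.1, 200.0)
print(x_max)  # ~= 0.5 + 0.59 * 0.1 * 200 = 12.3
```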
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
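For a connected graph, the Kirchhoff index can be computed from the nonzero eigenvalues of the graph Laplacian via the standard identity $Kf(G) = n \sum_i 1/\mu_i$. The sketch below illustrates this on a small cycle (a minimal cactus); it is a generic computation, not code from the paper.

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph via Laplacian eigenvalues:
    Kf(G) = n * sum(1/mu_i) over the n-1 nonzero eigenvalues."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj          # Laplacian L = D - A
    mu = np.sort(np.linalg.eigvalsh(lap))[1:]     # drop the single zero eigenvalue
    return n * np.sum(1.0 / mu)

# Cycle C4 (a one-cycle cactus): pairwise resistance distances sum to 5
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(round(kirchhoff_index(c4), 6))  # prints 5.0
```

For $C_4$ the resistance distance between vertices at cycle distance $d$ is $d(4-d)/4$, so $Kf = 4\cdot\tfrac34 + 2\cdot 1 = 5$, matching the eigenvalue formula.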
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Development and application of DNA molecular probes
Priya Vizzini
2017-02-01
The development of DNA probes for diagnostic purposes started in the 1950s and is still growing. DNA probes are applied in several fields, such as food, medicine, veterinary science, the environment and security, with the aims of prevention, diagnosis and treatment. The use of DNA probes permits microorganism identification, including pathogen detection, and their quantification when used in specific systems. Various techniques have succeeded through the use of specific DNA probes, which allow rapid and specific results: from PCR, qPCR and blotting techniques, first used in well-equipped laboratories, to biosensors such as fiber optic, surface plasmon resonance (SPR), electrochemical, and quartz crystal microbalance (QCM) biosensors that use different transduction systems. This review describes (i) the design and production of primers and probes, and their utilization from the traditional techniques to new emerging techniques like the biosensors used in current applications; (ii) the possibility to use label-free probes and probes labelled with an enzyme, fluorophore, etc.; (iii) the different sensitivities obtained by using specific systems; and (iv) the advantages obtained by using biosensors.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Properties of Ultrasound Probes
Rusina, M.
2015-01-01
This work deals with measuring the properties of ultrasound probes. Ultrasound probes and their parameters significantly affect the quality of the final image. The work describes methods for measuring the spatial resolution, the sensitivity of the probe, and the length of the dead zone. The ultrasound phantom ATS Multi Purpose Phantom Type 539 was used for the measurements.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Experimental evaluation of the resolution improvement provided by a silicon PET probe
Brzeziński, K.; Oliver, J. F.; Gillam, J.; Rafecas, M.; Studen, A.; Grkovski, M.; Kagan, H.; Smith, S.; Llosá, G.; Lacasta, C.; Clinthorne, N. H.
2016-09-01
A high-resolution PET system, which incorporates a silicon detector probe into a conventional PET scanner, has been proposed to obtain increased image quality in a limited region of interest. Detailed simulation studies have previously shown that the additional probe information improves the spatial resolution of the reconstructed image and increases lesion detectability, with no cost to other image quality measures. The current study expands on the previous work by using a laboratory prototype of the silicon PET-probe system to examine the resolution improvement in an experimental setting. Two different versions of the probe prototype were assessed, both consisting of a back-to-back pair of 1-mm thick silicon pad detectors, one arranged in 32 × 16 arrays of 1.4 mm × 1.4 mm pixels and the other in 40 × 26 arrays of 1.0 mm × 1.0 mm pixels. Each detector was read out by a set of VATAGP7 ASICs and a custom-designed data acquisition board which allowed trigger and data interfacing with the PET scanner, itself consisting of BGO block detectors segmented into 8 × 6 arrays of 6 mm × 12 mm × 30 mm crystals. Limited-angle probe data was acquired from a group of Na-22 point-like sources in order to observe the maximum resolution achievable using the probe system. Data from a Derenzo-like resolution phantom was acquired, then scaled to obtain similar statistical quality as that of previous simulation studies. In this case, images were reconstructed using measurements of the PET ring alone and with the inclusion of the probe data. Images of the Na-22 source demonstrated a resolution of 1.5 mm FWHM in the probe data, the PET ring resolution being approximately 6 mm. Profiles taken through the image of the Derenzo-like phantom showed a clear increase in spatial resolution. Improvements in peak-to-valley ratios of 50% and 38%, in the 4.8 mm and 4.0 mm phantom features respectively, were observed, while previously unresolvable 3.2 mm features were brought to light by the
Dehydration of fructose to obtain hydroxymethylfurfural
Garrido Schaeffer, A.; Departamento Académico de Química Orgánica, FQIQ, Universidad Nacional Mayor de San Marcos Lima, Perú; Linares F., T.; Departamento Académico de Operaciones Unitarias, FQIQ Universidad Nacional Mayor de San Marcos Lima, Perú; Otiniano C., M.; Departamento Académico de Operaciones Unitarias, FQIQ Universidad Nacional Mayor de San Marcos Lima, Perú; Armijo C., J.; Departamento Académico de Operaciones Unitarias, FQIQ Universidad Nacional Mayor de San Marcos Lima, Perú; Ugarte T., N.
2014-01-01
The objective of this work is the transformation of fructose to 4-hydroxymethylfurfural by a dehydration process using 4-toluenesulfonic acid as catalyst. The reaction was carried out using solutions of fructose in water and fructose in water-acetone (50% by volume) in a batch reactor at temperatures of 372 K and 348 K respectively. The yield reached a maximum of 16% hydroxymethylfurfural, an intermediate for obtaining furan fuels.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
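The MAXENT selection step described above, choosing the most disordered distribution compatible with given constraints, can be illustrated with a generic one-constraint example. This is not the texture model itself: it is the textbook discrete case (a die with a prescribed mean), where the maximum entropy solution takes Boltzmann form and the Lagrange multiplier is found by root-finding.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_distribution(values, target_mean):
    """Maximum entropy probabilities over discrete states subject to a
    prescribed mean. The solution is p_i ~ exp(-lam * v_i); the multiplier
    lam is the root of the mean-constraint equation."""
    values = np.asarray(values, dtype=float)

    def mean_at(lam):
        w = np.exp(-lam * values)
        return np.dot(values, w) / w.sum()

    lam = brentq(lambda l: mean_at(l) - target_mean, -50.0, 50.0)
    w = np.exp(-lam * values)
    return w / w.sum()

# Die faces 1..6 constrained to mean 4.5 (an unconstrained die has mean 3.5):
# MAXENT tilts probability toward the high faces, no more than the data demand.
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
print(np.round(p, 3))
```

Adding further constraints in the same way (as the texture model does) only ever sharpens the distribution; with no constraints beyond normalization, the uniform, maximally disordered distribution is recovered.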
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
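A pairwise maximum entropy model over binary recession/expansion states is an Ising-type model, and for a handful of economies its state probabilities can be enumerated exactly. The toy sketch below uses made-up couplings (not the fitted G7 parameters) purely to show the model's form: positive pairwise couplings make fully synchronized states the most probable.

```python
import itertools
import numpy as np

def pairwise_maxent_probs(h, J):
    """Exact state probabilities of a pairwise maximum entropy (Ising) model
    p(s) ~ exp(h·s + s·J·s / 2), enumerated over all +/-1 configurations
    (feasible only for small n)."""
    h = np.asarray(h, dtype=float)
    J = np.asarray(J, dtype=float)
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    energies = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
    w = np.exp(energies)
    return states, w / w.sum()

# Three "economies" with uniform positive coupling (hypothetical values):
# synchronization (all -1 = joint recession, all +1 = joint expansion) dominates.
J = 0.5 * (np.ones((3, 3)) - np.eye(3))
states, p = pairwise_maxent_probs(np.zeros(3), J)
print(states[np.argmax(p)])
```

With zero fields the two fully aligned states tie for the highest probability, mirroring how the fitted model captures simultaneous recessions or expansions of major economies.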
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Lessons learned from planetary entry probe missions
Niemann, Hasso; Atreya, Sushil K.; Kasprzak, Wayne
Probing the atmospheres and surfaces of the planets and their moons with fast moving entry probes has been a very useful and essential technique to obtain in situ or quasi in situ scientific data (ground truth) which could not otherwise be obtained from flyby or orbiter-only missions, and where balloon, aircraft or lander missions are too complex and too costly. Planetary entry probe missions have been conducted successfully on Venus, Mars, Jupiter and Titan after having been first demonstrated in the Earth's atmosphere. Future planetary missions should also include more entry probe missions back to Venus and to the outer planets. The success of and science returns from past missions, the need for more and unique data, and a continuously advancing technology generate confidence that future missions will be even more successful with respect to science return and technical performance. There are, however, unique challenges associated with entry probe missions and with building instruments for an entry probe, as compared to orbiters, landers, or rovers. Conditions during atmospheric entry are extreme. There are operating time constraints due to the usually short duration of the probe descent, and the instruments experience rapid environmental changes in temperature and pressure. In addition, there are resource limitations, i.e. mass, power, size and bandwidth. Because of the protective heat shield and the high acceleration the probe experiences during entry, the ratio of payload to total probe mass is usually much smaller than in other missions. Finally, the demands on the instrument design are determined in large part by conditions (pressure, temperature, composition) unique to the particular body under study, and as a result, there is no one-size-fits-all instrument for an atmospheric probe. Many of these requirements are more easily met by miniaturizing the probe instrumentation and consequently reducing the required size of the probe. Improved heat shield
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Space weather Preparing for the Maximum of the Solar Cycle
Shaltout, Mosalam
… Space Environments Group preparing for the maximum of solar cycle 24, when the current plan envisages that the second national Earth research satellite, EgyptSat2, will be launched in 2012. For that reason, forecasting the solar activity in 2012 is very important. The plan depends on long-term prediction using the 10.7 cm Ottawa data (1947-2008) and applying the fast Fourier transform (FFT) to this time series, as well as artificial intelligence to predict the maximum activity by fuzzy modeling. Short-term prediction of coronal mass ejections (CMEs) relies on observations from the STEREO satellites, beside other satellites such as SOHO, Hinode, SDO, Solar Orbiter, Sentinels and Solar Probe, in collaboration with Paris Observatory in Meudon, France.
Gault, Baptiste; Moody, Michael P; Cairney, Julie M; Ringer, Simon P
2012-01-01
This review addresses new developments in the emerging area of "atom probe crystallography", a materials characterization tool with the unique capacity to reveal both composition and crystallographic...
Chemical Address Tags of Fluorescent Bioimaging Probes
Shedden, Kerby; Rosania, Gus R.
2010-01-01
Chemical address tags can be defined as specific structural features shared by a set of bioimaging probes having a predictable influence on cell-associated visual signals obtained from these probes. Here, using a large image dataset acquired with a high content screening instrument, machine vision and cheminformatics analysis have been applied to reveal chemical address tags. With a combinatorial library of fluorescent molecules, fluorescence signal intensity, spectral, and spatial features c...
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize the contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem, where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data are processed with the optimal polarimetric matched filter.
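The eigenvalue formulation above can be sketched numerically. The covariance matrices below are randomly generated stand-ins for the two scattering classes (not data from the paper); the optimal filter is the top eigenvector of the generalized eigenvalue problem A w = r B w.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def random_covariance(n=3):
    # Hypothetical Hermitian positive-definite polarimetric covariance matrix;
    # in practice these would be estimated from the radar data.
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return m @ m.conj().T + n * np.eye(n)

A = random_covariance()  # class whose return we want to enhance
B = random_covariance()  # class whose return we want to suppress

# The contrast ratio r(w) = (w^H A w) / (w^H B w) is maximized by the
# eigenvector of the generalized problem A w = r B w with the largest
# eigenvalue; eigh returns eigenvalues in ascending order.
eigvals, eigvecs = eigh(A, B)
w_opt = eigvecs[:, -1]   # optimal polarimetric matched filter (up to scale)
r_max = eigvals[-1]      # maximum achievable contrast ratio

def contrast(w):
    return np.real(w.conj() @ A @ w) / np.real(w.conj() @ B @ w)
```

Any other filter w yields contrast(w) ≤ r_max, which is the sense in which the matched filter is optimal.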
Langmuir-Probe Measurements in Flowing-Afterglow Plasmas
Johnsen, R.; Shunko, E. V.; Gougousi, T.; Golde, M. F.
1994-01-01
The validity of the orbital-motion theory for cylindrical Langmuir probes immersed in flowing- afterglow plasmas is investigated experimentally. It is found that the probe currents scale linearly with probe area only for electron-collecting but not for ion-collecting probes. In general, no agreement is found between the ion and electron densities derived from the probe currents. Measurements in recombining plasmas support the conclusion that only the electron densities derived from probe measurements can be trusted to be of acceptable accuracy. This paper also includes a brief derivation of the orbital-motion theory, a discussion of perturbations of the plasma by the probe current, and the interpretation of plasma velocities obtained from probe measurements.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows line fluxes to be obtained efficiently. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
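As a minimal illustration of Poisson maximum-likelihood line fitting in the spirit of CORA (the numbers and the Gaussian-plus-flat-background model are invented for this sketch, and a general-purpose optimizer stands in for the paper's fixed-point iteration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical low-count spectrum: one Gaussian emission line on a flat background.
wl = np.linspace(13.0, 14.0, 200)          # wavelength grid (Angstroms)

def model(p, wl):
    amp, center, sigma, bkg = p
    return bkg + amp * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

true_p = (8.0, 13.45, 0.03, 0.5)
counts = rng.poisson(model(true_p, wl))    # Poisson-distributed counts

# Poisson negative log-likelihood, dropping the parameter-independent ln(n_i!) term:
#   -ln L(p) = sum_i [ m_i(p) - n_i ln m_i(p) ]
def nll(p):
    m = model(p, wl)
    if np.any(m <= 0):
        return np.inf
    return np.sum(m - counts * np.log(m))

fit = minimize(nll, x0=(5.0, 13.4, 0.05, 1.0), method="Nelder-Mead")
amp, center, sigma, bkg = fit.x
```

Maximizing the Poisson likelihood rather than minimizing chi-square is what keeps the estimate unbiased in the low-count regime the abstract describes.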
Electron studies of acceleration processes in the corona. [solar probe mission planning
Lin, R. P.
1978-01-01
The solar probe mission can obtain unique and crucially important measurements of electron acceleration, storage, and propagation processes in the corona and can probe the magnetic field structure of the corona below the spacecraft. The various energetic electron phenomena which will be sampled by the Solar Probe are described and some new techniques to probe coronal structures are suggested.
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
DRIŞCU Mariana
2014-05-01
Full Text Available By classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic constructions executed by manual means, which consume much of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies, with the most unpleasant consequences for the footwear producer. Thus, a customer who buys a footwear product on the basis of the characteristics written on it (size, width) may notice after a period that the product has flaws caused by inadequate design. To avoid such situations, the strictest scientific criteria must be followed when designing a footwear product. The decisive step in this direction was made some time ago, as a result of powerful technical development and the large-scale adoption of electronic computing and informatics. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the most economical arrangement of the reference points. For this purpose, the user must probe a few arrangement variants in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After probing several arrangement variants in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
Pioneer Jupiter orbiter probe mission 1980, probe description
Defrees, R. E.
1974-01-01
The adaptation of the Saturn-Uranus Atmospheric Entry Probe (SUAEP) to a Jupiter entry probe is summarized. This report is extracted from a comprehensive study of Jovian missions, atmospheric model definitions and probe subsystem alternatives.
Comparative analyses of plasma probe diagnostics techniques
Godyak, V. A. [Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, Michigan 48109, USA and RF Plasma Consulting, Brookline, Massachusetts 02446 (United States); Alexandrovich, B. M. [Plasma Sensors, Brookline, Massachusetts 02446 (United States)
2015-12-21
The subject of this paper is a comparative analysis of the plasma parameters inferred from the classical Langmuir probe procedure, from different theories of the ion current to the probe, and from the measured electron energy distribution function (EEDF) obtained by double differentiation of the probe characteristic. We concluded that the plasma parameters inferred from the classical Langmuir procedure can be subject to significant inaccuracy due to the non-Maxwellian EEDF, uncertainty in locating the plasma potential, and the arbitrariness of the ion current approximation. The plasma densities derived from the ion part of the probe characteristics diverge by as much as an order of magnitude from the density calculated according to the Langmuir procedure or calculated as the corresponding integral of the measured EEDF. The electron temperature extracted from the ion part is always subject to uncertainty. Such inaccuracy is attributed to modification of the EEDF for fast electrons due to inelastic electron collisions, and to deficiencies in the existing ion current theories, i.e., unrealistic assumptions about Maxwellian EEDFs, underestimation of ion collisions and the ion ambipolar drift, and disregard of the deformation of the one-dimensional structure of the region perturbed by the probe. We concluded that EEDF measurement is the single reliable probe diagnostic for basic research and industrial applications of highly non-equilibrium gas discharge plasmas. Examples of EEDF measurements point up the importance of examining the probe current derivatives in real time and reiterate the significance of the equipment's technical characteristics, such as high energy resolution and wide dynamic range.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
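The Toeplitz/Levinson machinery can be illustrated on a toy autoregressive signal (invented for this sketch); scipy's `solve_toeplitz` plays the role of the Levinson recursion in solving the normal equations for the prediction filter:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)

# Hypothetical signal: an AR(2) process, whose prediction filter the
# Toeplitz/Levinson machinery should recover from the autocorrelations.
a_true = np.array([0.75, -0.5])   # x[n] = 0.75 x[n-1] - 0.5 x[n-2] + e[n]
n = 20000
x = np.zeros(n)
e = rng.standard_normal(n)
for i in range(2, n):
    x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + e[i]

# Autocorrelation estimates r[0..p]
p = 2
r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])

# Yule-Walker normal equations in symmetric Toeplitz form, R a = r[1:p+1];
# the Levinson recursion solves exactly this system in O(p^2) operations.
a_hat = solve_toeplitz(r[:p], r[1 : p + 1])
```

The recovered coefficients a_hat approximate a_true; in the receiver-function setting, the same solve yields the error-predicting filter used for the deconvolution.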
Materials applications of an advanced 3-dimensional atom probe
Cerezo, A. [Oxford Univ. (United Kingdom). Dept. of Materials; Gibuoin, D. [Oxford Univ. (United Kingdom). Dept. of Materials; Kim, S. [Oxford Univ. (United Kingdom). Dept. of Materials; Sijbrandij, S.J. [Oxford Univ. (United Kingdom). Dept. of Materials; Venker, F.M. [Oxford Univ. (United Kingdom). Dept. of Materials]|[Rijksuniversiteit Groningen (Netherlands). Dept. of Applied Physics; Warren, P.J. [Oxford Univ. (United Kingdom). Dept. of Materials; Wilde, J. [Oxford Univ. (United Kingdom). Dept. of Materials; Smith, G.D.W. [Oxford Univ. (United Kingdom). Dept. of Materials
1996-09-01
An advanced 3-dimensional atom probe system has been constructed, based on an optical position-sensitive atom probe (OPoSAP) detector with energy compensation using a reflectron lens. The multi-hit detection capability of the OPoSAP leads to significant improvements in the efficiency of the instrument over the earlier serial position-sensing system. Further gains in efficiency are obtained by using a biased grid in front of the detector to collect secondary electrons generated when ions strike the interchannel area. The improvement in detection efficiency gives enhanced performance in the studies of ordered materials and the determination of site occupation. Energy compensation leads to a much improved mass resolution (m/{Delta}m=500 full width at half maximum), making it possible to map out the 3-dimensional spatial distributions of all the elements in complex engineering alloys, even when elements lie close together in the mass spectrum. For example, in the analysis of a maraging steel, this allows separation between the {sup 61}Ni{sup 2+} and {sup 92}Mo{sup 3+} peaks, which are only 1/6 of a mass unit apart. (orig.).
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
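The procedure amounts to locating the zero of dP/dV. A minimal numeric sketch, using an invented diode-style I-V curve rather than measured panel data:

```python
import math

# Illustrative (hypothetical) I-V characteristic of a panel:
#   I(V) = Isc - I0 * (exp(V / Vt) - 1)
Isc, I0, Vt = 5.0, 1e-6, 1.2   # short-circuit current, saturation current, voltage scale

def current(v):
    return Isc - I0 * (math.exp(v / Vt) - 1.0)

def power(v):
    return v * current(v)

def dpdv(v, h=1e-6):
    # numerical derivative of power; the maximum-power point is its zero crossing
    return (power(v + h) - power(v - h)) / (2.0 * h)

# dP/dV is positive at V = 0 and negative near open circuit, so bisect on its sign.
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dpdv(mid) > 0.0:
        lo = mid
    else:
        hi = mid

v_mp = 0.5 * (lo + hi)          # voltage of maximum power
i_mp = current(v_mp)            # current of maximum power
p_max = power(v_mp)             # maximum power
```

Repeating this for the irradiance at each time of day yields the curves of v_mp, i_mp, and p_max described in the abstract.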
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime have proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
ON A GENERALIZATION OF THE MAXIMUM ENTROPY THEOREM OF BURG
JOSÉ MARCANO
2017-01-01
In this article we introduce some matrix manipulations that allow us to obtain a version of the original Christoffel-Darboux formula, which is of interest in many applications of linear algebra. Using these matrix developments and Jensen's inequality, we obtain the main result of this proposal, which is a generalization of Burg's maximum entropy theorem to multivariate processes.
On the maximum backscattering cross section of passive linear arrays
Solymar, L.; Appel-Hansen, Jørgen
1974-01-01
The maximum backscattering cross section of an equispaced linear array connected to a reactive network and consisting of isotropic radiators is calculated forn = 2, 3, and 4 elements as a function of the incident angle and of the distance between the elements. On the basis of the results obtained...
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it depends on the entropy and on the number of samples used in the estimate.
On the query complexity of finding a local maximum point
Rastsvelaev, A.L.; Beklemishev, L.D.
2008-01-01
We calculate the minimal number of queries sufficient to find a local maximum point of a function on a discrete interval for a model with M parallel queries, M≥1. Matching upper and lower bounds are obtained. The bounds are formulated in terms of certain Fibonacci-type sequences of numbers.
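For the sequential case (M = 1), an O(log n) strategy is easy to sketch; this is a generic peak-finding scheme, not the paper's optimal Fibonacci-type algorithm:

```python
def local_max(f, lo, hi, counter):
    """Find i in [lo, hi] with f(i) >= f(i-1) and f(i) >= f(i+1) (within bounds).

    f is queried through q() so the number of queries can be counted."""
    def q(i):
        counter[0] += 1
        return f(i)

    while lo < hi:
        mid = (lo + hi) // 2
        if q(mid) < q(mid + 1):
            lo = mid + 1   # an uphill step to the right: a local maximum lies there
        else:
            hi = mid       # f(mid) >= f(mid+1): a local maximum lies in [lo, mid]
    return lo
```

Each iteration halves the interval at the cost of two queries, so roughly 2·log2(n) queries suffice; the paper's bounds sharpen the constants and cover M parallel queries.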
Restoration of GERIS Data Using the Maximum Noise Fractions Transform
Nielsen, Allan Aasbjerg; Larsen, Rasmus
1994-01-01
The Maximum Noise Fractions (MNF) transformation is used as a restoration tool in a 512×512 subscene of a 63-channel spectral dataset recorded over the Pyrite Belt in Southern Spain with the Geophysical Environmental Research Imaging Spectrometer (GERIS). The data obtained from such a scanning...
Approaches for drug delivery with intracortical probes.
Spieth, Sven; Schumacher, Axel; Trenkle, Fabian; Brett, Olivia; Seidl, Karsten; Herwik, Stanislav; Kisban, Sebastian; Ruther, Patrick; Paul, Oliver; Aarts, Arno A A; Neves, Hercules P; Rich, P Dylan; Theobald, David E; Holtzman, Tahl; Dalley, Jeffrey W; Verhoef, Bram-Ernst; Janssen, Peter; Zengerle, Roland
2014-08-01
Intracortical microprobes allow the precise monitoring of electrical and chemical signaling and are widely used in neuroscience. Microelectromechanical system (MEMS) technologies have greatly enhanced the integration of multifunctional probes by facilitating the combination of multiple recording electrodes and drug delivery channels in a single probe. Depending on the neuroscientific application, various assembly strategies are required in addition to the microprobe fabrication itself. This paper summarizes recent advances in the fabrication and assembly of micromachined silicon probes for drug delivery achieved within the EU-funded research project NeuroProbes. The described fabrication process combines a two-wafer silicon bonding process with deep reactive ion etching, wafer grinding, and thin film patterning, and offers maximum design flexibility. By applying this process, three general comb-like microprobe designs featuring up to four 8-mm-long shafts, cross sections from 150×200 to 250×250 µm², and different electrode and fluidic channel configurations are realized. Furthermore, we discuss the development and application of different probe assemblies for acute, semichronic, and chronic applications, including comb and array assemblies, floating microprobe arrays, as well as the complete drug delivery system NeuroMedicator for small animal research.
Novel laser contact probe for periodontal treatment
Watanabe, Hisashi; Kataoka, Kenzo; Ishikawa, Isao
2001-04-01
Application of the erbium:YAG laser to periodontal treatment has been attempted and favorable results have been reported for calculus removal, vaporization of granulation tissue, periodontal pocket sterilization and so on. However, it has been difficult to reach and treat some conditions involving complex root morphology and furcated roots with conventional probes. The new broom probe was designed and tested to overcome these obstacles. The probe was made of 20 super-fine optical fibers bound into a broom shape. Experiments were carried out to evaluate the destructive power of a single fiber and to examine the morphology of tissue destruction and the accessibility to a bifurcated root of a human tooth using the broom probe. A prototype Er:YAG laser was used. A flat specimen plate was made by cutting the root of a cow tooth, then attached to an electrically operated table and irradiated under various conditions. The specimens were examined with both an optical and a scanning electron microscope. The irradiated surfaces were also examined with a roughness meter. Irradiation with a single fiber at an energy level of 1 to 1.5 mJ at its tip results in a destruction depth of 3 to 24 micrometers. The optimum conditions for the fibers of this probe were 1.0 mJ at 10 pps and a scanning speed of 100 mm/min. No part of the tooth surface remained un-irradiated after using the broom probe to cover the surface 5 times parallel to the tooth axis and then five times at a 30 degree angle to the previous irradiation at a power of 20 mJ at 10 pps. Also, curved and irregular surfaces were destroyed to a maximum depth of 19 micrometers. In conclusion, these results suggest that the broom probe would be applicable for periodontal laser treatments even if the tooth surface has a complex and irregular shape.
An Extension of Chebyshev’s Maximum Principle to Several Variables
Meng Zhao-liang; Luo Zhong-xuan
2013-01-01
In this article, we generalize Chebyshev's maximum principle to several variables. Some analogous maximum formulae for the special integration functional are given. A sufficient condition for the existence of Chebyshev's maximum principle is also obtained.
An Ultrasonographic Periodontal Probe
Bertoncini, C. A.; Hinders, M. K.
2010-02-01
Periodontal disease, commonly known as gum disease, affects millions of people. The current method of detecting periodontal pocket depth is painful, invasive, and inaccurate. As an alternative to manual probing, an ultrasonographic periodontal probe is being developed to use ultrasound echo waveforms to measure periodontal pocket depth, which is the main measure of periodontal disease. Wavelet transforms and pattern classification techniques are implemented in artificial intelligence routines that can automatically detect pocket depth. The main pattern classification technique used here, called a binary classification algorithm, compares test objects with only two possible pocket depth measurements at a time and relies on dimensionality reduction for the final determination. This method correctly identifies up to 90% of the ultrasonographic probe measurements within the manual probe's tolerance.
2006-01-01
"The second international conference on hard and electromagnetic probes of high-energy nuclear collisions was held June 9 to 16, 2006 at the Asilomar Conference grounds in Pacific Grove, California" (photo and 1/2 page)
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
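The static building block, a maximum-flow/minimum-cut computation, can be sketched as follows (Edmonds-Karp on a small invented network; the dynamic and inverse aspects of the paper are not reproduced here):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap maps u -> {v: capacity}."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(res):                    # ensure reverse residual edges exist
        for v in list(res[u]):
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                 # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                    # no augmenting path: flow is maximum
        path, v = [], t                    # reconstruct the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)
        for u, v in path:                  # push the bottleneck along the path
            res[u][v] -= b
            res[v][u] += b
        flow += b

# Hypothetical network: by max-flow/min-cut duality, the maximum flow equals
# the capacity of the minimum cut, here the cut {s} vs the rest, 3 + 2 = 5.
cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
flow_value = max_flow(cap, "s", "t")       # 5 for this network
```

The inverse problem then asks for the smallest capacity perturbation making a prescribed flow maximum, which is where the constrained minimum-cut reformulation enters.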
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, observational spatial and temporal distributions of the night-time ozone mixing ratio in the mesosphere to be obtained.
The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end of winter but also in its middle. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
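For the unconstrained continuous case, the maximizer with a fixed pth absolute moment is the generalized Gaussian, and the straight-line relationship mentioned above has slope one. A small sketch of the closed form (the formula below is the standard generalized-Gaussian result, not copied from the report):

```python
import math

def max_entropy(p, norm_p):
    """Maximum differential entropy over densities with (E|X|^p)^(1/p) = norm_p.

    Attained by the generalized Gaussian
        f(x) = p / (2 a Gamma(1/p)) * exp(-(|x|/a)^p),
    for which E|X|^p = a^p / p, giving h = ln(2 a Gamma(1/p) / p) + 1/p."""
    a = norm_p * p ** (1.0 / p)
    return math.log(2.0 * a * math.gamma(1.0 / p) / p) + 1.0 / p

# p = 2 recovers the Gaussian result h = (1/2) ln(2*pi*e*sigma^2), and for any p
# the entropy is linear in ln(norm_p) with slope exactly one.
h_gauss = max_entropy(2, 1.0)
```

Multiplying the norm by any factor c shifts the maximum entropy by exactly ln(c), which is the straight-line relationship in the abstract.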
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
Maximum-entropy distributions of correlated variables with prespecified marginals.
Larralde, Hernán
2012-12-01
The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the "most unbiased" choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.
Generalized Relativistic Wave Equations with Intrinsic Maximum Momentum
Ching, Chee Leong
2013-01-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wavefunctions of the one-dimensional Klein-Gordon and Dirac equations with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of the scalar potential is stronger than that of the vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of the position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Generalized relativistic wave equations with intrinsic maximum momentum
Ching, Chee Leong; Ng, Wei Khim
2014-05-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wave functions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential is stronger than vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Maximum efficiency of low-dissipation heat engines at arbitrary power
Holubec, Viktor; Ryabov, Artem
2016-07-01
We investigate maximum efficiency at a given power for low-dissipation heat engines. Close to maximum power, the maximum gain in efficiency scales as a square root of relative loss in power and this scaling is universal for a broad class of systems. For low-dissipation engines, we calculate the maximum gain in efficiency for an arbitrary fixed power. We show that engines working close to maximum power can operate at considerably larger efficiency compared to the efficiency at maximum power. Furthermore, we introduce universal bounds on maximum efficiency at a given power for low-dissipation heat engines. These bounds represent a direct generalization of the bounds on efficiency at maximum power obtained by Esposito et al (2010 Phys. Rev. Lett. 105 150603). We derive the bounds analytically in the regime close to maximum power and for small power values. For the intermediate regime we present strong numerical evidence for the validity of the bounds.
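The Esposito et al bounds referenced above are simple enough to state in code (a sketch of those bounds only; the paper's generalization to arbitrary power is not reproduced here):

```python
def carnot_efficiency(t_cold, t_hot):
    return 1.0 - t_cold / t_hot

def efficiency_at_max_power_bounds(t_cold, t_hot):
    """Bounds on efficiency at maximum power for low-dissipation engines
    (Esposito et al 2010): eta_C / 2 <= eta* <= eta_C / (2 - eta_C)."""
    ec = carnot_efficiency(t_cold, t_hot)
    return ec / 2.0, ec / (2.0 - ec)

# Example reservoirs: Tc = 300 K, Th = 600 K, so eta_C = 0.5.
lower, upper = efficiency_at_max_power_bounds(300.0, 600.0)
```

For this example the window is [0.25, 1/3]; the Curzon-Ahlborn value 1 - sqrt(Tc/Th) ≈ 0.293 falls inside it, as expected for symmetric dissipation.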
FABRICATION AND APPLICATION OF NEARFIELD OPTICAL FIBRE PROBE
SUN JIA-LIN; XU JIAN-HUA; TIAN GUANG-YAN; GUO JI-HUA; ZHAO JUN; XIE AI-FANG; ZHANG ZE-BO
2001-01-01
In this paper, the fabrication of large-cone-angle near-field optical fibre probes, both straight and bent, by the two-step chemical etching method is introduced, and the controlling parameters of the Cr-Al film coated at the probe tip are presented. Scanning electron microscopy images show that the tip diameter of the uncoated large-cone-angle fibre probe is less than 50 nm, the cone angle is over 90°, and the diameter of the light aperture at the coated probe tip is less than 100 nm. Measurements of the optical transmission efficiency for various probe tips show that the uncoated straight optical fibre probe, the film-coated straight probe and the film-coated bent probe transmit 3×10⁻¹, 2×10⁻³ and 1×10⁻⁴ times as much light as the flat fibre probe, respectively. In addition, force images and near-field optical images of a standard sample are acquired using a large-cone-angle, film-coated bent probe.
Langmuir probe in collisionless and collisional plasma including dusty plasma
Bose, Sayak; Kaur, Manjit; Chattopadhyay, P. K.; Ghosh, J.; Saxena, Y. C.; Pal, R.
2017-04-01
Measurements of local plasma parameters in dusty plasma are crucial for understanding the physics of such systems. The Langmuir probe, a small electrode immersed in the plasma, provides such measurements. However, the design of a Langmuir probe system for a dusty plasma environment demands special consideration. First, the probe has to be miniaturized enough that its perturbation of the ambient dust structure is minimal. At the same time, the probe dimensions must be such that a well-defined theory exists for interpreting its characteristics. The associated instrumentation must also support measurement of the current collected by the probe with a high signal-to-noise ratio. The most important consideration, of course, comes from the fact that probes are prone to dust contamination, as dust particles tend to stick to the probe surface and alter the current-collecting area in unpredictable ways. This article describes the design and operation of a Langmuir probe system that resolves these challenging issues in dusty plasma. First, the different theories used to interpret probe characteristics in collisionless as well as collisional regimes are discussed, with special emphasis on application. The critical issues associated with the current-voltage characteristics of Langmuir probes obtained in different operating regimes are then discussed, and an algorithm for processing these characteristics efficiently in the presence of ion-neutral collisions in the probe sheath is presented.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Probing ionospheric structure using LOFAR data
Mevius, M.; Van Der Tol, S.; Pandey, V. N.
2015-01-01
To obtain high quality images with the LOFAR low frequency radio telescope, accurate ionospheric characterization and calibration are essential. The large field of view of LOFAR (several tens of square degrees) requires good knowledge of the spatial variation of the ionosphere. In this work to probe t
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
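The cross/diagonal entropy separation described above can be written schematically (the notation is illustrative, not necessarily the authors'):

```latex
D(p, q) \;=\; C(p, q) - H(p), \qquad H(p) \equiv C(p, p),
```

where \(C\) is the cross entropy and \(H\) the diagonal entropy induced by the generator function, so that \(D(p,p)=0\) and minimizing \(D(p,\cdot)\) over a model generalizes maximum likelihood.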
Minimum disturbance rewards with maximum possible classical correlations
Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)
2017-07-12
Weak measurements done on a subsystem of a bipartite system having both classical and non-classical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other system and the disturbance to the overall state. We investigate the behaviour of the cost function for families of two-qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord and fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.
Field emission sensing for non-contact probe recording
Febre, le Alexander Jonathan
2008-01-01
In probe recording an array of thousands of nanometer-sharp probes is used to write and read on a storage medium. By using micro-electromechanical system technology (MEMS) for fabrication, small form factor memories with high data density and low power consumption can be obtained. Such a system is e
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06 cm⁻¹. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06 cm⁻¹. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45 cm⁻¹ region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
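The Burg recursion used above can be sketched compactly. This is a generic minimal implementation, assuming a synthetic test tone rather than the interferometer data of the paper; real spectral work would use a vetted library routine.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients a (with a[0] = 1) by Burg's method:
    at each stage the reflection coefficient minimizes the summed
    forward and backward prediction error power."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    f, b = x[1:].copy(), x[:-1].copy()            # forward / backward errors
    for _ in range(order):
        k = -2.0 * (f @ b) / ((f @ f) + (b @ b))  # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                       # Levinson-Durbin update
        f, b = (f + k * b)[1:], (b + k * f)[:-1]  # update errors, shift window
    return a

# Example: a pure tone at normalized frequency 0.1 is recovered from the
# angle of the AR(2) pole pair.
n = np.arange(400)
a = burg_ar(np.cos(2 * np.pi * 0.1 * n), order=2)
f_hat = abs(np.angle(np.roots(a)[0])) / (2 * np.pi)   # close to 0.1
```

The MEM power spectrum is then proportional to the prediction error power divided by |A(e^{jω})|², which is what gives the method its resolution advantage over the FFT for short records.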
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. First, the characteristics of the wavelet transform and wavelet packet analysis are described. Second, the blasting vibration signals were analyzed by wavelet packet in MATLAB, and the energy distribution curves in different frequency bands were obtained. Finally, the law by which the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with increasing decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not depend on the maximum decking charge.
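The band-energy computation underlying the analysis above can be illustrated with a hand-rolled Haar wavelet packet decomposition. This is a minimal sketch (Haar filters, orthogonal so band energies sum to the signal energy), not the MATLAB wavelet packet toolbox the study used, and the signal is synthetic.

```python
import numpy as np

def haar_wp_energies(x, levels):
    """Energy in each frequency band of a Haar wavelet packet tree.

    Returns 2**levels band energies. Because the Haar basis is
    orthogonal, their sum equals the signal energy (Parseval).
    Signal length must be divisible by 2**levels.
    """
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for s in bands:
            e, o = s[0::2], s[1::2]              # even / odd samples
            nxt.append((e + o) / np.sqrt(2.0))   # low-pass branch
            nxt.append((e - o) / np.sqrt(2.0))   # high-pass branch
        bands = nxt
    return [float(np.sum(s * s)) for s in bands]
```

The ratio of the high-band energies to the total is then the quantity tracked against the decking charge in the study.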
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies, the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Automated design of genomic Southern blot probes
Komiyama Noboru H
2010-01-01
Background: Southern blotting is a DNA analysis technique that has found widespread application in molecular biology. It has been used for gene discovery and mapping and has diagnostic and forensic applications, including mutation detection in patient samples and DNA fingerprinting in criminal investigations. Southern blotting has been employed as the definitive method for detecting transgene integration and successful homologous recombination in gene targeting experiments. The technique employs a labeled DNA probe to detect a specific DNA sequence in a complex DNA sample that has been separated by restriction digest and gel electrophoresis. Critically, for the technique to succeed the probe must be unique to the target locus so as not to cross-hybridize to other endogenous DNA within the sample. Investigators routinely employ a manual approach to probe design. A genome browser is used to extract DNA sequence from the locus of interest, which is searched against the target genome using a BLAST-like tool. Ideally a single perfect match is obtained to the target, with little cross-reactivity caused by homologous DNA sequence present in the genome and/or repetitive and low-complexity elements in the candidate probe. This is a labor-intensive process often requiring several attempts to find a suitable probe for laboratory testing. Results: We have written an informatic pipeline to automatically design genomic Southern blot probes that specifically attempts to optimize the resultant probe, employing a brute-force strategy of generating many candidate probes of acceptable length in the user-specified design window, searching all against the target genome, then scoring and ranking the candidates by uniqueness and repetitive DNA element content. Using these in silico measures we can automatically design probes that we predict to perform as well as, or better than, our previous manual designs, while considerably reducing design time. We went on to
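The generate-score-rank strategy described above can be sketched in miniature. This toy version scores candidates only by a crude low-complexity proxy; the pipeline's genome-wide uniqueness scoring requires an aligner (a BLAST-like tool) and is out of scope here, and all names below are illustrative, not the pipeline's API.

```python
from collections import Counter

def repeat_fraction(seq, k=2):
    """Fraction of the sequence's k-mers accounted for by the single most
    common k-mer: a crude low-complexity proxy (real pipelines also screen
    repeats, e.g. with RepeatMasker, and check genome-wide uniqueness)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    if not kmers:
        return 0.0
    return Counter(kmers).most_common(1)[0][1] / len(kmers)

def rank_probe_candidates(window, probe_len):
    """Enumerate every candidate probe of probe_len inside the design
    window and rank them, least repetitive first."""
    cands = [window[i:i + probe_len]
             for i in range(len(window) - probe_len + 1)]
    return sorted(cands, key=repeat_fraction)
```

A design window whose left half is an AT microsatellite will rank candidates drawn from its diverse right half ahead of the repeat-heavy ones.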
The Galileo Probe: How it Has Changed Our Understanding of Jupiter
Young, Richard E.
2003-01-01
The Galileo Mission to Jupiter, which arrived in December of 1995, provided the first study by an orbiter, and the first in-situ sampling via an entry probe, of an outer planet atmosphere. The rationale for an entry probe is that, even from an orbiter, remote sensing of the jovian atmosphere could not adequately retrieve the information desired. This paper provides a current summary of the most significant aspects of the data returned from the Galileo entry probe. As a result of the probe measurements, there has been a reassessment of our understanding of outer planet formation and evolution of the solar system. The primary scientific objective of the Galileo probe was to determine the composition of the jovian atmosphere, which from remote sensing remained either very uncertain, or completely unknown, with respect to several key elements. The probe found that the global He mass fraction is significantly above the value reported from the Voyager Jupiter flybys but is slightly below the protosolar value, implying that there has been some settling of He to the deep jovian interior. The probe He measurements have also led to a reevaluation of the Voyager He mass fraction for Saturn, which is now determined to be much closer to that of Jupiter. The elements C, N, S, Ar, Kr, Xe were all found to have global abundances approximately 3 times their respective solar abundances. This result has raised a number of fundamental issues with regard to properties of planetesimals and the solar nebula at the time of giant planet formation. Ne, on the other hand, was found to be highly depleted, probably as the result of it being carried along with helium as helium settles towards the deep interior. The global abundance of O was not obtained by the probe because of the influence of local processes at the probe entry site (PES), processes which depleted condensible species, in this case H2O, well below condensation levels. Other condensible species, namely NH3 and H2S, were
An airborne icing characterization probe: nephelometer prototype
Roques, S.
2007-10-01
The aeronautical industry uses airborne probes to characterize icing conditions for flight certification purposes by counting and sizing cloud droplets. Existing probes have been developed for meteorologists in order to study cloud microphysics. They are used on specific aircraft, instrumented for this type of study, but are not adapted to an industrial flight test environment. The development by Airbus of a new probe giving a real-time response for particle sizes between 10 and 500 µm, adapted to operational requirements, is in progress. An optical principle based on coherent shadowgraphy with a low-coherence point source is used for the application. The size of the droplets is measured from their shadows on a CCD. A pulsed laser coupled to a fast camera freezes the movement. Usually, image processing rejects out-of-focus objects. Here, particles far from the focal plane can be sized because of the large depth of field due to the point source. The technique used increases the depth of field, and the sampled volume is enough to build a histogram even for low droplet concentrations. Image processing is done in real time and results are provided to the flight test engineer. All data and images are recorded in order to allow on-ground complementary analysis if necessary. A non-telescopic prototype has been tested in a wind tunnel and in flight. The definitive probe, being retractable, is designed to be easily installed through a dummy window. Retracted, it will allow the aircraft to fly at VMO (maximum operating limit speed).
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R² = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R² = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Atmospheric trident production for probing new physics
Shao-Feng Ge
2017-09-01
We propose to use atmospheric neutrinos as a powerful probe of new physics beyond the Standard Model via neutrino trident production. The final state with double muon tracks simultaneously produced from the same vertex is a distinctive signal at large Cherenkov detectors. We calculate the expected event numbers of trident production in the Standard Model. To illustrate the potential of this process to probe new physics we obtain the sensitivity on new vector/scalar bosons with coupling to muon and tau neutrinos.
Garcia Bueno, A.; Jimenez Garcia, J. J.; Hernandez Estrada, R.; Mendez Canete, M.
2012-07-01
The appearance of the degradation phenomenon called tube denting in some steam generator (GV) models, and the need on the one hand to characterize this type of degradation and its consequences, and on the other to optimize inspection times, has shown the desirability of developing new probes that integrate different inspection techniques, so as to obtain the maximum information on the degradation phenomena in the minimum inspection time.
Calibration of the Shower Maximum Detector in the Barrel EMC at STAR
Farnsworth, Kara; Mioduszewski, Saskia; Codrington, Martin
2009-05-01
Because of the photon's lack of interaction with the quark-gluon plasma (QGP), the γ-jet process (in which a direct photon is produced back to back with a jet) is a good probe of the medium. However, background photons, like those from π^0 decay, must be factored into the analysis. To distinguish between these direct and decay photons, a well-calibrated detector is needed. The Barrel Shower Maximum Detector (BSMD) in the Barrel Electromagnetic Calorimeter (BEMC) at STAR has high resolution, but has not been calibrated well enough to discriminate between these two types of events. A pedestal subtraction was performed on the raw ADC vs. strip ID data from a Au+Au 200 GeV run. Each strip in both η (pseudorapidity) and φ (azimuth) was then assigned a status identification number corresponding to a hot, cold, dead, or good channel, for quality assurance. By finding the gains for each strip and normalizing them, calibration constants were obtained which can be applied to future runs. This accomplished a relative calibration of the BSMD.
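The pedestal-subtraction and gain-normalization steps can be sketched on synthetic data. This is a generic illustration of relative calibration, not the STAR software; the per-strip "gain proxy" (mean pedestal-subtracted response) is an assumption for the example.

```python
import numpy as np

def relative_calibration(adc, pedestal_events):
    """Toy relative calibration of detector strips.

    adc             : (n_events, n_strips) raw ADC values with signal
    pedestal_events : (n_ped, n_strips) ADC values with no signal
    Returns per-strip calibration constants that equalize the mean
    pedestal-subtracted response across strips.
    """
    pedestals = pedestal_events.mean(axis=0)   # per-strip baseline
    signal = adc - pedestals                   # pedestal subtraction
    gains = signal.mean(axis=0)                # per-strip gain proxy
    return gains.mean() / gains                # normalize to the ensemble mean
```

Multiplying the pedestal-subtracted data by these constants equalizes the average response of every strip, which is the sense in which the calibration is "relative".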
Probing Magnetic Fields Near the Base of the Convection Zone with Meridional Flows
CHOU, DEAN-YI
2017-08-01
We study the solar-cycle variations of the meridional flows near the base of the convection zone to probe the solar-cycle variations of magnetic fields. Using SOHO/MDI data, we measure the acoustic travel-time difference on the meridional plane for different latitudes and different travel distances over 15 years, including two minima and one maximum. The measured travel-time differences averaged over the two minima are similar, but significantly different from that at the maximum. The measured travel-time difference is inverted to obtain the meridional flow at the minimum and maximum. The flow at the minimum has a multi-cell pattern in the convection zone: poleward flow in the upper layer (above 0.86R), equator-ward flow in the mid-layer (0.74-0.86R), and poleward flow again in the lower layer (below 0.74R). This pattern is changed to a more complicated one at the maximum. The active latitudes appear to play a key role in the changes.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for nonlinear state estimation. However, the UKF usually performs well only in Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
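The robustness mechanism of the MCC can be shown in isolation with a simple fixed-point location estimate: samples are weighted by a Gaussian kernel, so heavy-tailed outliers get exponentially small weight. This is an illustration of the criterion only, not the authors' filter, and the kernel bandwidth below is an assumed value.

```python
import numpy as np

def mcc_location(x, sigma=1.0, iters=50):
    """Fixed-point location estimate under the maximum correntropy
    criterion: maximize the average Gaussian kernel between samples
    and the estimate, which yields a kernel-weighted mean."""
    mu = np.median(x)                                  # robust starting point
    for _ in range(iters):
        w = np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))  # kernel weights
        mu = np.sum(w * x) / np.sum(w)                 # weighted mean update
    return mu
```

With five inliers near zero and one impulsive outlier at 100, the ordinary mean is pulled above 16 while the MCC estimate stays near zero; this is the same effect that keeps the MCUKF stable under impulsive measurement noise.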
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrices of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
An improved probe noise approach for acoustic feedback cancellation
Guo, Meng; Jensen, Søren Holdt; Jensen, Jesper
2012-01-01
The perhaps most challenging problem in acoustic feedback cancellation using adaptive filters is the bias problem. It is well known that a probe noise approach can effectively prevent this problem. However, when the probe noise must be inaudible and the steady-state error of the adaptive filter must be unchanged, this approach causes a significantly decreased convergence rate of the adaptive filter, and might thereby be less useful in practical applications. In this work, we propose a new probe noise approach which significantly increases the convergence rate while maintaining the steady-state error of the adaptive algorithm in a multiple-microphone and single-loudspeaker audio system. This is obtained through a specifically designed probe noise signal and a corresponding probe noise enhancement strategy. We show the effects of the proposed probe noise approach by deriving analytical results.
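The core idea of probe-noise-based identification can be sketched with a generic normalized LMS filter learning an unknown feedback path from an injected noise signal. This is a minimal textbook sketch, not the authors' probe noise design or enhancement strategy; the 3-tap path below is an assumed toy example.

```python
import numpy as np

def nlms_identify(probe, desired, taps, mu=0.5, eps=1e-8):
    """Identify an FIR (feedback) path excited by an injected
    probe-noise signal using the normalized LMS algorithm."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(probe)):
        u = probe[n - taps + 1:n + 1][::-1]   # most recent samples first
        e = desired[n] - w @ u                # a-priori estimation error
        w += mu * e * u / (u @ u + eps)       # normalized gradient update
    return w

rng = np.random.default_rng(0)
path = np.array([0.5, -0.3, 0.1])             # unknown feedback path (toy)
probe = rng.standard_normal(5000)             # white probe noise
desired = np.convolve(probe, path)[:len(probe)]
w_hat = nlms_identify(probe, desired, taps=3)
```

Because the probe noise is uncorrelated with the loudspeaker signal, adapting on it avoids the bias problem; the price discussed in the abstract is a slow convergence rate when the probe must be kept inaudible.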
Fatigue Life Analysis of Cantilever Probe on Wafer Test
Hsiao Te-Ching
2016-01-01
This research utilizes the finite element analysis software ANSYS to simulate cantilever probes of different materials (tungsten, SUS304 stainless steel, SUS316L stainless steel and SKD11 tool steel) during wafer tests. At room temperature (25°C), the stress and fatigue life (in probing cycles) of the cantilever probe were evaluated at overdrives (OD) of 20 µm, 40 µm, 50 µm, 60 µm and 80 µm. First, the magnitude of the tip shift of the probe under a wafer test at an overdrive of 50 µm is obtained, and the fatigue life of the probe is calculated. Then, a probe model with the same characteristics as the experiment is created and the probe fatigue life is analyzed with ANSYS. After the reliability of the model is ascertained, wafer tests of the different probe materials are simulated under different overdrive conditions to calculate stress and fatigue life. The results indicate that the greatest stresses during the wafer tests of the tungsten, SUS304 stainless steel, SUS316L stainless steel and SKD11 tool steel cantilever probes are all smaller than the respective yield strengths, and the fatigue life can reach over one hundred thousand cycles. Catalogued by cantilever probe fatigue life over one hundred thousand cycles, the life span, in order, is tungsten < SUS316L stainless steel < SUS304 stainless steel < SKD11 tool steel.
Citron, Z; The ATLAS collaboration
2014-01-01
The ATLAS collaboration has measured several hard-probe observables in Pb+Pb and p+Pb collisions at the LHC. These measurements include jets, which show modification in the hot, dense medium of heavy-ion collisions, as well as colour-neutral electroweak bosons. Together, they elucidate the nature of heavy-ion collisions.
Endocavity Ultrasound Probe Manipulators.
Stoianovici, Dan; Kim, Chunwoo; Schäfer, Felix; Huang, Chien-Ming; Zuo, Yihe; Petrisor, Doru; Han, Misop
2013-06-01
We developed two similar structure manipulators for medical endocavity ultrasound probes with 3 and 4 degrees of freedom (DoF). These robots allow scanning with ultrasound for 3-D imaging and enable robot-assisted image-guided procedures. Both robots use remote center of motion kinematics, characteristic of medical robots. The 4-DoF robot provides unrestricted manipulation of the endocavity probe. With the 3-DoF robot the insertion motion of the probe must be adjusted manually, but the device is simpler and may also be used to manipulate external-body probes. The robots enabled a novel surgical approach of using intraoperative image-based navigation during robot-assisted laparoscopic prostatectomy (RALP), performed with concurrent use of two robotic systems (Tandem, T-RALP). Thus far, a clinical trial for evaluation of safety and feasibility has been performed successfully on 46 patients. This paper describes the architecture and design of the robots, the two prototypes, control features related to safety, preclinical experiments, and the T-RALP procedure.
Östlin, Anna; Pagh, Rasmus
2002-01-01
We consider dictionaries that perform lookups by probing a single word of memory, knowing only the size of the data structure. We describe a randomized dictionary where a lookup returns the correct answer with probability 1 - e, and otherwise returns don't know. The lookup procedure uses an expan...
Wilkinson, John
2013-01-01
Humans have always had the vision to one day live on other planets. This vision existed even before the first person was put into orbit. Since the early space missions of putting humans into orbit around Earth, many advances have been made in space technology. We have now sent many space probes deep into the Solar system to explore the planets and…
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally incorporates the requirement of maximum entropy, the characteristics of the system, and the connection conditions. This makes it applicable to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
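As an illustration of the principle the abstract describes, a minimal sketch (not from the paper): the maximum-entropy distribution over a finite set of states subject to a fixed mean has the Gibbs form p_i ∝ exp(−λx_i), and the multiplier λ can be found by bisection on the mean constraint. The state set and target mean below are illustrative.

```python
import math

def maxent_distribution(values, target_mean, tol=1e-12):
    """Maximum-entropy distribution over `values` with a prescribed mean.

    The solution has the Gibbs form p_i ∝ exp(-lam * x_i); `lam` is found
    by bisection so that the distribution's mean matches `target_mean`.
    """
    values = list(values)

    def mean_for(lam):
        weights = [math.exp(-lam * x) for x in values]
        z = sum(weights)
        return sum(w * x for w, x in zip(weights, values)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid   # a larger lam shifts weight to small x, lowering the mean
        else:
            hi = mid

    lam = 0.5 * (lo + hi)
    weights = [math.exp(-lam * x) for x in values]
    z = sum(weights)
    return [w / z for w in weights]

# Example: states 0..5 constrained to mean 1.5 give a decreasing Gibbs profile.
p = maxent_distribution(range(6), 1.5)
```

With the mean constrained below the uniform average (2.5), the multiplier is positive and the probabilities decay monotonically, as expected for a maximum-entropy (Gibbs) solution.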
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, from an alphabet of size M, transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on the observation length N.
Drugs obtained by biotechnology processing
Hugo de Almeida; Maria Helena Amaral; Paulo Lobão
2011-01-01
In recent years, the number of drugs of biotechnological origin available for many different diseases has increased exponentially, including different types of cancer, diabetes mellitus, infectious diseases (e.g. AIDS Virus / HIV) as well as cardiovascular, neurological, respiratory, and autoimmune diseases, among others. The pharmaceutical industry has used different technologies to obtain new and promising active ingredients, as exemplified by the fermentation technique, recombinant DNA tec...
Truncated States Obtained by Iteration
W.B.Cardoso; N.G.de Almeida
2008-01-01
We introduce the concept of truncated states obtained via iterative processes (TSI) and study their statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST.
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
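The conventional fixed-step MCC filter that the paper improves on can be sketched in a few lines: the update is an LMS step scaled by a Gaussian kernel of the error, so large (impulsive) errors barely perturb the weights. This is a minimal system-identification example, not the authors' variable-step algorithm; the kernel width, step size, and data are illustrative.

```python
import math
import random

random.seed(1)

def mcc_lms(x, d, taps, mu, sigma):
    """Fixed-step MCC adaptive filter: the exp(-e^2 / (2 sigma^2)) factor
    shrinks the update for large (impulsive) errors, giving the robustness
    that plain LMS lacks."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]              # regressor, newest sample first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        g = mu * math.exp(-e * e / (2 * sigma * sigma))
        w = [wi + g * e * ui for wi, ui in zip(w, u)]
    return w

# Unknown system, Gaussian input, and noise with occasional large impulses.
h = [0.8, -0.4, 0.2, 0.1]
x = [random.gauss(0, 1) for _ in range(5000)]
d = []
for n in range(len(x)):
    y = sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
    noise = random.gauss(0, 0.01)
    if random.random() < 0.01:                       # 1% impulsive outliers
        noise += random.gauss(0, 10)
    d.append(y + noise)

w = mcc_lms(x, d, taps=4, mu=0.05, sigma=1.0)
```

Despite the impulses, the estimated taps land close to the true system, because an impulse of magnitude ~10 multiplies the update by exp(−50) ≈ 0.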
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Vinod Kumar
2010-01-01
In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations, as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
EDITORIAL: Probing the nanoworld
Miles, Mervyn
2009-10-01
In nanotechnology, it is the unique properties arising from nanometre-scale structures that lead not only to their technological importance but also to a better understanding of the underlying science. Over the last twenty years, material properties at the nanoscale have been dominated by the properties of carbon in the form of the C60 molecule, single- and multi-wall carbon nanotubes, nanodiamonds, and recently graphene. During this period, research published in the journal Nanotechnology has revealed the amazing mechanical properties of such materials as well as their remarkable electronic properties with the promise of new devices. Furthermore, nanoparticles, nanotubes, nanorods, and nanowires from metals and dielectrics have been characterized for their electronic, mechanical, optical, chemical and catalytic properties. Scanning probe microscopy (SPM) has become the main characterization technique and atomic force microscopy (AFM) the most frequently used SPM. Over the past twenty years, SPM techniques that were previously experimental in nature have become routine. At the same time, investigations using AFM continue to yield impressive results that demonstrate the great potential of this powerful imaging tool, particularly in close to physiological conditions. In this special issue a collaboration of researchers in Europe report the use of AFM to provide high-resolution topographical images of individual carbon nanotubes immobilized on various biological membranes, including a nuclear membrane for the first time (Lamprecht C et al 2009 Nanotechnology 20 434001). Other SPM developments such as high-speed AFM appear to be making a transition from specialist laboratories to the mainstream, and perhaps the same may be said for non-contact AFM. Looking to the future, characterisation techniques involving SPM and spectroscopy, such as tip-enhanced Raman spectroscopy, could emerge as everyday methods. In all these advanced techniques, routinely available probes will
Probing the String Winding Sector
Aldazabal, Gerardo; Nuñez, Carmen
2016-01-01
We probe a slice of the massive winding sector of bosonic string theory from toroidal compactifications of Double Field Theory (DFT). This string subsector corresponds to states containing one left and one right moving oscillators. We perform a generalized Kaluza-Klein compactification of DFT on generic $2n$-dimensional toroidal constant backgrounds and show that, up to third order in fluctuations, the theory coincides with the corresponding effective theory of the bosonic string compactified on $n$-dimensional toroidal constant backgrounds, obtained from three-point amplitudes. The comparison between both theories is facilitated by noticing that generalized diffeomorphisms in DFT allow one to fix generalized harmonic gauge conditions that help in identifying the physical degrees of freedom. These conditions manifest as conformal anomaly cancellation requirements on the string theory side. The explicit expression for the gauge invariant effective action containing the physical massless sector (gravity+antisymmetr...
Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes
2012-03-01
Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
2010-01-01
... any flight safety system on the vehicle, including a description of operations and component location... vehicle (on-range, off-range, and down-range) and specific, unique facilities exposed to risk. Scenarios... in the license application. C. On-orbit risk analysis assessing risks posed by a launch vehicle to...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for the samples, the total length of the CCs used in the design of an SFCL can be determined.
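The per-unit-length limits quoted in the abstract translate directly into a minimum conductor length for a given voltage the limiter must absorb. A small sketch of that sizing step, assuming a hypothetical 380 V requirement (the per-cm figures are the abstract's 100 ms values; the system voltage is illustrative):

```python
# Length of coated conductor needed so that no element exceeds its maximum
# permissible voltage per cm at the voltage the SFCL must absorb.
limits_v_per_cm = {
    "SJTU CC": 0.72,
    "12 mm AMSC CC": 0.52,
    "4 mm AMSC CC": 1.2,
}

def required_length_m(system_voltage_v, v_per_cm):
    """Minimum conductor length in metres for a given absorbed voltage."""
    return system_voltage_v / v_per_cm / 100.0   # cm -> m

# Hypothetical 380 V requirement, not a figure from the paper.
lengths = {name: required_length_m(380.0, v) for name, v in limits_v_per_cm.items()}
```

The conductor with the highest permissible voltage per cm (the 4 mm AMSC CC) needs the shortest total length, which is why this limit drives the SFCL design.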
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
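A basic building block of such algorithms is testing whether a candidate rooted tree is consistent with a triplet ab|c: a and b must join strictly below the point where c joins them. A minimal sketch using nested tuples with distinct leaf labels (an illustration, not the paper's implementation):

```python
def parents(tree, par=None, _p=None):
    """Map every node of a nested-tuple tree (leaves are strings) to its parent."""
    if par is None:
        par = {}
    par[tree] = _p
    if isinstance(tree, tuple):
        for child in tree:
            parents(child, par, tree)
    return par

def lca(par, a, b):
    """Lowest common ancestor: collect a's ancestors, then climb from b."""
    ancestors = set()
    while a is not None:
        ancestors.add(id(a))
        a = par[a]
    while id(b) not in ancestors:
        b = par[b]
    return b

def consistent_with_triplet(tree, a, b, c):
    """True iff the rooted triplet ab|c holds in `tree`: the pair (a, b)
    joins strictly below the node where c attaches to them."""
    par = parents(tree)
    ab = lca(par, a, b)
    ac = lca(par, a, c)
    bc = lca(par, b, c)
    return ac is bc and ab is not ac

# Caterpillar tree (((a,b),c),d): ab|c and ab|d hold, ac|b does not.
t = ((("a", "b"), "c"), "d")
```

For example, `consistent_with_triplet(t, "a", "b", "c")` holds in the caterpillar above, while `consistent_with_triplet(t, "a", "c", "b")` does not.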
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
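The bound stated in the abstract (maximum seismic moment = modulus of rigidity × injected volume) can be turned into a magnitude estimate by combining it with the standard Hanks-Kanamori moment-magnitude relation. A small sketch; the rigidity value is a typical assumed crustal figure, not a number from the abstract:

```python
import math

def max_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on induced-earthquake size from McGarr's relation:
    maximum seismic moment = rigidity * injected volume, converted to
    moment magnitude via the Hanks-Kanamori formula (moment in N*m).
    The default rigidity of 3e10 Pa is an assumed typical crustal value."""
    m0 = rigidity_pa * injected_volume_m3            # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# One million cubic metres of injected fluid bounds the event near magnitude 5,
# consistent with the abstract's remark about wastewater-disposal earthquakes.
m = max_magnitude(1.0e6)
```

Because the bound is logarithmic in volume, a tenfold increase in injected fluid raises the maximum magnitude by only about 0.67 units.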
Calibration Fixture For Anemometer Probes
Lewis, Charles R.; Nagel, Robert T.
1993-01-01
Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With fixture, probe is oriented at a number of angles throughout its design range, and readings are calibrated as a function of orientation in airflow. Calibration repeatable and verifiable.
Truncated states obtained by iteration
Cardoso, W B
2007-01-01
Quantum states of the electromagnetic field are of considerable importance, finding potential application in various areas of physics, as diverse as solid state physics, quantum communication and cosmology. In this paper we introduce the concept of truncated states obtained via iterative processes (TSI) and study its statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST. A general method to engineer TSI in the running-wave domain is employed, which includes the errors due to the nonidealities of detectors and photocounts.
Raman spectroscopy system with hollow fiber probes
Liu, Bing-hong; Shi, Yi-Wei
2012-11-01
A remote Raman spectroscopy system was realized using flexible hollow optical fibers as laser-emission and signal-collection probes. A silver-coated hollow fiber has low-loss and flat transmission characteristics in the visible wavelength region. Compared with conventional silica optical fiber, little background fluorescence noise was observed with the hollow fiber as the probe, which is a great advantage for detection in the low-frequency Raman shift region. A complex filtering and focusing system was thus unnecessary. The Raman spectra of CaCO3 and PE were obtained with the system, and a reasonable signal-to-noise ratio was attained without any lens. Experiments with probes made of conventional silica optical fibers were also conducted for comparison. Furthermore, a silver-coated hollow glass waveguide was used as a sample cell to detect liquid-phase samples. We used a 6 cm-long hollow fiber as the liquid cell, butt-coupled with the emitting and collecting fibers. Experimental results show that the system achieved a high signal-to-noise ratio because of the longer optical path between the sample and the laser light. We also give an elementary theoretical analysis of the hollow fiber sample cell. The parameters of the fiber which would affect the system were discussed. The hollow fiber has been shown to be a potential fiber probe or sample cell for Raman spectroscopy.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Probing properties of cold radiofrequency plasma with polymer probe
Bormashenko, E.; Chaniel, G.; Multanen, V.
2015-01-01
The probe intended for the characterization of cold plasma is introduced. The probe allows the estimation of Debye length of cold plasma. The probe is based on the pronounced modification of surface properties (wettability) of polymer films by cold plasmas. The probe was tested with the cold radiofrequency inductive air plasma discharge. The Debye length and the concentration of charge carriers were estimated for various gas pressures. The reported results coincide reasonably with the corresponding values established by other methods. The probe makes possible measurement of characteristics of cold plasmas in closed chambers.
Probing Properties of Cold Radiofrequency Plasma with Polymer Probe
Bormashenko, Edward; Multanen, Victor
2014-01-01
The probe intended for the characterization of cold plasma is introduced. The probe allows estimation of the Debye length of the cold plasma. The probe is based on the pronounced modification of surface properties (wettability) of polymer films by cold plasmas. The probe was tested with the cold radiofrequency inductive air plasma discharge. The Debye length and the concentration of charge carriers were estimated for various gas pressures. The reported results coincide reasonably with the corresponding values established by other methods. The probe makes possible measurement of characteristics of cold plasmas in closed chambers.
2008-01-01
This image taken by the Surface Stereo Imager on Sol 49, or the 49th Martian day of the mission (July 14, 2008), shows thermal and electrical conductivity probe on NASA's Phoenix Mars Lander's Robotic Arm. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Alfred Cerezo; Peter H. Clifton; Mark J. Galtrey; Humphreys, Colin J.; Kelly, Thomas. F.; David J. Larson; Sergio Lozano-Perez; Marquis, Emmanuelle A.; Oliver, Rachel A.; Gang Sha; Keith Thompson; Mathijs Zandbergen; Roger L. Alvis
2007-01-01
This review aims to describe and illustrate the advances in the application of atom probe tomography that have been made possible by recent developments, particularly in specimen preparation techniques (using dual-beam focused-ion beam instruments) but also of the more routine use of laser pulsing. The combination of these two developments now permits atomic-scale investigation of site-specific regions within engineering alloys (e.g. at grain boundaries and in the vicinity of cracks) and also...
Suppression of reflections by directive probes in spherical near-field measurements
Hansen, Per Christian; Larsen, Flemming H.
1984-01-01
The influence of probe correction in spherical near-field measurements on signals from outside the test volume is investigated theoretically and experimentally. It is found that the suppression of reflections obtained by a directive probe is not disturbed by the probe correction. A geometric rela...
Einstein Inflationary Probe (EIP)
Hinshaw, Gary
2004-01-01
I will discuss plans to develop a concept for the Einstein Inflation Probe: a mission to detect gravity waves from inflation via the unique signature they impart to the cosmic microwave background (CMB) polarization. A sensitive CMB polarization satellite may be the only way to probe physics at the grand-unified theory (GUT) scale, exceeding by 12 orders of magnitude the energies studied at the Large Hadron Collider. A detection of gravity waves would represent a remarkable confirmation of the inflationary paradigm and set the energy scale at which inflation occurred when the universe was a fraction of a second old. Even a strong upper limit to the gravity wave amplitude would be significant, ruling out many common models of inflation, and pointing to inflation occurring at much lower energy, if at all. Measuring gravity waves via the CMB polarization will be challenging. We will undertake a comprehensive study to identify the critical scientific requirements for the mission and their derived instrumental performance requirements. At the core of the study will be an assessment of what is scientifically and experimentally optimal within the scope and purpose of the Einstein Inflation Probe.
Yanan Yue
2012-03-01
Novel nanoscale devices have raised the demand for nanoscale thermal characterization, which is critical for evaluating device performance and durability. Achieving nanoscale spatial resolution and high accuracy in temperature measurement is very challenging due to the limitation of measurement pathways. In this review, we discuss four methodologies currently developed in nanoscale surface imaging and temperature measurement. To overcome the restrictions of conventional methods, the scanning thermal microscopy technique is widely used. From the perspective of the measurement target, the optical feature size method can be applied by using either Raman or fluorescence thermometry. The near-field optical method, which measures nanoscale temperature by focusing the optical field to a nano-sized region, provides a non-contact and non-destructive way for nanoscale thermal probing. Although resistance thermometry based on nano-sized thermal sensors is possible for nanoscale thermal probing, significant effort is still needed to reduce the size of the current sensors by using advanced fabrication techniques. At the same time, the development of nanoscale imaging techniques, such as fluorescence imaging, provides a great potential solution to the nanoscale thermal probing problem.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
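The Wiener index mentioned in the abstract is the sum of shortest-path distances over all unordered vertex pairs. A minimal sketch computing it by breadth-first search over an adjacency list (the example trees are illustrative):

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs,
    computed by one BFS per source vertex."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2   # each pair was counted from both endpoints

# Among trees on four vertices, the path P4 (index 10) beats the star K1,3
# (index 9) -- maximizing the index over a degree sequence is the hard part.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

The n-fold BFS costs O(n·m) on a tree; the paper's difficulty lies not in evaluating the index but in searching over trees with a prescribed degree sequence.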
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
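A minimal sketch of the envelope-curve idea described above: for a fixed power-law exponent, the envelope coefficient is the smallest value that keeps the curve Q = c·A^b on or above every observed peak. The exponent and data below are hypothetical illustrations, not Crippen and Bue's regional values:

```python
def envelope_coefficient(areas_km2, peaks_m3s, b=0.6):
    """Smallest c such that Q = c * A**b lies on or above every observed
    peak flow. The exponent b is a hypothetical illustrative choice."""
    return max(q / a ** b for a, q in zip(areas_km2, peaks_m3s))

# Hypothetical regional observations: (drainage area km^2, peak flow m^3/s).
areas = [10.0, 150.0, 2300.0]
peaks = [400.0, 2100.0, 9800.0]

c = envelope_coefficient(areas, peaks)
envelope = [c * a ** 0.6 for a in areas]   # bounds every observation
```

By construction the curve touches the most extreme observation and lies above the rest, which is exactly the "reasonable limit" role the envelope curves play in the study.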
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
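The two off-line algorithms named above can be sketched in a few lines: both apply the classical First-Fit rule to a sorted item list (integer sizes are used here to avoid floating-point issues; the data are illustrative):

```python
def first_fit(items, capacity):
    """Classic First-Fit: each item goes into the first bin with room,
    opening a new bin only when none fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def first_fit_increasing(items, capacity):
    # Serving small items first tends to spread items across bins --
    # wasteful for classical packing, which is exactly the point when
    # the goal is to maximize the number of bins used.
    return first_fit(sorted(items), capacity)

def first_fit_decreasing(items, capacity):
    return first_fit(sorted(items, reverse=True), capacity)

items, capacity = [6, 5, 4, 3, 2], 10
```

On this instance First-Fit-Increasing opens three bins while First-Fit-Decreasing packs the same items into two, illustrating why the increasing order is the natural heuristic for the maximum resource variant.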
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected...... in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Effects of bruxism on the maximum bite force
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system, characterized by clenching or grinding of the teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups underwent measurement of the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
Lin, Tsung-I.; Jovanovic, Misa V.; Dowben, Robert M.
1989-06-01
Absorption and fluorescence spectroscopic studies are reported here for nine new fluorescent probes recently synthesized in our laboratories: four pyrene derivatives with substituents of (i) 1,3-diacetoxy-6,8-dichlorosulfonyl, (ii) 1,3-dihydroxy-6,8-disodiumsulfonate, (iii) 1,3-disodiumsulfonate, and (iv) 1-ethoxy-3,6,8-trisodiumsulfonate groups, and five [7-julolidino] coumarin derivatives with substituents of (v) 3-carboxylate-4-methyl, (vi) 3-methylcarboxylate, (vii) 3-acetate-4-methyl, (viii) 3-propionate-4-methyl, and (ix) 3-sulfonate-4-methyl groups. Pyrene compounds i and ii and coumarin compounds v and vi exhibit interesting absorbance and fluorescence properties: their absorption maxima are red shifted relative to the parent compound into the blue-green region, and the bandwidth broadens considerably. All four blue-absorbing dyes fluoresce intensely in the green region, and the two pyrene compounds emit at such long wavelengths without formation of excimers. The fluorescence properties of these compounds are quite environment-sensitive: considerable spectral shifts and fluorescence intensity changes have been observed in the pH range from 3 to 10 and in a wide variety of polar and hydrophobic solvents with vastly different dielectric constants. The high extinction and fluorescence quantum yield of these probes make them ideal fluorescent labeling reagents for proteins, antibodies, nucleic acids, and cellular organelles. The pH- and hydrophobicity-dependent fluorescence changes can be utilized as optical pH and/or hydrophobicity indicators for mapping environmental differences in various cellular components in a single cell. Since all nine probes absorb in the UV but emit at different wavelengths in the visible, these two groups of compounds offer the advantage of utilizing a single monochromatic light source (e.g., a nitrogen laser) to achieve multi-wavelength detection for flow cytometry applications. As a first step to explore potential application in
A virtual optical probe based on evanescent wave interference
孙利群; 王佳; 洪涛; 田芊
2002-01-01
A virtual probe is a novel immaterial tip based on near-field evanescent wave interference and small-aperture diffraction, which can be used in near-field high-density optical data storage, nano-lithography, near-field optical imaging and spectral detection, near-field optical manipulation of nano-scale specimens, etc. In this paper, the formation mechanism of the virtual probe is analysed, the evanescent wave interference is discussed theoretically, and the sidelobe suppression by the small aperture is simulated by the three-dimensional finite-difference time-domain method. The simulation results for the optical distribution of the near-field virtual probe reveal that the transmission efficiency of the virtual probe is 10²–10⁴ times higher than that of the nano-aperture metal-coated fibre probe widely used in near-field optical systems. The full width at half maximum of the peak, in other words the size of the virtual probe, is constant whatever the distance within a certain range, so that the critical nano-separation control in the near-field system can be relaxed. We give an example of the application of the virtual probe in ultrahigh-density optical data storage.
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
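For context, maximum entropy regularization of a first-kind equation Ax = y with noisy data is commonly posed as the minimization below (a standard textbook formulation with prior estimate x* and regularization parameter α > 0; the paper's precise setting may differ):

```latex
x_{\alpha}^{\delta} = \operatorname*{arg\,min}_{x \ge 0}\;
  \|Ax - y^{\delta}\|^{2}
  + \alpha \int x(t)\,\ln\frac{x(t)}{x^{*}(t)}\,\mathrm{d}t
```

An a posteriori choice of α, as mentioned in the abstract, then selects the regularization parameter from the data rather than fixing it in advance.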
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and the selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
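As an illustration of the load/duty-ratio condition described above, the sketch below computes the duty ratio that matches an ideal buck-boost converter in continuous conduction to a PV array's MPP resistance. This is a hypothetical, lossless textbook case, not the paper's exact analysis, and the module values are made up:

```python
# Hypothetical illustration: duty ratio matching a lossless buck-boost
# converter (CCM) to the MPP of a PV array. The converter's input resistance
# is R_in = R_load * ((1 - D) / D)**2; setting it equal to R_mpp = V_mpp/I_mpp
# gives the required duty ratio D.
import math

def mpp_duty_ratio(v_mpp, i_mpp, r_load):
    """Duty ratio D such that the converter input resistance equals R_mpp."""
    r_mpp = v_mpp / i_mpp
    return 1.0 / (1.0 + math.sqrt(r_mpp / r_load))

# Example: a module with V_mpp = 17.2 V, I_mpp = 3.5 A (R_mpp ~ 4.9 ohm)
d = mpp_duty_ratio(17.2, 3.5, r_load=10.0)
r_in = 10.0 * ((1 - d) / d) ** 2    # input resistance seen by the array
print(round(d, 3), round(r_in, 2))  # R_in recovers R_mpp = V_mpp / I_mpp
```

Loads far from this optimum pull R_in away from R_mpp for any admissible D, which is the mechanism behind the abstract's remark that certain load values drive the operating point off the true MPP.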
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\ \text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by \begin{equation} v_{h}\sim\frac{T_{BBN}^{2}}{M_{pl}\,y_{e}^{5}}, \end{equation} where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
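The way the quoted reliability grows with the number of trials and days is consistent with the Spearman-Brown prophecy formula for the mean of parallel measurements. Assuming that is the aggregation underlying the coefficients (the abstract does not say so explicitly), a quick check reproduces two of the reported figures:

```python
# Spearman-Brown prophecy formula: reliability of the mean of k parallel
# measurements, given single-measurement reliability r. Used here as a
# plausibility check against the abstract's figures (an assumption, since
# the abstract does not state which formula was used).
def spearman_brown(r, k):
    return k * r / (1 + (k - 1) * r)

print(round(spearman_brown(0.939, 5), 3))  # five trials -> ~0.987
print(round(spearman_brown(0.836, 2), 3))  # two days    -> ~0.911
```

Both values match the abstract (0.987 for five trials, 0.911 for two days), which supports reading the gains simply as averaging over more parallel measurements.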
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
Drugs obtained by biotechnology processing
Hugo Almeida
2011-06-01
In recent years, the number of drugs of biotechnological origin available for many different diseases has increased exponentially, including different types of cancer, diabetes mellitus, and infectious diseases (e.g., the AIDS virus/HIV), as well as cardiovascular, neurological, respiratory, and autoimmune diseases, among others. The pharmaceutical industry has used different technologies to obtain new and promising active ingredients, as exemplified by the fermentation technique, the recombinant DNA technique, and the hybridoma technique. The expiry of the patents of the first drugs of biotechnological origin and the consequent emergence of biosimilar products have posed various questions to health authorities worldwide regarding the definition, framework, and requirements for authorization to market such products.
Development of Mackintosh Probe Extractor
Rahman, Noor Khazanah A.; Kaamin, Masiri; Suwandi, Amir Khan; Sahat, Suhaila; Jahaya Kesot, Mohd
2016-11-01
Dynamic probing is a continuous soil investigation technique and one of the simplest soil penetration tests. It basically consists of repeatedly driving a metal-tipped probe into the ground using a drop weight of fixed mass and travel. Testing is carried out continuously from ground level to the final penetration depth. Once the soil investigation work is done, it is difficult to pull the probe rod out of the ground, because the strong grip of the soil structure against the probe cone prevents the rod from being withdrawn. Thus, a tool named the Extracting Probe was created to assist in retracting the probe rod from the ground. The Extracting Probe can also reduce the time needed to extract the probe rod compared with the conventional method. At the same time, it can reduce manpower cost, because only one worker is needed to handle this tool, whereas the conventional method uses two or more workers. From the experiments that have been done, we found that the time difference between the conventional tools and the Extracting Probe is significant, with an average difference of 155 minutes. In addition, the Extracting Probe can reduce manpower usage and the labour cost of operating the tool. All these advantages give this tool the potential to be marketed.
Koelmans, Wabe W; Abelmann, L
2015-01-01
Probe-based data storage has attracted many researchers from academia and industry, resulting in unprecedented high data-density demonstrations. This topical review gives a comprehensive overview of the main contributions that led to the major accomplishments in probe-based data storage. The most investigated technologies are reviewed: topographic, phase-change, magnetic, ferroelectric, and atomic and molecular storage. The positioning of probes and recording media, the cantilever arrays, and the parallel readout of these arrays are also discussed. This overview serves two purposes. First, it provides an orientation for new researchers entering the field of probe storage, as probe storage seems to be the only way to achieve data storage at atomic densities. Second, there is an enormous wealth of invaluable findings that can also be applied to many other fields of nanoscale research, such as probe-based nanolithography, 3D nanopatterning, solid-state memory technologies, and ultrafast probe microscopy.
Maximum Power Point Tracking Based on Sliding Mode Control
Nimrod Vázquez
2015-01-01
Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering just the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The results obtained gave a good dynamic response, unlike traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.
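A common choice of sliding surface for MPPT is S = dP/dV = I + V·dI/dV, driven toward zero by switching on the sign of S. The sketch below applies that idea to a toy PV curve; the paper's surface additionally involves temperature, which this simplified, hypothetical model omits:

```python
# Minimal sketch of sliding-mode-style MPPT on a toy PV curve, using the
# classic surface S = dP/dV = I + V*dI/dV. Illustrative parameters only;
# the paper's surface also includes temperature (not modeled here).
import math

def pv_current(v, i_sc=5.0, v_oc=22.0, a=0.7):
    """Toy single-exponential PV I-V curve (made-up module parameters)."""
    return max(i_sc * (1.0 - math.exp((v - v_oc) / a)), 0.0)

def sliding_surface(v, dv=1e-3):
    """S = dP/dV = I + V*dI/dV, with dI/dV from a central difference."""
    i = pv_current(v)
    di_dv = (pv_current(v + dv) - pv_current(v - dv)) / (2 * dv)
    return i + v * di_dv            # S > 0: left of the MPP, raise V

v = 10.0
for _ in range(200):                # crude fixed-step reaching phase
    v += 0.05 if sliding_surface(v) > 0 else -0.05
print(round(v, 2))                  # settles near the MPP voltage (~19.6 V)
```

Once the trajectory reaches the surface S = 0 it chatters around the MPP, which is the characteristic sliding-mode behavior; a real controller acts on the converter duty cycle rather than directly on the voltage.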
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations are thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
Optical and terahertz spectra analysis by the maximum entropy method.
Vartiainen, Erik M; Peiponen, Kai-Erik
2013-06-01
Phase retrieval is one of the classical problems in various fields of physics, including X-ray crystallography, astronomy, and spectroscopy. It arises when only an amplitude measurement of the electric field can be made, while both the amplitude and the phase of the field are needed to obtain the desired material properties. In optical and terahertz spectroscopies in particular, phase retrieval is a one-dimensional problem, which is considered unsolvable in general. Nevertheless, an approach utilizing the maximum entropy principle has proven to be a feasible tool in various applications of optical (both linear and nonlinear) and terahertz spectroscopies, where the one-dimensional phase retrieval problem arises. In this review, we focus on phase retrieval using the maximum entropy method in various spectroscopic applications. We review the theory behind the method and illustrate through examples why and how the method works, as well as discuss its limitations.
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Efficiency at maximum power of thermally coupled heat engines.
Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph
2012-04-01
We study the efficiency at maximum power of two coupled heat engines, using thermoelectric generators (TEGs) as engines. Assuming that the heat and electric charge fluxes in the TEGs are strongly coupled, we simulate numerically the dependence of the behavior of the global system on the electrical load resistance of each generator in order to obtain the working condition that permits maximization of the output power. It turns out that this condition is not unique. We derive a simple analytic expression giving the relation between the electrical load resistance of each generator permitting output power maximization. We then focus on the efficiency at maximum power (EMP) of the whole system to demonstrate that the Curzon-Ahlborn efficiency may not always be recovered: The EMP varies with the specific working conditions of each generator but remains in the range predicted by irreversible thermodynamics theory. We discuss our results in light of nonideal Carnot engine behavior.
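For reference, the Curzon-Ahlborn efficiency at maximum power that the abstract compares against is η_CA = 1 − √(Tc/Th), versus the Carnot limit η_C = 1 − Tc/Th. A quick numerical comparison with arbitrary example temperatures (standard formulas, not results from the paper):

```python
# Curzon-Ahlborn efficiency at maximum power vs. the Carnot limit, the
# benchmark the abstract compares against. Temperatures are arbitrary.
import math

def eta_carnot(t_hot, t_cold):
    return 1 - t_cold / t_hot

def eta_curzon_ahlborn(t_hot, t_cold):
    return 1 - math.sqrt(t_cold / t_hot)

th, tc = 500.0, 300.0                        # reservoir temperatures in K
print(round(eta_carnot(th, tc), 3))          # 0.4
print(round(eta_curzon_ahlborn(th, tc), 3))  # 0.225
```

The paper's point is that for two thermally coupled generators the efficiency at maximum power need not equal η_CA, though it stays within the range predicted by irreversible thermodynamics.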
Optimal Control of Polymer Flooding Based on Maximum Principle
Yang Lei
2012-01-01
Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and the inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
Probing the Higgs vacuum with general relativity
Mannheim, Philip D.; Kazanas, Demosthenes
1991-01-01
It is shown that the structure of the Higgs vacuum can be revealed in gravitational experiments which probe the Schwarzschild geometry to only one order in MG/r beyond that needed for the classical tests of general relativity. The possibility that deviations from the conventional geometry are at least theoretically conceivable is explored. The deviations obtained provide a diagnostic test for searching for the existence of macroscopic scalar fields and open up the possibility of further exploring the Higgs mechanism.
On the 2m-variable symmetric Boolean functions with maximum algebraic immunity
QU LongJiang; LI Chao
2008-01-01
The properties of the 2m-variable symmetric Boolean functions with maximum algebraic immunity are studied in this paper. Their value vectors, algebraic normal forms, algebraic degrees, and weights are all obtained. Finally, some necessary conditions for a symmetric Boolean function on an even number of variables to have maximum algebraic immunity are introduced.
Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2017-04-01
Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under the constraint on generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to generalized Pareto family.
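For reference, the Tsallis entropy of a density f that is maximized under the Gini-type constraints is usually written as (standard definition; the paper's normalization may differ):

```latex
S_{q}(f) = \frac{1 - \int f(x)^{q}\,\mathrm{d}x}{q - 1},
\qquad q > 0,\ q \neq 1,
```

which recovers the Shannon entropy $-\int f(x)\ln f(x)\,\mathrm{d}x$ in the limit $q \to 1$; the generalized Pareto maximizers cited in the abstract arise from this functional under the income-inequality constraints.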
Monte Carlo modeling of ultrasound probes for image guided radiotherapy
Bazalova-Carter, Magdalena, E-mail: bazalova@uvic.ca [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 2Y2 (Canada); Schlosser, Jeffrey [SoniTrack Systems, Inc., Palo Alto, California 94304 (United States); Chen, Josephine [Department of Radiation Oncology, UCSF, San Francisco, California 94143 (United States); Hristov, Dimitre [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)
2015-10-15
Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm³. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm² beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm² beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R² > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm³, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest, 6%, dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The
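The 3%/3 mm gamma criterion quoted in the results can be sketched in one dimension as follows. This is a simplified, hypothetical illustration (global normalization, no sub-grid interpolation), not the analysis tool used in the study:

```python
# Illustrative 1D gamma analysis with a global 3%/3 mm criterion, the
# pass/fail metric quoted in the abstract. Clinical tools work on 2D/3D
# dose grids with interpolation; this sketch keeps the bare algorithm.
import math

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of evaluated points with gamma index <= 1."""
    d_max = max(dose_ref)                       # global normalization
    n_pass = 0
    for j, de in enumerate(dose_eval):
        gamma = min(
            math.hypot((dr - de) / (dose_tol * d_max),
                       (i - j) * spacing_mm / dist_tol_mm)
            for i, dr in enumerate(dose_ref)
        )
        n_pass += gamma <= 1.0
    return n_pass / len(dose_eval)

ref = [1.0, 0.98, 0.95, 0.90, 0.80, 0.60, 0.30, 0.10]
meas = [1.02 * d for d in ref]                  # uniform +2% dose difference
print(gamma_pass_rate(ref, meas, spacing_mm=1.0))  # 1.0, within 3%/3 mm
```

Each evaluated point passes if some nearby reference point is simultaneously close in dose (3% of the maximum) and distance (3 mm); the pass rate is the quantity reported as "over 96%" in the abstract.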
Studies for obtaining a small-hole, rapid EDM drilling machine
Mihai Şimon
2011-12-01
This paper studies the development of an experimental rapid drilling machine for small holes based on the EDM process. Parameters such as peak current, pulse frequency, duty factor, and electrode rotation speed were studied for the best machining characteristics. An electrolytic copper rod of 0.8 mm diameter was selected as the tool electrode. The experiments generate output responses such as the maximum material removal rate (MRR) and its dependence on the peak current, duty factor, and electrode rotation parameters. Finally, the parameters were optimized for maximum MRR with the desired surface roughness value and used for sizing the components of a better small-hole rapid drilling machine.
PROcess Based Diagnostics PROBE
Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.
2013-01-01
Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes, implying that if a mismatch is found, it should be much easier to identify and address the specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.
WAYS OF OBTAINING FINANCING BY TOUR OPERATORS
CARLAN ADRIANA
2015-12-01
Romania is a country with high touristic potential that is not exploited to the maximum. In order to reach a high quality level of tourism, permanent development and modernization are needed, as well as the establishment of new businesses conducting activities other than those already taking place in our country. The ways of obtaining funds are multiple, depending on individual needs. To develop tourism activities it is necessary to secure financing, which can come from various sources: self-financing, loans from banks or third parties, and grants offered by the European Union. There are many programs designed to support the development of tourism, such as the ROP, which allows applicants to access grants in order to implement projects for the establishment and development of activity in the touristic field. The purpose of this article is to highlight funding opportunities for tour operators, to assist them in choosing the appropriate form of financing for their current activity or an activity they want to implement in the future, and to describe how to obtain the necessary funds from various sources.
Alfred Cerezo
2007-12-01
Full Text Available This review aims to describe and illustrate the advances in the application of atom probe tomography that have been made possible by recent developments, particularly in specimen preparation techniques (using dual-beam focused-ion-beam instruments) but also the more routine use of laser pulsing. The combination of these two developments now permits atomic-scale investigation of site-specific regions within engineering alloys (e.g. at grain boundaries and in the vicinity of cracks) and also the atomic-level characterization of interfaces in multilayers, oxide films, and semiconductor materials and devices.
Chou, Aaron S.; /Fermilab
2009-10-01
Experimental searches for axions or axion-like particles rely on semiclassical phenomena resulting from the postulated coupling of the axion to two photons. Sensitive probes of the extremely small coupling constant can be made by exploiting familiar, coherent electromagnetic laboratory techniques, including resonant enhancement of transitions using microwave and optical cavities, Bragg scattering, and coherent photon-axion oscillations. The axion beam may either be astrophysical in origin as in the case of dark matter axion searches and solar axion searches, or created in the laboratory from laser interactions with magnetic fields. This note is meant to be a sampling of recent experimental results.
Kelly, Thomas F.; Larson, David J.
2012-08-01
In the world of tomographic imaging, atom probe tomography (APT) occupies the high-spatial-resolution end of the spectrum. It is highly complementary to electron tomography and is applicable to a wide range of materials. The current state of APT is reviewed. Emphasis is placed on applications and data analysis as they apply to many fields of research and development including metals, semiconductors, ceramics, and organic materials. We also provide a brief review of the history and the instrumentation associated with APT and an assessment of the existing challenges in the field.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
Full Text Available The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
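The abstract above notes that the block-coordinate-descent algorithm can be interpreted as recursive l1-norm penalized regression. The l1-penalized least-squares subproblem at its core can be illustrated with a minimal coordinate-descent lasso solver; this is a generic textbook sketch with illustrative data and variable names, not the authors' code:

```python
def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink z toward zero by t.
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_beta 0.5*||y - X beta||^2 + lam*||beta||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# y depends only on the first feature (slope ~2); the second is noise,
# so a sufficiently large penalty should zero out beta[1].
X = [[1.0, 0.3], [2.0, -0.2], [3.0, 0.1], [4.0, -0.4]]
y = [2.0, 4.1, 5.9, 8.0]
beta = lasso_cd(X, y, lam=1.0)
```

In the Gaussian graphical model setting, the same soft-thresholding update is applied recursively to rows/columns of the precision matrix.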
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
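One member of this class, the Ornstein-Uhlenbeck position process, can be simulated with a minimal Euler-Maruyama scheme. The parameter names below (home-range crossing timescale `tau`, noise intensity `sigma`) are illustrative, not the paper's notation:

```python
import random

def simulate_ou(x0, tau, sigma, dt, n_steps, seed=1):
    """Euler-Maruyama simulation of dx = -(x/tau) dt + sigma dW,
    an Ornstein-Uhlenbeck position process in one dimension."""
    random.seed(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        x += -(x / tau) * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

# The stationary variance of this process is sigma**2 * tau / 2 = 0.5 here;
# the fluctuation-dissipation relation is what ties the noise amplitude to
# the relaxation rate so that the stationary (maximum-entropy Gaussian)
# distribution has the prescribed variance.
path = simulate_ou(x0=0.0, tau=1.0, sigma=1.0, dt=0.01, n_steps=20000)
```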
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
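A standard diagnostic for the power-law tail discussed above is the Hill estimator of the tail index. The sketch below is that generic estimator applied to a synthetic Pareto sample, shown only to illustrate the tail-exponent idea; it is not the paper's maximum entropy test:

```python
import math, random

def hill_estimator(data, k):
    """Hill estimator of the Pareto tail index alpha from the k
    largest observations (order statistics above the (k+1)-th largest)."""
    xs = sorted(data, reverse=True)[: k + 1]
    return k / sum(math.log(xs[i] / xs[k]) for i in range(k))

random.seed(0)
# Pure Pareto(alpha=2, xmin=1) sample via inverse-CDF sampling:
# X = (1 - U)^(-1/alpha) with U uniform on [0, 1).
alpha = 2.0
sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(20000)]
alpha_hat = hill_estimator(sample, k=500)
```

For a lognormal-body/Pareto-tail mixture, the estimate stabilizes only when `k` is restricted to the upper tail, which is exactly the regime the test above is designed to detect.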
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we find the approximation ratios of two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
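The packing mechanics of the First-Fit-Decreasing rule analyzed above are the classic ones; only the objective (maximizing, rather than minimizing, the number of bins used) differs in the maximum resource variant. A pure-Python sketch with illustrative item sizes:

```python
def first_fit_decreasing(items, capacity=1.0):
    """Classic First-Fit-Decreasing: take items in decreasing size
    order and place each into the first open bin with enough room,
    opening a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:
                b.append(item)
                break
        else:  # no existing bin had room
            bins.append([item])
    return bins

# Total size 3.1 with unit capacity, so at least 4 bins are needed;
# FFD achieves that bound on this instance.
bins = first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1])
```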
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
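The Estrada index defined above equals the trace of the matrix exponential of the adjacency matrix, since trace(A^k) = sum of the k-th powers of the eigenvalues. A self-contained sketch computing it via the Taylor series of exp(A), avoiding any eigenvalue routine:

```python
import math

def estrada_index(adj, terms=30):
    """EE(G) = sum_i exp(lambda_i) = trace(exp(A)), computed from the
    Taylor series exp(A) = sum_k A^k / k! truncated at `terms` terms."""
    n = len(adj)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0
    ee = float(n)  # trace of the identity (k = 0 term)
    for k in range(1, terms):
        power = matmul(power, adj)
        ee += sum(power[i][i] for i in range(n)) / math.factorial(k)
    return ee

# Triangle C3 has eigenvalues 2, -1, -1, so EE = e^2 + 2/e ~ 8.1248.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
ee = estrada_index(triangle)
```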
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
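The maximum-power-transfer condition underlying the abstract above (converter input resistance matched to the generator's internal resistance) can be illustrated with the textbook perturb-and-observe MPPT loop on a toy ideal-boost model. This is a generic sketch with illustrative element values, not the paper's boost-cascaded-with-buck topology or its nonideal-switch analysis:

```python
def output_power(duty, v_oc=5.0, r_int=1.0, r_load=10.0):
    """Toy thermoelectric generator driving an ideal boost converter:
    the converter makes the load look like r_load*(1-duty)^2 at its
    input, so delivered power peaks when that matches r_int."""
    r_in = r_load * (1.0 - duty) ** 2
    i = v_oc / (r_int + r_in)
    return i * i * r_in  # power delivered past the internal resistance

def perturb_and_observe(duty=0.5, step=0.005, n_iter=400):
    """Generic perturb-and-observe MPPT: keep stepping the duty ratio
    in the direction that last increased the measured power, and
    reverse direction whenever power drops."""
    p_prev = output_power(duty)
    direction = 1.0
    for _ in range(n_iter):
        duty = min(max(duty + direction * step, 0.0), 0.95)
        p = output_power(duty)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return duty, p_prev

# Matching r_in = r_int = 1 ohm gives duty ~ 0.684 and P_max = V^2/4R = 6.25 W;
# the controller settles into a small limit cycle around that point.
duty, power = perturb_and_observe()
```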
Domoshnitsky Alexander
2009-01-01
Full Text Available We obtain maximum principles for a first-order neutral functional differential equation whose coefficients are linear continuous operators, certain of which are positive, acting on the space of continuous functions and the space of essentially bounded functions. New tests on positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.
Maximum energy output of a DFIG wind turbine using an improved MPPT-curve method
Dinh-Chung Phan; Shigeru Yamamoto
2015-01-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking that is obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based o...
Efficient oligonucleotide probe selection for pan-genomic tiling arrays
Zhang Wei
2009-09-01
Full Text Available Background: Array comparative genomic hybridization is a fast and cost-effective method for detecting, genotyping, and comparing the genomic sequence of unknown bacterial isolates. This method, as with all microarray applications, requires adequate coverage of probes targeting the regions of interest. An unbiased tiling of probes across the entire length of the genome is the most flexible design approach. However, such a whole-genome tiling requires that the genome sequence is known in advance. For the accurate analysis of uncharacterized bacteria, an array must query a fully representative set of sequences from the species' pan-genome. Prior microarrays have included only a single strain per array or the conserved sequences of gene families. These arrays omit potentially important genes and sequence variants from the pan-genome. Results: This paper presents a new probe selection algorithm (PanArray) that can tile multiple whole genomes using a minimal number of probes. Unlike arrays built on clustered gene families, PanArray uses an unbiased, probe-centric approach that does not rely on annotations, gene clustering, or multi-alignments. Instead, probes are evenly tiled across all sequences of the pan-genome at a consistent level of coverage. To minimize the required number of probes, probes conserved across multiple strains in the pan-genome are selected first, and additional probes are used only where necessary to span polymorphic regions of the genome. The viability of the algorithm is demonstrated by array designs for seven different bacterial pan-genomes and, in particular, the design of a 385,000-probe array that fully tiles the genomes of 20 different Listeria monocytogenes strains with overlapping probes at greater than twofold coverage. Conclusion: PanArray is an oligonucleotide probe selection algorithm for tiling multiple genome sequences using a minimal number of probes. It is capable of fully tiling all genomes of a species on
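The central economy described above, sharing probes across conserved regions so that only polymorphic regions add new probes, can be sketched with a much-simplified deduplicating tiler. This is an illustration in the spirit of PanArray, not the published algorithm (no overlap optimization or coverage guarantees):

```python
def tile_pan_genome(genomes, probe_len=5, step=2):
    """Slide a fixed-length window across every genome at a fixed
    step and keep each distinct probe sequence only once, so regions
    conserved across strains are covered by shared probes and only
    polymorphic regions contribute additional ones."""
    probes = []
    seen = set()
    for g in genomes:
        for i in range(0, len(g) - probe_len + 1, step):
            p = g[i:i + probe_len]
            if p not in seen:
                seen.add(p)
                probes.append(p)
    return probes

# Two "strains" identical except for one substitution: the shared
# backbone contributes its probes once; the SNP region adds a few more,
# so 12 candidate windows collapse to 5 distinct probes.
strain_a = "ACGTACGTACGTACGT"
strain_b = "ACGTACGTTCGTACGT"
probes = tile_pan_genome([strain_a, strain_b])
```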
Shi Jingtao; Wu Zhen
2011-01-01
A stochastic maximum principle for the risk-sensitive optimal control problem of jump diffusion processes with an exponential-of-integral cost functional is derived, assuming that the value function is smooth, where the diffusion and jump terms may both depend on the control. The form of the maximum principle is similar to its risk-neutral counterpart, but the adjoint equations and the maximum condition depend heavily on the risk-sensitive parameter. As applications, a linear-quadratic risk-sensitive control problem is solved using the derived maximum principle, and an explicit optimal control is obtained.
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
无
2007-01-01
New distributions for the statistics of wave groups based on the maximum entropy principle are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Applications to wave group properties show the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
PV Maximum Power-Point Tracking by Using Artificial Neural Network
Farzad Sedaghati
2012-01-01
Full Text Available In this paper, the use of an artificial neural network (ANN) for tracking the maximum power point is discussed. The error back-propagation method is used to train the neural network. The neural network has the advantages of fast and precise tracking of the maximum power point. In this method the neural network is used to specify the reference voltage of the maximum power point under different atmospheric conditions. By properly controlling a dc-dc boost converter, tracking of the maximum power point is feasible. To verify the theoretical analysis, simulation results are obtained using MATLAB/SIMULINK.
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Zongze Wu
2015-10-01
Full Text Available The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under the MCC show strong robustness against large outliers. In this work, we apply the MCC to develop a robust Hammerstein adaptive filter. Compared with traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance, especially in the presence of impulsive non-Gaussian (e.g., α-stable) noise. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
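The outlier robustness claimed above comes from scaling the usual stochastic-gradient update by a Gaussian kernel of the error. The sketch below applies that idea to a plain linear FIR filter (a simplified stand-in for the paper's Hammerstein structure, with illustrative parameters):

```python
import math, random

def mcc_lms(x, d, n_taps=2, mu=0.05, sigma=1.0):
    """LMS-style adaptive filter under the maximum correntropy
    criterion: the update is weighted by exp(-e^2 / (2 sigma^2)),
    so huge impulsive errors contribute almost nothing."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        u = [x[n - k] for k in range(n_taps)]            # regressor
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))  # prediction error
        g = math.exp(-e * e / (2.0 * sigma * sigma))     # correntropy gain
        for k in range(n_taps):
            w[k] += mu * g * e * u[k]
    return w

random.seed(3)
true_w = [0.8, -0.4]
x = [random.gauss(0.0, 1.0) for _ in range(4000)]
d = []
for n in range(len(x)):
    yn = true_w[0] * x[n] + (true_w[1] * x[n - 1] if n > 0 else 0.0)
    if n % 100 == 0:
        yn += 50.0  # impulsive outlier; the kernel gain suppresses its update
    d.append(yn)
w = mcc_lms(x, d)
```

Under an MSE criterion the same outliers would produce huge updates and drag the weights away from `true_w`; here the exponential weighting effectively ignores them.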
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
Full Text Available In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with a general link function. Our main results are related to guarantees on existence, strong consistency, and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
Three axis vector magnet set-up for cryogenic scanning probe microscopy
Galvis, J. A. [Laboratorio de Bajas Temperaturas, Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera, Condensed Matter Physics Center (IFIMAC), Facultad de Ciencias Universidad Autónoma de Madrid, 28049 Madrid (Spain); Departamento de Ciencias Naturales Facultad de Ingeniería Universidad Central, Bogotá (Colombia); Herrera, E.; Buendía, A. [Laboratorio de Bajas Temperaturas, Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera, Condensed Matter Physics Center (IFIMAC), Facultad de Ciencias Universidad Autónoma de Madrid, 28049 Madrid (Spain); Guillamón, I.; Vieira, S.; Suderow, H. [Laboratorio de Bajas Temperaturas, Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera, Condensed Matter Physics Center (IFIMAC), Facultad de Ciencias Universidad Autónoma de Madrid, 28049 Madrid (Spain); Unidad Asociada de Bajas Temperaturas y Altos Campos Magnéticos, UAM, CSIC, Cantoblanco, E-28049 Madrid (Spain); Azpeitia, J.; Luccas, R. F.; Munuera, C.; García-Hernandez, M. [Unidad Asociada de Bajas Temperaturas y Altos Campos Magnéticos, UAM, CSIC, Cantoblanco, E-28049 Madrid (Spain); Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid (Spain); and others
2015-01-15
We describe a three axis vector magnet system for cryogenic scanning probe microscopy measurements. We discuss the magnet support system and the power supply, consisting of a compact three-way 100 A current source. We obtain tilted magnetic fields in all directions, with a maximum value of 5 T along the z-axis and of 1.2 T for in-plane (XY) magnetic fields. We describe a scanning tunneling microscopy-spectroscopy (STM-STS) set-up, operating in a dilution refrigerator, which includes new high-voltage, ultralow-noise piezodrive electronics, and discuss the noise level due to vibrations. STM images and STS maps show atomic resolution and the tilted vortex lattice at 150 mK in the superconductor β-Bi₂Pd. We observe a strongly elongated hexagonal lattice, which corresponds to the projection of the tilted hexagonal vortex lattice onto the surface. We also discuss Magnetic Force Microscopy images obtained in a variable temperature insert.
In-flight calibration of mesospheric rocket plasma probes.
Havnes, Ove; Hartquist, Thomas W; Kassa, Meseret; Morfill, Gregor E
2011-07-01
Many effects and factors can influence the efficiency of a rocket plasma probe, including payload charging, solar illumination, rocket payload orientation and rotation, and dust-impact-induced secondary charge production. As a consequence, considerable uncertainties can arise in the determination of the effective cross sections of plasma probes and of the measured electron and ion densities. We present a new method for calibrating mesospheric rocket plasma probes and obtaining reliable measurements of plasma densities. The method can be used if a payload also carries a probe for measuring the dust charge density. It exploits the fact that a dust probe's effective cross section for measuring the charged dust component is normally nearly equal to its geometric cross section, and it involves comparing variations in the dust charge density measured with the dust detector to the corresponding current variations measured with the electron and/or ion probes. In cases in which the dust charge density is significantly smaller than the electron density, the relation between plasma and dust charge density variations can be simplified and used to infer the effective cross sections of the plasma probes. We illustrate the utility of the method by analysing data from a specific rocket flight of a payload containing both dust and electron probes.
A dual-cable noise reduction method for Langmuir probes
Yang, T. F.; Zu, Q. X.; Liu, Ping
1995-07-01
To obtain fast-time-response plasma properties, namely the electron density and electron temperature, with a Langmuir probe, the applied probe voltage has to be swept at high frequency. Due to the RC characteristics of coaxial cables, an induced noise of square-wave form appears when a sawtooth voltage is applied to the probe. Such noise is difficult to remove, particularly when the probe signal is weak. This paper discusses a noise reduction method using a dual-cable circuit. One of the cables is active and the other is a dummy; both are of equal length and are laid parallel to each other. The active cable carries the applied probe voltage and the probe current signal, while the dummy is not connected to the probe. After careful tuning, the induced noises from the two cables are nearly identical and can therefore be effectively cancelled with a differential amplifier. A clean I-V characteristic curve can thus be obtained, which greatly improves the accuracy and the time resolution of the measured values of n_e and T_e.
Maximum likelihood Jukes-Cantor triplets: analytic solutions.
Chor, Benny; Hendy, Michael D; Snir, Sagi
2006-03-01
Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationships of a set of taxa from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which, in a few cases, are known to converge to a local maximum on a tree that is suboptimal. The extent of this problem is unknown; one approach is to derive algebraic equations for the likelihood and find the maximum points analytically. This approach has so far been successful only in the very simplest cases of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters: the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in the computer algebra package Maple, we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.
An improved maximum power point tracking method for photovoltaic systems
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods currently described in the literature, the optimal operating point of photovoltaic (PV) systems is estimated by linear approximations. However, these approximations can lead to less than optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determine the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules; a nonlinear expression for the optimal operating voltage is developed from this open-circuit voltage. The approach is thus a combination of the nonlinear and perturbation and observation (P and O) methods. The experimental results show that the approach clearly improves the tracking of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average efficiency of the MPPT, and will therefore deliver more power to any generic load or energy storage medium. (author)
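For orientation, the baseline perturbation-and-observation (P and O) idea mentioned in the abstract can be sketched as follows. The PV power curve below is a made-up stand-in (not the paper's module model), and the step size and iteration count are illustrative assumptions:

```python
# Minimal perturb-and-observe (P&O) MPPT sketch on a toy PV power curve.
# The curve, step size, and iteration budget are illustrative assumptions,
# not the paper's module model or tuning.

def pv_power(v):
    """Toy PV power curve with a single maximum below v_oc (illustrative only)."""
    i_sc, v_oc = 8.0, 21.0
    if v <= 0 or v >= v_oc:
        return 0.0
    # Simple concave current model; real modules follow a diode equation.
    i = i_sc * (1.0 - (v / v_oc) ** 8)
    return v * i

def perturb_and_observe(v0=12.0, step=0.1, iters=500):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(round(v_mpp, 1), round(p_mpp, 1))
```

The fixed step is exactly what causes the steady-state oscillation around the MPP that the paper's nonlinear open-circuit-voltage estimate is designed to reduce.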
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12] which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
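The FIFO policy and the maximum-response-time metric from the abstract can be made concrete with a small discrete-time simulation. The request trace and unit page size are made-up assumptions for illustration:

```python
# Toy discrete-time simulation of FIFO pull-based broadcast scheduling with
# unit-sized pages: the server always broadcasts the page whose oldest
# outstanding request arrived earliest. The request trace is made up.

def fifo_broadcast(requests):
    """requests: list of (arrival_time, page). Returns the max response time."""
    pending = {}                 # page -> earliest outstanding arrival time
    max_resp, i, t = 0, 0, 0
    requests = sorted(requests)
    while i < len(requests) or pending:
        # Admit all requests that have arrived by time t.
        while i < len(requests) and requests[i][0] <= t:
            arr, page = requests[i]
            pending.setdefault(page, arr)
            i += 1
        if pending:
            # FIFO: serve the page with the earliest waiting request.
            page = min(pending, key=pending.get)
            t += 1               # one time unit per unit-sized broadcast
            # Broadcasting satisfies *all* outstanding requests for the page;
            # the earliest arrival gives the largest response time.
            max_resp = max(max_resp, t - pending.pop(page))
        else:
            t = requests[i][0]   # idle: jump to the next arrival
    return max_resp

trace = [(0, "a"), (0, "b"), (1, "a"), (2, "c")]
print(fifo_broadcast(trace))
```

Note how one broadcast of page "a" at time 3 clears both outstanding "a" requests at once, which is the feature that makes broadcast scheduling different from ordinary job scheduling.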
The Antarctic Ice Borehole Probe
Behar, A.; Carsey, F.; Lane, A.; Engelhardt, H.
2000-01-01
The Antarctic Ice Borehole Probe mission is a glaciological investigation, scheduled for November 2000-2001, that will place a probe in a hot-water-drilled hole in the West Antarctic ice sheet. The objectives of the probe are to observe ice-bed interactions with a downward-looking camera, and ice inclusions and structure, including hypothesized ice accretion, with a side-looking camera.
Effect of consolidation ratios on maximum dynamic shear modulus of sands
Yuan Xiaoming; Sun Jing; Sun Rui
2005-01-01
The dynamic shear modulus (DSM) is the most basic soil parameter under earthquake or other dynamic loading conditions and can be obtained through testing in the field or in the laboratory. The effect of consolidation ratios on the maximum DSM for two types of sand is investigated using resonant column tests, and an increment formula to obtain the maximum DSM for consolidation ratios k_c > 1 is presented. The results indicate that the maximum DSM rises rapidly when k_c is near 1 and then slows down, which means that a power function of the consolidation ratio increment k_c - 1 can be used to describe the variation of the maximum DSM for k_c > 1. The results also indicate that the increase in the maximum DSM due to k_c > 1 is significantly larger than that predicted by Hardin and Black's formula.
The accuracy of determining ion parameters by means of a cylindrical Langmuir probe
Georgieva, K. Ia.; Kirov, B. B.; Kraleva, L. Kh.
A method is presented whereby a cylindrical Langmuir probe can be used to obtain an estimate of the concentration distribution of two prevalent kinds of ions in space plasmas when their masses are known. In many cases, the use of a Langmuir probe can thus compensate for the absence of a mass spectrometer. The probe can also be used as a backup if the data obtained by other instruments are not dependable.
Niblock, T
2001-01-01
This thesis covers the design methodology, theory, modelling, fabrication and evaluation of a Micro-Scanning-Probe. The device is a thermally actuated bimorph quadrapod fabricated using Micro Electro Mechanical Systems technology. A quadrapod is a structure with four arms, in this case a planar structure with the four arms forming a cross which is dry etched out of a silicon diaphragm. Each arm has a layer of aluminium deposited on it, forming a bimorph. Through heating of each arm, actuation is achieved both in the plane of the quadrapod and in the direction normal to it. Fabrication of the device has required the development of bulk micromachining techniques to handle post-CMOS-fabricated wafers and the patterning of thickly sputtered aluminium in bulk-micromachined cavities. CMOS fabrication techniques were used to incorporate diodes onto the quadrapod arms for temperature measurement of the arms. Fine tungsten and silicon tips have also been fabricated to allow tunnelling between the tip and the platform at the centr...
Cosmological Probes for Supersymmetry
Maxim Khlopov
2015-05-01
The multi-parameter character of supersymmetric dark-matter models implies the combination of their experimental studies with astrophysical and cosmological probes. The physics of the early Universe provides nontrivial effects of non-equilibrium particles and primordial cosmological structures. Primordial black holes (PBHs are a profound signature of such structures that may arise as a cosmological consequence of supersymmetric (SUSY models. SUSY-based mechanisms of baryosynthesis can lead to the possibility of antimatter domains in a baryon asymmetric Universe. In the context of cosmoparticle physics, which studies the fundamental relationship of the micro- and macro-worlds, the development of SUSY illustrates the main principles of this approach, as the physical basis of the modern cosmology provides cross-disciplinary tests in physical and astronomical studies.
Nicolis, Alberto
2011-01-01
For relativistic quantum field theories, we consider Lorentz breaking, spatially homogeneous field configurations or states that evolve in time along a symmetry direction. We dub this situation "spontaneous symmetry probing" (SSP). We mainly focus on internal symmetries, i.e. on symmetries that commute with the Poincare group. We prove that the fluctuations around SSP states have a Lagrangian that is explicitly time independent, and we provide the field space parameterization that makes this manifest. We show that there is always a gapless Goldstone excitation that perturbs the system in the direction of motion in field space. Perhaps more interestingly, we show that if such a direction is part of a non-Abelian group of symmetries, the Goldstone bosons associated with spontaneously broken generators that do not commute with the SSP one acquire a gap, proportional to the SSP state's "speed". We outline possible applications of this formalism to inflationary cosmology.
Craig, Nathaniel; Englert, Christoph; McCullough, Matthew
2013-09-20
Any new scalar fields that perturbatively solve the hierarchy problem by stabilizing the Higgs boson mass also generate new contributions to the Higgs boson field-strength renormalization, irrespective of their gauge representation. These new contributions are physical, and in explicit models their magnitude can be inferred from the requirement of quadratic divergence cancellation; hence, they are directly related to the resolution of the hierarchy problem. Upon canonically normalizing the Higgs field, these new contributions lead to modifications of Higgs couplings that are typically great enough that the hierarchy problem and the concept of electroweak naturalness can be probed thoroughly within a precision Higgs boson program. Specifically, at a lepton collider this can be achieved through precision measurements of the Higgs boson associated production cross section. This would lead to indirect constraints on perturbative solutions to the hierarchy problem in the broadest sense, even if the relevant new fields are gauge singlets.
Voronka, N. R.; Block, B. P.; Carignan, G. R.
1991-01-01
The dynamic response of the MK-2 version of the Langmuir probe amplifier was studied. The settling time of the step response is increased by: (1) stray node-to-ground capacitance at series connections between high value feedback resistors; and (2) input capacitance due to the input cable, FET switches, and input source follower. The stray node-to-ground capacitances can be reduced to tolerable levels by elevating the string of feedback resistors above the printing board. A new feedback network was considered, with promising results. The design uses resistances having much lower nominal values, thereby minimizing the effect of stray capacitances. Faster settling times can be achieved by using an operational amplifier having a higher gain-bandwidth product.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
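The underlying maximum entropy clustering (MEC) update can be sketched in the input space; the kernel variant described above would replace the squared Euclidean distance below with the kernel-induced distance in feature space. The temperature parameter beta, the deterministic two-cluster initialization, and the 1-D data are made-up choices for illustration:

```python
import math

# Sketch of the maximum entropy clustering (MEC) update in the input space.
# KMEC, as described in the abstract, would swap the squared Euclidean
# distance below for the kernel-induced feature-space distance. The value
# of beta and the initialization are illustrative assumptions.

def mec(points, beta=0.5, iters=50):
    centers = [min(points), max(points)]       # deterministic 2-cluster init
    for _ in range(iters):
        # E-step: maximum-entropy (Gibbs) memberships.
        U = []
        for x in points:
            w = [math.exp(-((x - c) ** 2) / beta) for c in centers]
            s = sum(w)
            U.append([wi / s for wi in w])
        # M-step: centers as membership-weighted means.
        centers = [
            sum(U[i][j] * points[i] for i in range(len(points)))
            / sum(U[i][j] for i in range(len(points)))
            for j in range(len(centers))
        ]
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]          # two well-separated 1-D groups
print([round(c, 1) for c in mec(data)])
```

Large beta yields soft, entropy-dominated memberships; as beta shrinks the update approaches hard k-means assignments.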
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
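The core trick named above, approximating the concave entropy term with piecewise linear segments so the problem becomes a linear program, can be shown in miniature. With only a normalization constraint the resulting LP is solvable greedily (a fractional-knapsack fill in order of decreasing segment slope); the bin count and breakpoints are illustrative assumptions, not the paper's 128/512-point restoration setup:

```python
import math

# Sketch of the piecewise-linear idea behind LP-based maximum entropy
# restoration: the concave entropy -x ln x is replaced by linear segments
# with decreasing slopes. With a single normalization constraint the LP
# reduces to greedily filling segments by slope. Bin count, breakpoints,
# and the unit budget are illustrative assumptions.

n, m = 4, 40                                # signal bins, segments per bin
bp = [s / m for s in range(m + 1)]          # breakpoints on [0, 1]
h = lambda x: 0.0 if x == 0 else -x * math.log(x)

segs = []                                   # (slope, bin index, width)
for i in range(n):
    for s in range(m):
        slope = (h(bp[s + 1]) - h(bp[s])) / (bp[s + 1] - bp[s])
        segs.append((slope, i, bp[s + 1] - bp[s]))

budget = 1.0                                # constraint: sum_i x_i = 1
x = [0.0] * n
for slope, i, w in sorted(segs, key=lambda t: -t[0]):
    take = min(w, budget)                   # fill steepest segments first
    x[i] += take
    budget -= take
    if budget < 1e-12:
        break
print([round(v, 3) for v in x])
```

With only the normalization constraint, maximum entropy gives the uniform signal, which the greedy fill recovers; the paper's full formulation adds the data-fidelity constraints and solves the LP with a revised simplex algorithm instead.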
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is the combined increase of a ship's draft and trim due to its motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to show the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
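Two of the estimates named above can be compared numerically. The exact coefficients below are commonly quoted simplified forms and should be treated as assumptions to check against Barrass' and ICORELS' original publications; the ship particulars are made up:

```python
import math

# Illustrative comparison of two simplified maximum-squat estimates.
# Coefficients are commonly quoted simplified forms (assumptions to verify
# against the original sources); ship particulars are made up.

def barrass_rule_of_thumb(cb, v_knots, k=2.0):
    """Barrass' rule of thumb S ~ k * Cb * V^2 / 100 (metres, knots);
    k ~ 2 for confined channels, k ~ 1 in open water (assumed form)."""
    return k * cb * v_knots ** 2 / 100.0

def icorels(volume_m3, lpp, v_ms, depth):
    """ICORELS bow squat: 2.4 * (Vol / Lpp^2) * Fnh^2 / sqrt(1 - Fnh^2),
    with Fnh the depth Froude number (assumed form)."""
    fnh = v_ms / math.sqrt(9.81 * depth)
    return 2.4 * (volume_m3 / lpp ** 2) * fnh ** 2 / math.sqrt(1 - fnh ** 2)

# Made-up cargo ship: Cb = 0.7, Lpp = 120 m, B = 20 m, T = 7 m,
# 10 knots (5.144 m/s) in a 10 m deep canal.
vol = 0.7 * 120 * 20 * 7
print(round(barrass_rule_of_thumb(0.7, 10.0), 2),
      round(icorels(vol, 120.0, 5.144, 10.0), 2))
```

The two estimates differ by more than a factor of two for this case, which is exactly the kind of spread between published formulas the paper sets out to examine.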
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
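The opening statement of the abstract, that the number of firms with size greater than S is inversely proportional to S, is equivalent to a Pareto law with unit exponent and is easy to check numerically. This is a toy illustration of the statistical statement only, not of the paper's growth model:

```python
import random

# Numerical illustration of the Zipf statement: for sizes with
# P(size > s) = 1/s (Pareto, unit exponent, s >= 1), the surviving
# fraction times s stays near 1. Inverse-CDF sampling: size = 1/U.
# This is a toy check, not the paper's growth model.

random.seed(1)
n = 200_000
sizes = [1.0 / random.random() for _ in range(n)]

for s in (2, 4, 8, 16):
    frac = sum(size > s for size in sizes) / n
    print(s, round(frac * s, 2))       # hovers near 1.0 up to sampling noise
```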
OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM
Dongjie Li
2015-02-01
For the established telenanomanipulation system, methods of extracting location information and strategies for probe operation were studied in this paper. First, the machine learning algorithms of OpenCV were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be automatically tracked and the region of interest (ROI) can be marked quickly; the locations of the nanowire and probe are then extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed, from which the relevant operating parameters can be obtained. With these operating parameters, the operation on the nanowire can be rehearsed in a 3D virtual environment and an optimal path for the probe can be obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieve the expected effect.
Toothbrush probe for instantaneous measurement of radial profile in tokamak boundary plasma
Uehara, Kazuya; Sengoku, Seio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Amemiya, Hiroshi
1997-04-01
A new probe for the instantaneous measurement of radial profiles of the boundary scrape-off layer (SOL) plasma has been developed for a tokamak. Five asymmetric double-probe chips are aligned parallel to the strong magnetic field in the tokamak boundary plasma. This probe is named the "toothbrush probe" and can measure the ion temperature as well as the electron temperature and the plasma density in the SOL plasma within a single tokamak plasma shot. First, a single asymmetric probe was mounted on the divertor plate and used to determine the ion temperature. Then, a manufactured toothbrush probe was mounted in the SOL plasma and the radial plasma profiles were obtained simultaneously. Data on the e-folding length of the plasma profile obtained by the toothbrush probe provide information on transport properties such as the diffusion coefficient and the thermal conductivity of electrons and ions. (author)
Maximum Likelihood Position Location with a Limited Number of References
D. Munoz-Rodriguez
2011-04-01
A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided, showing the accuracy and the robustness of the proposed method. We study accuracy limits of the proposed methodology for different propagation environments and show that even in the case of mismatch in the error variances, good PL estimation is feasible.
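The two-reference ML idea can be sketched with ranges to two fixed anchors. Under Gaussian range errors, ML reduces to nonlinear least squares, solved here by a coarse grid search; the geometry is made up, and the mirror solution below the baseline (inherent with only two references) is excluded by restricting the search to y >= 0, a side-information assumption the paper's formulation treats more generally:

```python
import math

# Hedged sketch of ML position location from ranges to two land-fixed
# references: with Gaussian range errors, ML = nonlinear least squares,
# solved here by a coarse grid search. Geometry is made up; the mirror
# ambiguity is removed by searching only y >= 0.

refs = [(0.0, 0.0), (10.0, 0.0)]               # two land-fixed references
true_pos = (6.0, 4.0)
meas = [math.dist(true_pos, r) for r in refs]  # noise-free ranges for the demo

def ml_grid(meas, refs, step=0.1):
    best, best_cost = None, float("inf")
    for yi in range(0, 201):                   # y in [0, 20]
        for xi in range(-200, 201):            # x in [-20, 20]
            p = (xi * step, yi * step)
            cost = sum((math.dist(p, r) - d) ** 2
                       for r, d in zip(refs, meas))
            if cost < best_cost:
                best, best_cost = p, cost
    return best

est = ml_grid(meas, refs)
print(est)
```

A practical implementation would refine the grid or switch to Newton/Gauss-Newton steps near the optimum, and weight the residuals by the (possibly mismatched) error variances discussed in the abstract.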
Network Decomposition and Maximum Independent Set Part Ⅰ: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network, which can optimize the iteration processes, is discovered. Then, the sufficient and necessary conditions for obtaining the maximum independent set are deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be incorporated together. In particular, it is shown that the network can be divided into two parts in a certain style, and that both can then be transformed into a pair-sets network, where the special sub-networks and their neighborhoods appear alternately distributed throughout the entire pair-sets network. By use of this characteristic, the network is decomposed sufficiently without losing any solutions. All of this prepares the ground for developing a much better algorithm, with a polynomial time bound for an odd network, in the application research part of this subject.
Maximum likelihood identification of aircraft stability and control derivatives
Mehra, R. K.; Stepner, D. E.; Tyler, J. S.
1974-01-01
Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.
Langmuir probe analysis in electronegative plasmas
Bredin, Jerome, E-mail: jerome.bredin@lpp.polytechnique.fr; Chabert, Pascal; Aanesland, Ane [Laboratoire de Physique des Plasmas, CNRS, Sorbonne Universités, UPMC Univ Paris 06, Univ Paris-Sud, Ecole Polytechnique, 91128 Palaiseau (France)
2014-12-15
This paper compares two methods to analyze Langmuir probe data obtained in electronegative plasmas. The techniques are developed to allow investigations in plasmas where the electronegativity α_0 = n_-/n_e (the ratio between the negative ion and electron densities) varies strongly. The first technique uses an analytical model to express the Langmuir probe current-voltage (I-V) characteristic and its second derivative as a function of the electron and ion densities (n_e, n_+, n_-), temperatures (T_e, T_+, T_-), and masses (m_e, m_+, m_-). The analytical curves are fitted to the experimental data by adjusting these variables and parameters. To reduce the number of fitted parameters, the ion masses are assumed constant within the source volume, and quasi-neutrality is assumed everywhere. In this theory, Maxwellian distributions are assumed for all charged species. We show that this data analysis can predict the various plasma parameters within 5-10%, including the ion temperatures when α_0 > 100. However, the method is tedious, time consuming, and requires a precise measurement of the energy distribution function. A second technique is therefore developed for easier access to the electron and ion densities, but it does not give access to the ion temperatures. Here, only the measured I-V characteristic is needed. The electron density, temperature, and ion saturation current for positive ions are determined by classical probe techniques. The electronegativity α_0 and the ion densities are deduced via an iterative method, since these variables are coupled via the modified Bohm velocity. For both techniques, a Child-law sheath model for cylindrical probes has been developed and is presented to emphasize the importance of this model for small cylindrical Langmuir probes.
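The iterative coupling mentioned for the second technique can be sketched as a fixed-point loop. The sheath factor 0.61, probe area, and plasma values below are illustrative assumptions, and the alpha-dependent Bohm velocity uses the standard electronegative-sheath form rather than the paper's exact model:

```python
import math

# Hedged sketch of the iterative step of the second technique: the positive
# ion density, the electronegativity alpha_0 = n-/ne, and the alpha-dependent
# (modified) Bohm velocity are coupled, so alpha_0 is found by fixed-point
# iteration. The 0.61 sheath factor, probe area, and plasma values are
# illustrative assumptions; u_B = sqrt(e*Te/M * (1+a)/(1+a*g)) with
# g = Te/T- is the standard electronegative form, not the paper's model.

E = 1.602e-19                  # elementary charge [C]
Te, Tm = 3.0, 0.1              # electron / negative-ion temperatures [eV]
M = 40 * 1.66e-27              # positive-ion mass (argon-like) [kg]
A = 1e-6                       # effective probe collection area [m^2]
ne = 1.0e16                    # electron density from the I-V analysis [m^-3]

def bohm(alpha, gamma=Te / Tm):
    return math.sqrt(E * Te / M * (1 + alpha) / (1 + alpha * gamma))

def solve_alpha(i_sat, iters=200):
    alpha = 0.0                # start from an electropositive guess
    for _ in range(iters):
        n_plus = i_sat / (0.61 * E * A * bohm(alpha))
        alpha = max(n_plus / ne - 1.0, 0.0)   # quasi-neutrality: n+ = ne(1+a)
    return alpha

# Build a synthetic saturation current for alpha_0 = 5, then recover it.
i_syn = 0.61 * E * A * ne * (1 + 5.0) * bohm(5.0)
print(round(solve_alpha(i_syn), 2))
```

Because the Bohm velocity falls as alpha grows (for Te >> T-), the inferred n_+ and alpha rise monotonically toward the fixed point, which is why a plain iteration converges here.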
Uehara, Kazuya; Kawakami, Tomohide [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Amemiya, Hiroshi; Hoethker, K.; Cosler, A.; Bieger, W.
1995-06-01
An ion diagnostic system using electrostatic probes for measurements in the JFT-2M tokamak boundary plasma has been developed under the collaboration program between KFA and JAERI. The rotating double probe system, on which the Hoethker double probe and the Amemiya asymmetric probe can be mounted, was manufactured at the KFA workshop, while the linear driver that supports the rotating double probe, the ion toothbrush probe, the Katsumata probe, and the cubic Mach probe were developed at JAERI. This report describes the hardware of this probe system for ion diagnostics in the boundary plasma and preliminary data obtained by means of this system. Furthermore, transport results are estimated on the basis of these probe data. (author).
Guillemot, Sylvain
2008-01-01
Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant, called "Maximum Compatible Tree" (MCT), is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where the trees encountered often have small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to the parameter D. Moreover, relying on recent advances in parameterized complexity we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time, where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable, unless SNP is contained in SE. We also show that MCT is W[1...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
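The core computation, PCA of an estimated correlation matrix, can be sketched in a few lines. This is a toy illustration using a plain sample correlation matrix on synthetic data, standing in for the maximum likelihood estimate the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy ensemble: 50 "models" of a structure described by 12 coordinates
ensemble = rng.normal(size=(50, 12))

corr = np.corrcoef(ensemble, rowvar=False)   # 12 x 12 sample correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)      # symmetric matrix -> eigh
order = np.argsort(eigvals)[::-1]            # sort modes by variance explained
modes = eigvecs[:, order]                    # columns are the principal modes

print(modes.shape)  # (12, 12)
```

The leading columns of `modes` play the role of the dominant correlation modes that the PCA plots would color-code onto the structure.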
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge the thermodynamic hardness is proportional to T^{-1}(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for five processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay in flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
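The event-by-event likelihood construction can be illustrated with a toy two-process version. The Gaussian "energy" PDFs, the event counts, and the grid scan below are invented stand-ins, not the PEN analysis itself:

```python
import math
import random

random.seed(1)

def pdf_a(x):  # hypothetical "signal" energy PDF, N(70, 2^2)
    return math.exp(-(x - 70.0) ** 2 / 8.0) / math.sqrt(8.0 * math.pi)

def pdf_b(x):  # hypothetical "background" energy PDF, N(30, 5^2)
    return math.exp(-(x - 30.0) ** 2 / 50.0) / math.sqrt(50.0 * math.pi)

# simulate events: 20% from process A, 80% from process B
events = [random.gauss(70, 2) if random.random() < 0.2 else random.gauss(30, 5)
          for _ in range(5000)]

def neg_log_like(f):  # f = fraction of process A in the mixture
    return -sum(math.log(f * pdf_a(x) + (1 - f) * pdf_b(x)) for x in events)

# crude grid scan standing in for a proper minimizer
best_f = min((i / 200 for i in range(1, 200)), key=neg_log_like)
print(round(best_f, 2))  # close to the true fraction 0.2
```

The real analysis maximizes the same kind of per-event mixture likelihood, but over five processes and with Monte Carlo derived PDFs in several observables.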
Generalized degeneracy, dynamic monopolies and maximum degenerate subgraphs
Zaker, Manouchehr
2012-01-01
A graph $G$ is said to be a $k$-degenerate graph if any subgraph of $G$ contains a vertex of degree at most $k$. Let $\kappa$ be any non-negative function on the vertex set of $G$. We first define a $\kappa$-degenerate graph. Next we give an efficient algorithm to determine whether a graph is $\kappa$-degenerate. We revisit the concept of dynamic monopolies in graphs. The latter notion is used in the formulation and analysis of the spread of influence, such as disease or opinion, in social networks. We consider dynamic monopolies with (not necessarily positive) integral threshold assignments. We obtain a necessary and sufficient relationship between dynamic monopolies and generalized degeneracy. As applications of the previous results we consider the problem of determining the maximum size of $\kappa$-degenerate (or $k$-degenerate) induced subgraphs in any graph. We obtain some upper and lower bounds for the maximum size of any $\kappa$-degenerate induced subgraph in general and regular graphs. All of our bounds ar...
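For the classical (uniform) case, $k$-degeneracy can be tested by the standard peeling argument: repeatedly delete a vertex of degree at most $k$; the graph is $k$-degenerate exactly when this empties it. A minimal sketch (the $\kappa$-degenerate generalization would simply compare each vertex's degree against its own bound $\kappa(v)$):

```python
def is_k_degenerate(adj, k):
    """adj: dict mapping vertex -> set of neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    while adj:
        # find any vertex of degree <= k; if none exists, the remaining
        # subgraph has minimum degree > k, so G is not k-degenerate
        low = next((v for v, nbrs in adj.items() if len(nbrs) <= k), None)
        if low is None:
            return False
        for u in adj.pop(low):
            adj[u].discard(low)
    return True

# a cycle is 2-degenerate but not 1-degenerate
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(is_k_degenerate(cycle, 2), is_k_degenerate(cycle, 1))  # True False
```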
Robust stochastic maximum principle: Complete proof and discussions
Poznyak Alex S.
2002-01-01
This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. Parametric families of first- and second-order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral, over a parametric set, of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given at finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, given by a vector function, is also considered. The optimal control strategies, adapted to the available information, are constructed for a wide class of uncertain systems given by a stochastic differential equation with unknown parameters from a given compact set. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [14,43-45] as well as on the variational results of [53] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. A discussion of the obtained results concludes this study.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Xi Liu
2016-09-01
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving non-linear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
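The robustness mechanism of the MCC can be seen in isolation: residuals enter the cost through a Gaussian kernel, so impulsive outliers receive exponentially small weight instead of the quadratic penalty of a least-squares (Kalman) cost. A minimal sketch, with the kernel bandwidth sigma as a free tuning parameter:

```python
import math

def correntropy_weight(residual, sigma=2.0):
    # Gaussian kernel of the maximum correntropy criterion: large residuals
    # are effectively ignored rather than dominating the cost
    return math.exp(-residual ** 2 / (2.0 * sigma ** 2))

# a small residual keeps near-full weight; an impulsive outlier is suppressed
print(round(correntropy_weight(0.5), 3))   # 0.969
print(correntropy_weight(20.0) < 1e-10)    # True
```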
Maximum power analysis of photovoltaic module in Ramadi city
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The data were measured on the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save an average every two minutes, based on readings taken each second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of the PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
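The logging chain described above (per-second readings reduced to two-minute averages, from which daily maxima and monthly means are taken) can be sketched with synthetic values; nothing below uses the actual Ramadi measurements:

```python
def two_minute_averages(per_second):
    # reduce per-second samples to two-minute (120-sample) averages;
    # assumes the trace length is a multiple of 120
    return [sum(per_second[i:i + 120]) / 120 for i in range(0, len(per_second), 120)]

# toy irradiance trace in W/m^2 (synthetic, 1200 seconds = 10 intervals)
day = [600 + (i % 120) for i in range(1200)]
avgs = two_minute_averages(day)
print(len(avgs), max(avgs))  # 10 659.5
```

The daily maximum is then `max(avgs)`, and monthly averages follow by averaging the daily series.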
Design and development of high frequency matrix phased-array ultrasonic probes
Na, Jeong K.; Spencer, Roger L.
2012-05-01
High frequency matrix phased-array (MPA) probes have been designed and developed for more accurate and repeatable assessment of the weld condition of thin sheet metals commonly used in the auto industry. Unlike the line-focused ultrasonic beam generated by a linear phased-array (LPA) probe, an MPA probe can form a circularly focused beam in addition to the typical beam steering capabilities of phased-array probes. A CIVA-based modeling and simulation method has been used to design the probes in terms of various probe parameters such as number of elements, element size, overall dimensions, and frequency. Challenges associated with the thinness of the sheet metals have been resolved by optimizing these probe design parameters. A further improvement made to the design of the MPA probe proved that a three-dimensionally shaped matrix element can provide a better performing probe at a much lower manufacturing cost, by reducing the total number of elements and lowering the operational frequency. This three-dimensional probe naturally matches the indentation shape of the weld on thin sheet metals and hence covers a wider inspection area with the same level of spatial resolution obtained by a two-dimensional flat MPA probe operating at a higher frequency. These two aspects, a wider inspection area and a lower probe manufacturing cost, make this three-dimensional MPA sensor more attractive to auto manufacturers demanding a quantitative nondestructive inspection method.
Probing Sagittarius A* accretion with ALMA
Murchikova, Elena
2017-01-01
The submm hydrogen recombination line technique can be used as a probe of the Galactic Center. We present the results of our H30α observations of ionized gas from within 0.015 pc around Sgr A*. The observations were obtained on ALMA in Cycle 3. The line was not detected, but we were able to set a limit on the mass of the cool gas (T ~ 10⁴ K) at 2×10⁻³ M⊙. This is a unique probe of gas cooler than the T ~ 10⁶ K gas traced by X-ray emission. The total amount of gas near Sgr A* gives us clues to understanding the accretion rate of Sgr A*.
Results from the Wilkinson Microwave Anisotropy Probe
Komatsu, Eiichiro
2014-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) mapped the distribution of temperature and polarization over the entire sky in five microwave frequency bands. These full-sky maps were used to obtain measurements of the temperature and polarization anisotropy of the cosmic microwave background with unprecedented accuracy and precision. The analysis of two-point correlation functions of the temperature and polarization data gives determinations of the fundamental cosmological parameters such as the age and composition of the universe, as well as the key parameters describing the physics of inflation, which is further constrained by three-point correlation functions. WMAP observations alone reduced the flat $\Lambda$ cold dark matter ($\Lambda$CDM) cosmological model (six-)parameter volume by a factor of >68,000 compared with pre-WMAP measurements. The WMAP observations (sometimes in combination with other astrophysical probes) convincingly show the existence of non-baryonic dark matter, the cosmic neutrino backgrou...
Probing the string winding sector
Aldazabal, Gerardo; Mayo, Martín; Nuñez, Carmen
2017-03-01
We probe a slice of the massive winding sector of bosonic string theory from toroidal compactifications of Double Field Theory (DFT). This string subsector corresponds to states containing one left- and one right-moving oscillator. We perform a generalized Kaluza-Klein compactification of DFT on generic 2n-dimensional toroidal constant backgrounds and show that, up to third order in fluctuations, the theory coincides with the corresponding effective theory of the bosonic string compactified on n-dimensional toroidal constant backgrounds, obtained from three-point amplitudes. The comparison between the two theories is facilitated by noticing that generalized diffeomorphisms in DFT allow one to fix generalized harmonic gauge conditions that help in identifying the physical degrees of freedom. These conditions manifest as conformal anomaly cancellation requirements on the string theory side. The explicit expression is found for the gauge-invariant effective action containing the physical massless sector (gravity + antisymmetric + gauge + scalar fields) coupled to towers of generalized Kaluza-Klein massive states (corresponding to compact momentum and winding modes). The action acquires a very compact form when written in terms of fields carrying O(n, n) indices, and is explicitly T-duality invariant. The global algebra associated to the generalized Kaluza-Klein compactification is discussed.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Synthesis of Photoactivatable Phospholipidic Probes
Qing PENG; Fan Qi QU; Yi XIA; Jie Hua ZHOU; Qiong You WU; Ling PENG
2005-01-01
We synthesized and characterized the photoactivatable phospholipidic probes 1-3. These probes bear the perfluorinated aryl azide function at the polar head of the phospholipid. They are stable in the dark and become highly reactive upon photoirradiation. The preliminary results suggest that they are promising tools for studying the topology of membrane proteins and protein-lipid interactions using a photolabeling approach.
Bak, Christen Kjeldahl
1977-01-01
The current probe described is a low-cost shunt resistor for monitoring current pulses in, e.g., pulsed lasers. Rise time is...
Numerical Investigation of the Influence of Reynolds Number on Probe Measurements
(Author not listed)
2000-01-01
The influence of Reynolds number (Re) on probe measurements was investigated numerically, including the effects of the pressure holes and their geometry on obtaining accurate hole pressures. The results indicate that Re influences the probe measurements and cannot be neglected for Re larger than 10⁵, and that the influence increases with Mach number (Ma). The calculations show that, when the probe is at an angle, the pressures in the downwind holes are influenced more by Re than those in the upwind and central holes. Thus, 7-hole probes may be more suitable than 5-hole probes for measurements at different Re.
Miida, Yusuke; Matsuura, Yuji
2013-09-23
An all-optical 3D photoacoustic imaging probe was developed that consists of an optical fiber probe for ultrasound detection and a bundle of hollow optical fibers for excitation of photoacoustic waves. The fiber probe for ultrasound is based on a single-mode optical fiber with a thin polymer film attached to the output end surface that works as a Fabry-Perot etalon. The input end of the hollow fiber bundle is aligned so that each fiber in the bundle is sequentially excited. A thin and flexible probe can be obtained because the probe system does not have a scanning mechanism at the distal end.
Monitoring membrane hydration with 2-(dimethylamino)-6-acylnaphtalenes fluorescent probes
Bagatolli, Luis
2015-01-01
2-(Dimethylamino)-6-acylnaphthalene fluorescent probes, such as LAURDAN and PRODAN, were used to study membrane lateral structure and associated dynamics. Once incorporated into membranes, the (nanosecond) fluorescence decay of these probes is strongly affected by changes in the local polarity and relaxation dynamics of restricted water molecules existing at the membrane/water interface. For instance, when glycerophospholipid-containing membranes undergo a solid ordered (gel) to liquid disordered phase transition, the fluorescence emission maximum of these probes shifts ~50 nm, with a significant change in their fluorescence lifetime. Furthermore, the fluorescence parameters of LAURDAN and PRODAN are exquisitely sensitive to cholesterol effects, allowing interpretations that correlate changes in membrane packing with membrane hydration. Different membrane model systems as well as innate biological membranes have been studied with this family of probes, allowing interesting...
Cobra Probes Containing Replaceable Thermocouples
Jones, John; Redding, Adam
2007-01-01
A modification of the basic design of cobra probes provides for relatively easy replacement of broken thermocouples. Cobra probes are standard tube-type pressure probes that may also contain thermocouples and that are routinely used in wind tunnels and aeronautical hardware. They are so named because in side views, they resemble a cobra poised to attack. Heretofore, there has been no easy way to replace a broken thermocouple in a cobra probe: instead, it has been necessary to break the probe apart and then rebuild it, typically at a cost between $2,000 and $4,000 (2004 prices). The modified design makes it possible to replace the thermocouple, in minimal time and at relatively low cost, by inserting new thermocouple wire in a tube.
Nanobits: customizable scanning probe tips
Kumar, Rajendra; Shaik, Hassan Uddin; Sardan Sukas, Özlem
2009-01-01
We present here a proof-of-principle study of scanning probe tips defined by planar nanolithography and integrated with AFM probes using nanomanipulation. The so-called 'nanobits' are 2-4 μm long and 120-150 nm thin flakes of Si3N4 or SiO2, fabricated by electron beam lithography and standard silicon processing. Using a microgripper they were detached from an array and fixed to a standard pyramidal AFM probe or alternatively inserted into a tipless cantilever equipped with a narrow slit. The nanobit-enhanced probes were used for imaging of deep trenches, without visible deformation, wear or dislocation of the tips of the nanobits after several scans. This approach allows an unprecedented freedom in adapting the shape and size of scanning probe tips to the surface topology or to the specific application.
Wearable probes for service design
Mullane, Aaron; Laaksolahti, Jarmo Matti; Svanæs, Dag
2014-01-01
Probes are used as a design method in user-centred design to allow end-users to inform design by collecting data from their lives. Probes are potentially useful in service innovation, but current probing methods require users to interrupt their activity and are consequently not ideal for use by service employees in reflecting on the delivery of a service. In this paper, we present the 'wearable probe', a probe concept that captures sensor data without distracting service employees. Data captured by the probe can be used by the service employees to reflect and co-reflect on the service journey, helping to identify opportunities for service evolution and innovation.
Electrophoresis-mass spectrometry probe
Andresen, Brian D.; Fought, Eric R.
1987-01-01
The invention involves a new technique for the separation of complex mixtures of chemicals, which utilizes a unique interface probe for conventional mass spectrometers which allows the electrophoretically separated compounds to be analyzed in real-time by a mass spectrometer. This new chemical analysis interface, which couples electrophoresis with mass spectrometry, allows complex mixtures to be analyzed very rapidly, with much greater specificity, and with greater sensitivity. The interface or probe provides a means whereby large and/or polar molecules in complex mixtures to be completely characterized. The preferred embodiment of the probe utilizes a double capillary tip which allows the probe tip to be continually wetted by the buffer, which provides for increased heat dissipation, and results in a continually operating interface which is more durable and electronically stable than the illustrated single capillary tip probe interface.
Mobile Probes in Mobile Learning
Ørngreen, Rikke; Blomhøj, Ulla; Duvaa, Uffe
In this paper, experiences from using mobile probes in the educational design of a mobile learning application are presented. The probing process stems from the cultural probe method, and was influenced by qualitative interview and inquiry approaches. In the project, the mobile phone was not only acting as an agent for acquiring empirical data (as in hitherto mobile probe settings) but was also the technological medium that the data should say something about (mobile learning). Consequently, not only the content of the data but also the ways in which data was delivered and handled provided a valuable dimension for investigating mobile use. The data was collected at the same time as design activities took place, and the collective data was analysed based on user experience goals and cognitive processes from interaction design and mobile learning. The mobile probe increased the knowledge base...
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
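The elegant linear-time solution alluded to above is usually attributed to Kadane: a single left-to-right scan maintaining the best sum of a segment ending at the current position. A sketch in Python (the paper itself develops the problem in monadic functional programming):

```python
def max_segment_sum(xs):
    # empty segments are allowed, as in the usual specification, so the
    # running quantities start at 0 and never go negative
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # best segment ending at x
        best = max(best, ending_here)          # best segment seen so far
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```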
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + √2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c>0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t>0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control is of bang-bang type, switching between μ_0 and μ_1 across g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
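The spectrally imposed limit comes from weighting the source spectrum by the photopic response: K = 683 · ∫V(λ)S(λ)dλ / ∫S(λ)dλ lm/W. A back-of-envelope sketch, with a crude Gaussian stand-in for the CIE V(λ) table (so the numbers are illustrative only):

```python
import math

def photopic_v(lam_nm):
    # crude Gaussian stand-in for the CIE photopic curve, peaked at 555 nm
    return math.exp(-(((lam_nm - 555.0) / 50.0) ** 2))

def efficacy_lm_per_w(lams, spectrum):
    # K = 683 * integral(V * S) / integral(S), on a uniform wavelength grid
    num = sum(photopic_v(l) * s for l, s in zip(lams, spectrum))
    den = sum(spectrum)
    return 683.0 * num / den

lams = list(range(400, 701, 5))                   # wavelengths in nm
flat = [1.0] * len(lams)                          # flat spectrum, 400-700 nm
mono = [1.0 if l == 555 else 0.0 for l in lams]   # monochromatic 555 nm

print(round(efficacy_lm_per_w(lams, mono)))  # 683, the photopic maximum
print(round(efficacy_lm_per_w(lams, flat)))  # well below 683
```

With the real tabulated V(λ) and the paper's bandpass and color-rendering constraints, the same weighting yields the 250-370 lm/W range quoted above.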
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features that are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
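The per-pixel codebook idea can be sketched as follows. Codewords here are simple running means of intensity, a simplified stand-in for the richer feature set and the ME layer model described in the paper.

```python
import numpy as np

def train_codebooks(frames, tol=10.0):
    """Build a per-pixel codebook from training frames: each codeword is a
    running mean of similar intensities (a simplified stand-in for the
    paper's codebook features)."""
    h, w = frames[0].shape
    books = [[[] for _ in range(w)] for _ in range(h)]
    for f in frames:
        for i in range(h):
            for j in range(w):
                for word in books[i][j]:
                    if abs(f[i, j] - word[0]) < tol:   # matches a codeword
                        word[1] += 1
                        word[0] += (f[i, j] - word[0]) / word[1]
                        break
                else:                                   # no match: new codeword
                    books[i][j].append([float(f[i, j]), 1])
    return books

def segment(frame, books, tol=10.0):
    """Mark a pixel as target (True) if no background codeword matches it."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            mask[i, j] = all(abs(frame[i, j] - word[0]) >= tol
                             for word in books[i][j])
    return mask

# toy example: uniform background, then a bright object appears at (1, 1)
background = [np.full((4, 4), 50.0) for _ in range(5)]
books = train_codebooks(background)
new_frame = np.full((4, 4), 50.0)
new_frame[1, 1] = 200.0
mask = segment(new_frame, books)
print(mask[1, 1], int(mask.sum()))   # True 1
```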
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
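The flavor of noise bias is easy to demonstrate with a toy analogue (not the paper's galaxy-model likelihood): an ellipticity-like quantity e = (a − b)/(a + b) built from unbiased but noisy measurements acquires a bias that grows with the noise level, because e is a nonlinear function of the data.

```python
import numpy as np

def ellipticity_bias(a=2.0, b=1.0, sigma=0.3, n=200000, seed=1):
    """Monte Carlo illustration of noise bias: plugging noisy but unbiased
    axis estimates into the nonlinear combination e = (a - b)/(a + b)
    yields a biased point estimate. A toy analogue of the effect, not the
    paper's likelihood analysis."""
    rng = np.random.default_rng(seed)
    a_hat = a + sigma * rng.standard_normal(n)
    b_hat = b + sigma * rng.standard_normal(n)
    e_hat = (a_hat - b_hat) / (a_hat + b_hat)
    return e_hat.mean() - (a - b) / (a + b)   # mean estimate minus truth

low_noise = ellipticity_bias(sigma=0.05)
high_noise = ellipticity_bias(sigma=0.3)
print(round(low_noise, 5), round(high_noise, 5))
```

A second-order Taylor expansion of e in the noise predicts a bias proportional to σ², which is the kind of leading-order term the paper computes and corrects.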
Fiol, Bartomeu; Torrents, Genis
2014-01-01
We compute the exact vacuum expectation value of circular Wilson loops for Euclidean ${\\cal N}=4$ super Yang-Mills with $G=SO(N),Sp(N)$, in the fundamental and spinor representations. These field theories are dual to type IIB string theory compactified on $AdS_5\\times {\\mathbb R} {\\mathbb P}^5$ plus certain choices of discrete torsion, and we use our results to probe this holographic duality. We first revisit the LLM-type geometries having $AdS_5\\times {\\mathbb R} {\\mathbb P}^5$ as ground state. Our results clarify and refine the identification of these LLM-type geometries as bubbling geometries arising from fermions on a half harmonic oscillator. We furthermore identify the presence of discrete torsion with the one-fermion Wigner distribution becoming negative at the origin of phase space. We then turn to the string world-sheet interpretation of our results and argue that for the quantities considered they imply two features: first, the contribution coming from world-sheets with a single crosscap is closely ...
Steerable Doppler transducer probes
Fidel, H.F.; Greenwood, D.L.
1986-07-22
An ultrasonic diagnostic probe is described which is capable of performing ultrasonic imaging and Doppler measurement consisting of: a hollow case having an acoustic window which passes ultrasonic energy and including chamber means for containing fluid located within the hollow case and adjacent to a portion of the acoustic window; imaging transducer means, located in the hollow case and outside the fluid chamber means, and oriented to direct ultrasonic energy through the acoustic window toward an area which is to be imaged; Doppler transducer means, located in the hollow case within the fluid chamber means, and movably oriented to direct Doppler signals through the acoustic window toward the imaged area; means located within the fluid chamber means and externally controlled for controllably moving the Doppler transducer means to select one of a plurality of axes in the imaged area along which the Doppler signals are to be directed; and means, located external to the fluid chamber means and responsive to the means for moving, for providing an indication signal for identifying the selected axis.
Camp, Jordan
2017-08-01
Transient Astrophysics Probe (TAP), selected by NASA for a funded Concept Study, is a wide-field high-energy transient mission proposed for flight starting in the late 2020s. TAP’s main science goals, called out as Frontier Discovery areas in the 2010 Decadal Survey, are time-domain astrophysics and counterparts of gravitational wave (GW) detections. The mission instruments include unique imaging soft X-ray optics that allow ~500 deg² FoV in each of four separate modules; a high sensitivity, 1 deg² FoV soft X-ray telescope based on single crystal silicon optics; a passively cooled, 1 deg² FoV Infrared telescope with bandpass 0.6-3 micron; and a set of ~8 small NaI gamma-ray detectors. TAP will observe many events per year of X-ray transients related to compact objects, including tidal disruptions of stars, supernova shock breakouts, neutron star bursts and superbursts, and high redshift Gamma-Ray Bursts. Perhaps most exciting is TAP’s capability to observe X-ray and IR counterparts of GWs involving stellar mass black holes detected by LIGO/Virgo, and possibly X-ray counterparts of GWs from supermassive black holes, detected by LISA and Pulsar Timing Arrays.
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP, but hitherto no exact closed-form solution for the MPP has been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations that is difficult to solve directly; however, a recursive algorithm yields a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-after MPP. The MPP changes with incident irradiation and temperature, and hence an algorithm that attempts to maintain it should be adaptive, converge quickly, and exhibit minimal misadjustment. Implementation has two parts: first, the MPP must be estimated; second, a DC-DC converter must match the given load to the MPP thus obtained. The availability of power-electronics circuits makes it possible to design efficient converters. Although we do not show results from a real circuit, in this paper we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available MSX-60 solar panel. The power-electronics circuit is simulated with PSIM software.
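As a minimal sketch of the recursive idea, the stationarity condition dP/dV = 0 for a simplified single-diode model can be solved iteratively (bisection here). The diode parameters below are illustrative, not the MSX-60 datasheet values used in the paper.

```python
import numpy as np

# simplified single-diode PV model: I(V) = I_SC - I_0*(exp(V/V_T) - 1)
# (illustrative parameters, not the MSX-60 values)
I_SC, I_0, V_T = 3.8, 2e-7, 1.5

def current(v):
    return I_SC - I_0 * (np.exp(v / V_T) - 1.0)

def dP_dV(v):
    # P = V*I(V), so dP/dV = I(V) + V*I'(V)
    return current(v) - v * (I_0 / V_T) * np.exp(v / V_T)

def find_mpp(tol=1e-9):
    """Locate the MPP by bisection on dP/dV = 0 between short circuit
    (V = 0, where dP/dV = I_SC > 0) and open circuit (where dP/dV < 0)."""
    lo, hi = 0.0, V_T * np.log(I_SC / I_0 + 1.0)   # [0, V_oc]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dP_dV(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    v = 0.5 * (lo + hi)
    return v, v * current(v)

v_mpp, p_mpp = find_mpp()
print(round(v_mpp, 3), round(p_mpp, 2))
```

A tracking controller would re-run an update of this kind as irradiance and temperature drift, then command the DC-DC converter duty cycle so the load line passes through the new MPP.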
Sarmento, Andrea Gondim Leitao
2002-11-01
Evaluation of thyroid uptake by administration of radioactive iodine is a well-defined procedure to assess patient thyroid function. In general, nuclear medicine institutions use gamma cameras coupled to pinhole collimators to perform uptake studies. With the growing use of intraoperative gamma probes in radioguided surgical techniques, several institutions are purchasing this new, portable equipment, which can technically also be employed to assess thyroid function, freeing the gamma cameras for other applications. The aim of this study was to compare thyroid uptake trials carried out with both a gamma camera and an intraoperative gamma probe, in order to evaluate the possible use of the gamma probe for this purpose. First, a preliminary feasibility study was carried out using a neck phantom to verify equipment efficiency with known activities of ¹³¹I. Subsequently, data from 12 patients who underwent thyroid uptake studies were evaluated, 24 hours after oral administration of 370 kBq of ¹³¹I. The maximum difference observed between the values obtained with the two instruments was 60%, which demonstrated the feasibility of the proposed protocol and made clear that the gamma probe can be useful for thyroid uptake studies. (author)
Jiang, R. H.; Chou, H. C.; Chu, J. Y.; Chen, C.; Yen, T. J.
2016-09-01
Near-field scanning optical microscopy (NSOM) offers subwavelength optical resolution beyond the diffraction limit, enabling practical applications in optical imaging, sensing, and nanolithography. However, due to the sub-100 nm size of their apertures, conventional NSOM aperture probes suffer from strong attenuation of the throughput and limited spatial resolution. To solve this problem, we designed a novel scheme for apertureless plasmonic probes with radial internal illumination. Employing a non-periodic multi-ring geometry for plasmonic excitation, surface plasmons adiabatically nanofocus energy at the tip, and the full width at half maximum of the optimal design is 18 nm. The proposed probe was optimized with 2D finite-difference time-domain (FDTD) analysis and realistic parabolic probe geometries. Comprehensive electromagnetic simulation shows that the optimal probe obeys a Fabry-Pérot condition on the plasmonic metallic wall, giving rise to substantial field enhancement, up to 6 orders of magnitude greater than conventional aperture probes, without degrading spatial resolution. We fabricated the proposed probe, which possesses an apex angle (~22 degrees) and tip radius (~30 nm). The proposed near-field plasmonic probe, effectively combining the high resolution of apertureless probes with high throughput, can serve as a practical tool for near-field optical microscopy.
BINDER DRAINAGE TEST FOR POROUS MIXTURES MADE BY VARYING THE MAXIMUM AGGREGATE SIZES
Hardiman Hardiman
2004-01-01
Binder drainage occurs in mixes with a small aggregate surface area, particularly porous asphalt. The binder drainage test, developed by the Transport Research Laboratory, UK, is commonly used to set an upper limit on the acceptable binder content for a porous mix. This paper presents the results of a laboratory investigation to determine the effects of different binder types on the binder drainage characteristics of porous mixes made with various maximum aggregate sizes (20, 14, and 10 mm). Two types of binder were used: conventional 60/70 pen bitumen, and styrene-butadiene-styrene (SBS) modified bitumen. The amount of binder lost through drainage after three hours at the maximum mixing temperature was measured in duplicate for mixes of different maximum sizes and binder contents. The maximum mixing temperature adopted depends on the type of binder used. The retained binder is plotted against the initial mixed binder content, together with the line of equality where the retained binder equals the mixed binder content. The results indicate the significant contribution of SBS modified bitumen to increasing the target binder content. The significance is discussed in terms of the target binder content, critical binder content, maximum mixed binder content, and maximum retained binder content values obtained from the binder drainage test. It was concluded that increasing the maximum aggregate size decreases the maximum retained binder content, critical binder content, target binder content, and maximum mixed binder content for both binders; for all mixtures, however, the SBS-modified binder gave the highest values.
Volumetric synthetic aperture imaging with a piezoelectric 2D row-column probe
Bouzari, Hamed; Engholm, Mathias; Christiansen, Thomas Lehrmann; Beers, Christopher; Lei, Anders; Stuart, Matthias Bo; Nikolov, Svetoslav Ivanov; Thomsen, Erik Vilain; Jensen, Jørgen Arendt
2016-04-01
The synthetic aperture (SA) technique can be used for achieving real-time volumetric ultrasound imaging using 2-D row-column addressed transducers. This paper investigates SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed transducer array. Utilizing single element transmit events, a volume rate of 90 Hz down to 14 cm deep is achieved. Data are obtained using the experimental ultrasound scanner SARUS with a 70 MHz sampling frequency and beamformed using a delay-and-sum (DAS) approach. A signal-to-noise ratio of up to 32 dB is measured on the beamformed images of a tissue mimicking phantom with attenuation of 0.5 dB·cm⁻¹·MHz⁻¹, from the surface of the probe to the penetration depth of 300λ. Measured lateral resolution as Full-Width-at-Half-Maximum (FWHM) is between 4λ and 10λ for 18% to 65% of the penetration depth from the surface of the probe. The averaged contrast is 13 dB for the same range. The imaging performance assessment results may represent a reference guide for possible applications of such an array in different medical fields.
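The delay-and-sum step can be sketched for a single transmit event and a single field point. The element geometry and point scatterer below are made up; the 1540 m/s sound speed is the usual tissue convention and the 70 MHz sampling rate matches the acquisition described.

```python
import numpy as np

C = 1540.0    # speed of sound in tissue, m/s
FS = 70e6     # sampling frequency, Hz

def das_point(rf, tx_pos, rx_pos, point):
    """Delay-and-sum value at one field point for one transmit event.
    rf: (n_rx, n_samples) received data, tx_pos: (3,) transmit element,
    rx_pos: (n_rx, 3) receive elements. Nearest-sample interpolation and
    no apodization -- a minimal sketch of a DAS beamformer."""
    t_tx = np.linalg.norm(point - tx_pos) / C
    out = 0.0
    for i in range(rx_pos.shape[0]):
        t = t_tx + np.linalg.norm(point - rx_pos[i]) / C
        idx = int(round(t * FS))
        if 0 <= idx < rf.shape[1]:
            out += rf[i, idx]
    return out

# synthetic echo from one point scatterer; DAS should align all 8 channels
xs = np.linspace(-0.01, 0.01, 8)
rx = np.column_stack([xs, np.zeros(8), np.zeros(8)])
tx = np.array([0.0, 0.0, 0.0])
scat = np.array([0.0, 0.0, 0.03])
rf = np.zeros((8, 4096))
for i in range(8):
    t = np.linalg.norm(scat - tx) / C + np.linalg.norm(scat - rx[i]) / C
    rf[i, int(round(t * FS))] = 1.0
print(das_point(rf, tx, rx, scat))   # 8.0: all eight echoes sum coherently
```

A full SA image repeats this sum for every voxel and every single-element emission, which is why row-column addressing (fewer channels) makes the volumetric case tractable.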
A Miniature Four-Hole Probe for Measurement of Three-Dimensional Flow with Large Gradients
Ravirai Jangir
2014-01-01
A miniature four-hole probe with a sensing area of 1.284 mm² was designed, fabricated, calibrated, and validated to minimise the measurement errors due to the large pressure and velocity gradients that occur in highly three-dimensional turbomachinery flows. The probe has good spatial resolution in two directions, thus minimising spatial and flow-gradient errors. The probe was calibrated in an open-jet calibration tunnel at a velocity of 50 m/s over a yaw and pitch angle range of ±40 degrees at intervals of 5 degrees. The calibration coefficients are defined, determined, and presented. Sensitivity coefficients are also calculated and presented. A lookup-table method is used to determine the four unknown quantities, namely total and static pressures and the two flow angles. The maximum absolute errors in yaw and pitch angles are 2.4 and 1.3 degrees, respectively. The maximum absolute errors in total, static, and dynamic pressures are 3.4, 3.9, and 4.9% of the dynamic pressure, respectively. Measurements made with this probe, a conventional five-hole probe, and a miniature Pitot probe across a calibration section demonstrated that the errors due to gradients and surface proximity are considerably reduced for this probe compared to the five-hole probe.
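The lookup-table step can be illustrated schematically. The calibration map below is an invented smooth function standing in for wind-tunnel data; a real table would hold the measured coefficients over the ±40° grid.

```python
import numpy as np

def build_table(yaw_grid, pitch_grid, coeff_fn):
    """Tabulate (C_yaw, C_pitch, yaw, pitch) over the calibration grid.
    coeff_fn maps flow angles to pressure coefficients; here it is an
    invented analytic stand-in for measured calibration data."""
    rows = [(*coeff_fn(y, p), y, p) for y in yaw_grid for p in pitch_grid]
    return np.array(rows, dtype=float)

def lookup(table, c_yaw, c_pitch):
    """Nearest-entry lookup: return the (yaw, pitch) whose tabulated
    coefficient pair is closest to the measured one."""
    d2 = (table[:, 0] - c_yaw) ** 2 + (table[:, 1] - c_pitch) ** 2
    k = int(np.argmin(d2))
    return float(table[k, 2]), float(table[k, 3])

# hypothetical smooth calibration map, monotonic in each angle
coeffs = lambda y, p: (0.05 * y + 0.001 * y * p, 0.04 * p - 0.0005 * y)
grid = np.arange(-40, 45, 5)                  # +/-40 deg in 5 deg steps
table = build_table(grid, grid, coeffs)
cy, cp = coeffs(10.0, -15.0)                  # "measured" coefficients
print(lookup(table, cy, cp))                  # (10.0, -15.0)
```

In practice one interpolates between neighboring table entries rather than taking the nearest one, and the total and static pressure coefficients are recovered from the same matched entry.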
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...
How does a probe inserted into the discharge influence the plasma structure?
Yordanov, D.; Lishev, St.; Shivarova, A.
2016-05-01
Shielding of the bias applied to the probe by the sheath formed around it, and determination of the parameters of the unperturbed plasma, form the basis of probe diagnostics. The results from a two-dimensional model of a discharge with a probe inserted in it show that the probe influences the spatial distribution of the plasma parameters in the entire discharge. The increase (although slight) in the electron temperature, due to the increased losses of charged particles on the additional wall in the discharge (mainly the probe holder), leads to a redistribution of the plasma density and plasma potential, as shown by the results obtained at the floating potential of the probe. The deviations due to the bias applied to the probe tip are stronger in the ion-saturation region of the probe characteristics. The pattern of the spatial redistribution of the plasma parameters moves together with the probe as it is inserted deeper into the discharge. Although probe sheaths and probe characteristics resulting from the model are shown, the study does not aim to discuss theories for determining the plasma density from the ion saturation current. Regardless of the modifications of the plasma behavior in the entire discharge, the deviations of the plasma parameters at the position of the probe tip, and hence the uncertainty which should be added as an error when the accuracy of the probe diagnostics is estimated, do not exceed 10%. Consequently, the electron density and temperature, obtained respectively at the position of the plasma potential on the probe characteristics and from its transition region, are in reasonable agreement with the results from the model of the discharge without a probe. Being within the scope of research on a source of negative hydrogen ions designed as a matrix of small-radius inductive discharges, the model is specified for a low-pressure hydrogen discharge sustained in a small-radius tube.
Band excitation method applicable to scanning probe microscopy
Jesse, Stephen; Kalinin, Sergei V
2013-05-28
Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.
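The core idea, a drive signal with a predefined amplitude spectrum confined to a chosen band, can be sketched generically (this is not the patented implementation, and all parameters are arbitrary): define a flat amplitude over the band in the frequency domain, randomize the phase to spread the energy in time, and inverse-FFT back to a real waveform.

```python
import numpy as np

def band_excitation(f_lo, f_hi, fs, n, seed=0):
    """Synthesize a band-excitation waveform: unit amplitude inside
    [f_lo, f_hi], zero outside, with randomized phase to spread the energy
    in time, followed by an inverse real FFT to the time domain."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.where((freqs >= f_lo) & (freqs <= f_hi), 1.0, 0.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    sig = np.fft.irfft(amp * np.exp(1j * phase), n)
    return sig / np.abs(sig).max()      # normalize the drive amplitude

fs, n = 1.0e6, 4096                     # arbitrary sampling rate and length
sig = band_excitation(60e3, 90e3, fs, n)
```

The probe response measured in a second band can then be compared against this known drive spectrum, which is the essence of extracting the dynamic parameters; the adaptive variant would re-synthesize the band at each pixel.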
Probing black holes in non-perturbative gauge theory
Iizuka, N; Lifschytz, G; Lowe, D A; Iizuka, Norihiro; Kabat, Daniel; Lifschytz, Gilad; Lowe, David A.
2002-01-01
We use a 0-brane to probe a ten-dimensional near-extremal black hole with N units of 0-brane charge. We work directly in the dual strongly-coupled quantum mechanics, using mean-field methods to describe the black hole background non-perturbatively. We obtain the distribution of W boson masses, and find a clear separation between light and heavy degrees of freedom. To localize the probe we introduce a resolving time and integrate out the heavy modes. After a non-trivial change of coordinates, the effective potential for the probe agrees with supergravity expectations. We compute the entropy of the probe, and find that the stretched horizon of the black hole arises dynamically in the quantum mechanics, as thermal restoration of unbroken U(N+1) gauge symmetry. Our analysis of the quantum mechanics predicts a correct relation between the horizon radius and entropy of a black hole.
Probing Local Environments by Time-Resolved Stimulated Emission Spectroscopy
Ana Rei
2012-01-01
Time-resolved stimulated emission spectroscopy was employed to probe the local environment of DASPMI (4-(4-(dimethylamino)styryl)-N-methylpyridinium iodide) in binary solvents of different viscosity and in a sol-gel matrix. DASPMI is one of the molecules of choice for probing local environments, and the dependence of its fluorescence emission decay on viscosity has previously been used for this purpose in biological samples and solid matrices as well as in solution. The results presented in this paper show that time-resolved stimulated emission of DASPMI is a suitable means of probing the viscosity of local environments. Having the advantage of higher time resolution, stimulated emission can provide information complementary to that obtained from fluorescence decay measurements, making it feasible to probe systems with lower viscosity.
Thermal Analysis of Small Re-Entry Probe
Agrawal, Parul; Prabhu, Dinesh K.; Chen, Y. K.
2012-01-01
The Small Probe Reentry Investigation for TPS Engineering (SPRITE) concept was developed at NASA Ames Research Center to facilitate arc-jet testing of a fully instrumented prototype probe at flight scale. Besides demonstrating the feasibility of testing a flight-scale model and the capability of an on-board data acquisition system, another objective for this project was to investigate the capability of simulation tools to predict thermal environments of the probe/test article and its interior. This paper focuses on finite-element thermal analyses of the SPRITE probe during the arcjet tests. Several iterations were performed during the early design phase to provide critical design parameters and guidelines for testing. The thermal effects of ablation and pyrolysis were incorporated into the final higher-fidelity modeling approach by coupling the finite-element analyses with a two-dimensional thermal protection materials response code. Model predictions show good agreement with thermocouple data obtained during the arcjet test.
Magnetic probe array with high sensitivity for fluctuating field.
Kanamaru, Yuki; Gota, Hiroshi; Fujimoto, Kayoko; Ikeyama, Taeko; Asai, Tomohiko; Takahashi, Tsutomu; Nogi, Yasuyuki
2007-03-01
A magnetic probe array is constructed to measure precisely the spatial structure of a small fluctuating field included in a strong confinement field that varies with time. To exclude the effect of the confinement field, the magnetic probes consisting of figure-eight-wound coils are prepared. The spatial structure of the fluctuating field is obtained from a Fourier analysis of the probe signal. It is found that the probe array is more sensitive to the fluctuating field with a high mode number than that with a low mode number. An experimental demonstration of the present method is attempted using a field-reversed configuration plasma, where the fluctuating field with 0.1% of the confinement field is successfully detected.
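The Fourier analysis over an equally spaced azimuthal probe array can be sketched as follows; the signals are synthetic, with the 0.1% fluctuation level mirroring the experiment described above.

```python
import numpy as np

def mode_amplitudes(signals):
    """Fourier-decompose signals from an equally spaced azimuthal probe
    array into mode amplitudes |b_m| (m = 0 is the mean field; the Nyquist
    bin is slightly overcounted by the doubling, which is harmless here)."""
    n = len(signals)
    c = np.abs(np.fft.rfft(signals)) / n
    c[1:] *= 2.0
    return c

# synthetic data: an m = 2 fluctuation at 0.1% of the confinement field
theta = 2.0 * np.pi * np.arange(16) / 16
b = 1.0 + 1e-3 * np.cos(2.0 * theta + 0.3)
amps = mode_amplitudes(b)
print(round(amps[0], 6), round(amps[2], 6))   # 1.0 0.001
```

The figure-eight winding suppresses the large m = 0 confinement-field pickup in hardware, so the digitizer's dynamic range is spent on the small higher-m amplitudes that this decomposition extracts.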
Gammelgaard, Lauge; Bøggild, Peter; Wells, J.W.
2008-01-01
and a probe pitch of 500 nm. In-air four-point measurements have been performed on indium tin oxide, ruthenium, and titanium-tungsten, showing good agreement with values obtained by other four-point probes. In-vacuum four-point resistance measurements have been performed on clean Bi(111) using different probe...
A subcutaneous Raman needle probe.
Day, John C C; Stone, Nicholas
2013-03-01
Raman spectroscopy is a powerful tool for studying the biochemical composition of tissues and cells in the human body. We describe the initial results of a feasibility study to design and build a miniature, fiber optic probe incorporated into a standard hypodermic needle. This probe is intended for use in optical biopsies of solid tissues to provide valuable information of disease type, such as in the lymphatic system, breast, or prostate, or of such tissue types as muscle, fat, or spinal, when identifying a critical injection site. The optical design and fabrication of this probe is described, and example spectra of various ex vivo samples are shown.
Westphal, R. V.; Lemos, F. R.; Ligrani, P. M.
1989-01-01
Class of improved subminiature hot-wire flow-measuring probes developed. Smaller sizes yield improved resolution in measurements of practical aerodynamic flows. Probe made in one-wire, two-perpendicular-wire, and three-perpendicular-wire versions for measurement of one, two, or all three components of flow. Oriented and positioned on micromanipulator stage and viewed under microscope during fabrication. Tested by taking measurements in constant-pressure turbulent boundary layer. New probes give improved measurements of turbulence quantities near surfaces, where flow anisotropies strongly influence the relative errors caused by phenomena related to spatial resolution.
Optic probe for semiconductor characterization
Sopori, Bhushan L.; Hambarian, Artak
2008-09-02
Described herein is an optical probe (120) for use in characterizing surface defects in wafers, such as semiconductor wafers. The optical probe (120) detects laser light reflected from the surface (124) of the wafer (106) within various ranges of angles. Characteristics of defects in the surface (124) of the wafer (106) are determined based on the amount of reflected laser light detected in each of the ranges of angles. Additionally, a wafer characterization system (100) is described that includes the described optical probe (120).
Eddy Current Flexible Probes for Complex Geometries
Gilles-Pascaud, C.; Decitre, J. M.; Vacher, F.; Fermon, C.; Pannetier, M.; Cattiaux, G.
2006-03-01
The inspection of materials used in the aerospace, nuclear, or transport industries is a critical issue for the safety of components exposed to stress and/or corrosion. Industry calls for faster, more sensitive, and more flexible techniques. Technologies based on Eddy Current (EC) flexible array probes and magnetic sensors with high sensitivity, such as giant magneto-resistance (GMR) sensors, could be a good solution for detecting surface-breaking flaws in complex-shaped surfaces. The CEA has recently developed, with support from the French Institute for Radiological Protection and Nuclear Safety (IRSN), a flexible array probe based on micro-coils etched on Kapton. The probe's performance has been assessed for the inspection of reactor residual heat removal pipes, and for aeronautical applications within the framework of the European project VERDICT. The experimental results confirm very good detection of narrow cracks on plane and curved surfaces. This paper also describes recent progress in the application of GMR sensors to EC testing, and the results obtained for the detection of small surface-breaking flaws.
Antenna Near-Field Probe Station Scanner
Zaman, Afroz J. (Inventor); Lee, Richard Q. (Inventor); Darby, William G. (Inventor); Barr, Philip J. (Inventor); Lambert, Kevin M (Inventor); Miranda, Felix A. (Inventor)
2011-01-01
A miniaturized antenna system is characterized non-destructively through the use of a scanner that measures its near-field radiated power performance. When taking measurements, the scanner can be moved linearly along the x, y and z axis, as well as rotationally relative to the antenna. The data obtained from the characterization are processed to determine the far-field properties of the system and to optimize the system. Each antenna is excited using a probe station system while a scanning probe scans the space above the antenna to measure the near field signals. Upon completion of the scan, the near-field patterns are transformed into far-field patterns. Along with taking data, this system also allows for extensive graphing and analysis of both the near-field and far-field data. The details of the probe station as well as the procedures for setting up a test, conducting a test, and analyzing the resulting data are also described.
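The near-field-to-far-field transformation mentioned above is, for a planar scalar scan, a 2-D Fourier transform of the sampled aperture field. The sketch below is a single-polarization toy; real systems apply probe-corrected vector transforms.

```python
import numpy as np

def near_to_far(e_near, dx):
    """Planar near-field to far-field step: the far-field angular spectrum
    is the 2-D FFT of the sampled aperture field (scalar, no probe
    correction). Returns spatial frequencies (rad/m) and the spectrum."""
    n = e_near.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(e_near))
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    return k, spectrum

# toy aperture: uniformly illuminated square patch -> beam peaks at broadside
n, dx, lam = 64, 0.005, 0.03             # 5 mm sample spacing, 10 GHz
e = np.zeros((n, n), dtype=complex)
e[24:40, 24:40] = 1.0
k, F = near_to_far(e, dx)
k0 = 2.0 * np.pi / lam                    # only |k| < k0 radiates (visible region)
peak = tuple(int(i) for i in np.unravel_index(np.abs(F).argmax(), F.shape))
print(peak)                               # (32, 32): the broadside bin
```

The sample spacing must satisfy dx ≤ λ/2 for the angular spectrum to be alias-free, which is the constraint the scanner's x-y step size has to respect.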
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 CFR Part 211 (Creditable Railroad Compensation), § 211.14 Maximum creditable compensation: ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 CFR Part 230 (Allowable Stress), § 230.24 Maximum allowable stress: (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method that uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. The method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one with the highest prior (intrinsic) probability. Considering all points of the map as equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. The method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases some knowledge exists about the distribution under investigation before the measurements are performed. It can range from simple information such as the type of scattering electrons to an elaborate theoretical model. In these cases the uniform prior, which considers all pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition of the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model observed in the final map is really contained in the data, as with the new definition it costs entropy. This paper presents illustrations of model testing.
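The model-dependent entropy referred to above has the standard Skilling form S(ρ) = Σ [ρ(r) − m(r) − ρ(r) ln(ρ(r)/m(r))], whose gradient −ln(ρ/m) vanishes at ρ = m. A quick numerical check of the no-data statement:

```python
import numpy as np

def entropy(rho, m):
    """Skilling's entropy relative to a prior model m:
    S(rho) = sum(rho - m - rho*ln(rho/m)).
    Its gradient is -ln(rho/m), so with no data the maximum (S = 0)
    sits exactly at rho = m; any departure from the model costs entropy."""
    return float(np.sum(rho - m - rho * np.log(rho / m)))

m = np.array([0.2, 0.5, 1.0, 0.5, 0.2])        # a non-uniform prior model
print(entropy(m, m))                            # 0.0 (the maximum)
bumped = m + np.array([0.05, -0.02, 0.1, 0.0, -0.01])
print(entropy(bumped, m) < 0.0)                 # True: departures cost entropy
```

The flat-prior case is recovered by setting m constant, which is exactly the uniform-prior Boltzmann entropy the paper argues is too weak when prior knowledge exists.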
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.)

Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and show, with the help of simple examples of well-known chemical and physical systems, that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified, and there remains some uncertainty about the most appropriate choice of objective function. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, namely that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
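The abstract does not spell out the continuity-map model itself, so the sketch below substitutes the simplest likelihood-based anomaly scorer for categorical sequences, a first-order Markov chain with Laplace smoothing; it illustrates the idea of flagging sequences to which the trained model assigns low probability, not MALCOM's actual algorithm:

```python
from collections import defaultdict
import math

class MarkovScorer:
    """Illustrative stand-in for likelihood-based anomaly scoring of
    categorical sequences (NOT the continuity-map model): a first-order
    Markov chain over a fixed alphabet, with Laplace smoothing."""
    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequences):
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                self.counts[a][b] += 1
        return self

    def neg_log_likelihood(self, seq):
        """Average negative log-probability per transition; higher = more anomalous."""
        nll, n = 0.0, 0
        V = len(self.alphabet)
        for a, b in zip(seq, seq[1:]):
            total = sum(self.counts[a].values())
            p = (self.counts[a][b] + 1) / (total + V)  # Laplace smoothing
            nll -= math.log(p)
            n += 1
        return nll / max(n, 1)
```

A sequence that follows the training patterns scores a low average negative log-likelihood; a structurally unusual one scores higher and would be flagged for review.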
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, space of a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance: in particular, we show that, because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While generally a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
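The frame-free idea in the second part can be illustrated directly: take the minimum of any pairwise distance over the dihedral orbit (rotations and reflections) of one circular arrangement. This toy sketch uses a plain Hamming-style distance as a placeholder for the paper's likelihood-based distance:

```python
def dihedral_variants(seq):
    """All 2n symmetries of a circular arrangement of length n:
    the n rotations, plus the n rotations of the reflection
    (i.e. the action of the dihedral group)."""
    n = len(seq)
    out = []
    for s in (list(seq), list(reversed(seq))):
        for r in range(n):
            out.append(tuple(s[r:] + s[:r]))
    return out

def frame_free_distance(a, b, dist):
    """Smallest pairwise distance over the dihedral orbit of b, so the
    estimate does not depend on an arbitrary reading frame; `dist` is
    any distance on linearized arrangements (left abstract here)."""
    return min(dist(tuple(a), v) for v in dihedral_variants(b))
```

For example, (1, 2, 3, 4) and (2, 3, 4, 1) differ only in the choice of starting point, so their frame-free distance is zero.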
Sulagna Dutta; Krishna Rai Dastidar
2006-12-01
Dependence of amplification without inversion (AWI) on the relative strength of the probe and coherent field Rabi frequencies has been studied in H2 and LiH molecules for a three-level configuration. We have derived exact analytical expressions for the coherences and populations in the steady state limit, keeping all orders of the probe field Rabi frequency and the coherent field Rabi frequency. Previously, a first-order approximation (keeping only the first-order term in the probe field Rabi frequency) was used, and hence AWI was studied under the condition that the coherent field Rabi frequency greatly exceeds that of the probe field. Here, using the exact analytical expressions for the coherences and populations, we have shown that AWI is maximum when the coherent field Rabi frequency is of the same order as the probe field Rabi frequency, irrespective of the choice of ro-vibrational transitions in both molecules. However, the shape of the gain profile, the maximum gain on the probe field and the absorption on the coherent field depend on the choice of ro-vibrational levels used as the upper lasing levels. The effects of bidirectional pumping and of homogeneous and inhomogeneous broadening on the AWI process have been studied. By solving the density matrix equations numerically, it has been shown that both transient and steady state AWI can be obtained, and that the numerical values of the coherences and populations at large times agree very well with the exact analytical values in the steady state limit. It has been shown that in molecules AWI can be obtained on a probe field of smaller wavelength than that of the coherent field, which has not been observed in atoms so far.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Tests of maximum oxygen intake. A critical review.
Shephard, R J
1984-01-01
The determinants of endurance effort vary, depending upon the extent of the muscle mass that is activated. Large muscle work, such as treadmill running, is halted by impending circulatory failure; lack of venous return may compound the basic problem of an excessive cardiac work-load. If the task calls for use of a smaller muscle mass, there is ultimately difficulty in perfusing the active muscles, and glycolysis is halted by an accumulation of acid metabolites. Simple field tests of endurance, such as Cooper's 12-minute run and the Canadian Home Fitness Test, have some value in the rapid screening of large populations, but like other submaximal tests of human performance they lack the precision needed to advise the individual. The directly measured maximum oxygen intake (VO2 max) varies with the type of exercise. The highest values are obtained during uphill treadmill running, but well trained athletes often approach these values during performance of sport-specific tasks. Limitations of methodology and wide interindividual variations of constitutional potential limit the interpretation of maximum oxygen intake data in terms of personal fitness, exercise prescription and the monitoring of training responses. The main practical value of VO2 max measurement is in the functional assessment of patients with cardiorespiratory disease, since changes are then large relative to the precision of the test.
Optimization of agitation and aeration conditions for maximum virginiamycin production.
Shioya, S; Morikawa, M; Kajihara, Y; Shimizu, H
1999-02-01
To maximize the productivity of virginiamycin, a commercially important antibiotic used as an animal feed additive, an empirical approach was employed in the batch culture of Streptomyces virginiae. The effects of dissolved oxygen (DO) concentration and agitation speed on the maximum cell concentration at the production phase, as well as on the productivity of virginiamycin, were investigated. To maintain the DO concentration in the fermentor at a certain level, either the agitation speed or the inlet oxygen concentration of the supply gas was manipulated. It was found that increasing the agitation speed had a positive effect on antibiotic productivity independent of the DO concentration. The optimum DO concentration, agitation speed and addition of an autoregulator, virginiae butanolide C (VB-C), were determined to maximize virginiamycin productivity. The optimal strategy was to start the cultivation at 450 rpm and continue until the DO concentration reached 80%. After that point, the DO concentration was maintained at this level by changing the agitation speed, up to a maximum of 800 rpm. The addition of an optimal amount of the autoregulator VB-C resulted in the maximal production of virginiamycin M (399 mg/l), about 1.8-fold that obtained previously.
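The reported operating strategy can be written down as a small control rule. The set-points (450 rpm start, 80% DO, 800 rpm ceiling) come from the abstract; the proportional gain and the update form are illustrative assumptions:

```python
def agitation_setpoint(do_percent, reached_80, rpm,
                       kp=5.0, rpm_min=450.0, rpm_max=800.0):
    """Sketch of the reported strategy: run at 450 rpm until DO first
    reaches 80%, then hold DO near 80% by adjusting agitation within
    [450, 800] rpm.  kp is an assumed proportional gain, not a value
    from the paper.  Returns (new_rpm, now_in_hold_phase)."""
    if not reached_80 and do_percent < 80.0:
        return rpm_min, False               # ramp-up phase: fixed 450 rpm
    # hold phase: proportional correction toward the 80% DO set-point
    new_rpm = rpm + kp * (80.0 - do_percent)
    return min(max(new_rpm, rpm_min), rpm_max), True
```

Low DO drives agitation up (more oxygen transfer), high DO drives it down, and the 800 rpm ceiling reflects the reported maximum.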
Maximum power point tracking of partially shaded solar photovoltaic arrays
Roy Chowdhury, Shubhajit; Saha, Hiranmay [IC Design and Fabrication Centre, Department of Electronics and Telecommunication Engineering, Jadavpur University (India)
2010-09-15
The paper presents the simulation and hardware implementation of maximum power point (MPP) tracking of a partially shaded solar photovoltaic (PV) array using a variant of Particle Swarm Optimization known as Adaptive Perceptive Particle Swarm Optimization (APPSO). Under partially shaded conditions, the PV array characteristics become more complex, with multiple maxima in the power-voltage characteristic. The paper presents an algorithmic technique to accurately track the MPP of a PV array using APPSO, and the algorithm is validated in the current work. The proposed technique uses only one pair of sensors to control multiple PV arrays. This results in a lower cost and a higher accuracy of 97.7%, compared with the 96.41% obtained earlier using standard Particle Swarm Optimization. The proposed tracking technique has been mapped onto an MSP430FG4618 microcontroller for tracking and control purposes. The whole system based on the proposed technique has been realized on a standard two-stage power electronic system configuration. (author)
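As a rough illustration of why a swarm-based tracker copes with multiple maxima, here is a plain PSO (not the adaptive perceptive variant, whose specifics the abstract does not give) searching a multi-peaked power-voltage curve; the curve and all parameters are invented for the demo:

```python
import random

def pso_mpp(power, v_min, v_max, n=20, iters=60,
            w=0.6, c1=1.5, c2=1.5, seed=1):
    """Plain particle swarm search for the global maximum of a
    (possibly multi-peaked) power-voltage curve `power` on [v_min, v_max].
    A simplified stand-in for the adaptive perceptive variant."""
    rng = random.Random(seed)
    pos = [rng.uniform(v_min, v_max) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    pval = [power(x) for x in pos]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], v_min), v_max)
            p = power(pos[i])
            if p > pval[i]:
                pbest[i], pval[i] = pos[i], p
            if p > gval:
                gbest, gval = pos[i], p
    return gbest, gval
```

On a curve with a local peak of 30 W at 5 V and a global peak of 50 W at 12 V, the swarm settles on the global maximum, where a hill-climbing perturb-and-observe tracker started near 5 V would stall on the local one.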
THE MAXIMUM AND MINIMUM DEGREES OF RANDOM BIPARTITE MULTIGRAPHS
Chen Ailian; Zhang Fuji; Li Hao
2011-01-01
In this paper the authors generalize the classic random bipartite graph model and define a model of random bipartite multigraphs as follows: let m = m(n) be a positive integer-valued function of n, and let G(n, m; {pk}) be the probability space consisting of all labeled bipartite multigraphs with two vertex sets A = {a1, a2, ..., an} and B = {b1, b2, ..., bm}, in which the numbers t(ai, bj) of edges between any two vertices ai ∈ A and bj ∈ B are independent, identically distributed random variables with distribution P{t(ai, bj) = k} = pk, k = 0, 1, 2, ..., where pk ≥ 0 and Σ pk = 1. They obtain that X(c, d; A), the number of vertices in A with degree between c and d of a random multigraph in G(n, m; {pk}), asymptotically has a Poisson distribution, and they answer the following two questions about the space G(n, m; {pk}) with {pk} having a geometric, binomial, or Poisson distribution, respectively. Under which condition on {pk} is there a function D(n) such that almost every random multigraph in G(n, m; {pk}) has maximum degree D(n) in A? Under which condition on {pk} does almost every multigraph in G(n, m; {pk}) have a unique vertex of maximum degree in A?
Ezumi, N., E-mail: ezumi@ec.nagano-nct.ac.jp [Nagano National College of Technology, 716 Tokuma, Nagano 381-8550 (Japan); Todoroki, K. [Nagano National College of Technology, 716 Tokuma, Nagano 381-8550 (Japan); Kobayashi, T. [Nagoya University, Nagoya 464-8603 (Japan); Sawada, K. [Shinshu University, Nagano 380-8553 (Japan); Ohno, N. [Nagoya University, Nagoya 464-8603 (Japan); Kobayashi, M.; Masuzaki, S. [National Institute for Fusion Science, Toki 509-5292 (Japan); Feng, Y. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany)
2011-08-01
Spatial profiles of the plasma flow, electron temperature (T_e) and ion temperature (T_i) in the stochastic magnetic boundary layer of the Large Helical Device (LHD) have been studied by simultaneous measurements using a movable multiple-function probe, which consists of Mach probes and an ion sensitive probe. The tendency of the measured spatial profiles of T_e and T_i is similar to that of the three-dimensional simulation. The results of ion saturation current (I_sat) measurements with the upstream and downstream probes indicate that the plasma flow direction is reversed in the stochastic magnetic boundary layer. I_sat observations obtained deep inside the boundary layer contradict the simulation result, even though the existence of flow reversal in the LHD stochastic magnetic boundary layer was qualitatively confirmed.
Zacny, K.; Nagihara, S.; Hedlund, M.; Paulsen, G.; Shasho, J.; Mumm, E.; Kumar, N.; Szwarc, T.; Chu, P.; Craft, J.; Taylor, P.; Milam, M.
2013-11-01
In this paper, the development of heat flow probes for measuring the geothermal gradient and conductivity of lunar regolith are presented. These two measurements are the required information for determining the heat flow of a planetary body. Considering the Moon as an example, heat flow properties are very important information for studying the radiogenic isotopes, the thermal evolution and differentiation history, and the mechanical properties of the interior. In order to obtain the best measurements, the sensors must be extended to a depth of at least 3 m, i.e. beyond the depth of significant thermal cycles. Two approaches to heat flow deployment and measurement are discussed in this paper: a percussive approach and a pneumatic approach. The percussive approach utilizes a high frequency hammer to drive a cone penetrometer into the lunar simulant. Ring-like thermal sensors (heaters and temperature sensors) on the penetrometer rod are deployed into the simulant every 30 cm as the penetrometer penetrates to the required 3 m depth. Once the target depth has been achieved, the deployment rod is removed from the simulant, eliminating any thermal path to the lander. The pneumatic approach relies on pressurized gas to excavate, using a cone-shaped nozzle to penetrate the simulant. The nozzle is attached to a coiled stem with thermal sensors embedded along the length of the stem. As the simulant is being lofted out of the hole by the escaping gas, the stem is progressively reeled out from a spool, thus moving the cone deeper into the hole. Thermal conductivity is measured using a needle probe attached to the end of the cone. Breadboard prototypes of these two heat flow probe systems have been constructed and successfully tested under lunar-like conditions to approximately 70 cm, which was the maximum possible depth allowed by the size of the test bin and the chamber.
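The two measured quantities combine into a heat flow estimate through one line of physics, Fourier's law q = k·dT/dz. A minimal reduction of the sensor readings (numbers purely illustrative) might look like:

```python
def heat_flow(depths_m, temps_K, k_W_mK):
    """Planetary heat flow from the two quantities the probes measure:
    the thermal gradient (least-squares slope of temperature vs depth
    from the ring sensors) and the thermal conductivity k from the
    needle probe.  Returns q = k * dT/dz in W/m^2, positive when
    temperature increases with depth."""
    n = len(depths_m)
    zb = sum(depths_m) / n
    tb = sum(temps_K) / n
    slope = (sum((z - zb) * (t - tb) for z, t in zip(depths_m, temps_K))
             / sum((z - zb) ** 2 for z in depths_m))  # K per metre
    return k_W_mK * slope
```

The insistence on reaching 3 m depth in the abstract is what makes the fitted slope meaningful: shallower sensors see diurnal and seasonal cycles rather than the steady geothermal gradient.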
High precision Hugoniot measurements of D2 near maximum compression
Benage, John; Knudson, Marcus; Desjarlais, Michael
2015-11-01
The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~ 30-40 GPa near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot and take advantage of advancements in the platform and standards, resulting in data with significantly higher precision than that obtained in previous studies. These new data may prove to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparison to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
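For context, the compression plotted on a Hugoniot follows from the shock jump conditions; mass conservation alone gives the density ratio from the measured shock and particle velocities. A one-line reminder (a standard relation, not specific to these experiments):

```python
def hugoniot_density_ratio(us, up, rho0=1.0):
    """Rankine-Hugoniot mass conservation across a steady shock:
    rho / rho0 = Us / (Us - up), with Us the shock velocity and
    up the particle velocity behind the shock (same units)."""
    return rho0 * us / (us - up)
```

For example, a measured Us of 20 km/s with up of 15 km/s implies fourfold compression; near maximum compression the ratio becomes very sensitive to small velocity errors, which is why the improved precision matters.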
Ultraspecific probes for high throughput HLA typing
Eggers Rick
2009-02-01
Background: The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results: We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest. It then calculates, for each candidate probe, the number of base changes necessary to convert it into the closest subsequence within the set of sequences likely to be present in the sample (including the remainder of the human genome), in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, preliminary steps such as PCR amplification may no longer be necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest-resolution typing can be achieved at the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion: The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
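The specificity test at the heart of the second step, the fewest base changes separating a candidate probe from anything else it might hybridize to, reduces to a minimum Hamming distance over all equal-length windows of the background sequences. A brute-force sketch (illustrative; a production pipeline would index the genome rather than scan it):

```python
def min_base_changes(probe, background_seqs):
    """Fewest base substitutions converting `probe` into any equal-length
    subsequence of the background set -- the quantity a design pipeline
    can threshold to call a probe 'ultraspecific'.  Brute-force scan."""
    k = len(probe)
    best = k
    for seq in background_seqs:
        for i in range(len(seq) - k + 1):
            d = sum(1 for a, b in zip(probe, seq[i:i + k]) if a != b)
            best = min(best, d)
    return best
```

A probe whose score stays above a chosen threshold against everything except its target allele is a candidate "ultraspecific" probe.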
Eriksson, A.I.; Bostroem, R.
1995-04-01
Spherical electrostatic probes are in wide use for measurements of electric fields and plasma density. This report concentrates on measurements of fluctuations in these quantities rather than their background values. Potential problems with the technique include the influence of density fluctuations on electric field measurements and vice versa, effects of varying satellite potential, and non-linear rectification in the probe and satellite sheaths. To study the actual importance of these and other possible effects, we simulate the response of the probe-satellite system to various wave phenomena in the plasma by applying approximate analytical as well as numerical methods. We use a set of non-linear probe equations based on probe characteristics experimentally obtained in space, and therefore essentially independent of any specific probe theory. This approach is very useful since the probe theory for magnetized plasmas is incomplete. 47 refs.
Lefebvre, W; Hernandez-Maldonado, D; Moyon, F; Cuvilly, F; Vaudolon, C; Shinde, D; Vurpillot, F
2015-12-01
The geometry of atom probe tomography tips differs strongly from that of standard scanning transmission electron microscopy foils: whereas the latter are rather flat and thin, atom probe tomography specimens are sharp needles. Based on simulations (electron probe propagation and image simulations), the possibility of applying quantitative high angle annular dark field scanning transmission electron microscopy to atom probe tomography specimens has been tested. The influence of electron probe convergence and the benefit of deconvolving the electron probe point spread function have been established. Atom counting in atom probe tomography specimens is reported for the first time in this work. It is demonstrated that, based on single projections of high angle annular dark field imaging, significant quantitative information can be used as additional input for refining the data obtained by correlative analysis of the specimen in APT, thereby opening new perspectives in the field of atomic scale tomography.
Vector control structure of an asynchronous motor at maximum torque
Chioncel, C. P.; Tirian, G. O.; Gillich, N.; Raduca, E.
2016-02-01
Vector control methods offer the possibility of attaining high performance and are widely used. Certain applications require optimum control in limit operating conditions, such as operation at maximum torque, which is not always satisfied. The paper presents how the voltage and the frequency for an asynchronous machine (ASM) operating at variable speed are determined, with emphasis on the method that keeps the rotor flux constant. The simulation analyses consider three load types: variable torque and speed, variable torque and constant speed, and constant torque and variable speed. The final values of frequency and voltage are obtained through the proposed control schemes with a single controller, using a simulation language based on the Maple module. The dynamic analysis of the system is carried out for the cases with P and PI controllers and allows conclusions on the proposed method, which can have different applications, such as the ASM in wind turbines.
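Keeping the rotor flux constant ties the stator voltage approximately to the supply frequency. The sketch below is the textbook constant-V/f rule with a low-speed boost, not the paper's specific scheme; all ratings are assumed values:

```python
def vf_setpoint(f_hz, v_rated=400.0, f_rated=50.0, v_boost=10.0):
    """Constant-flux (V/f) sketch: stator voltage grows in proportion to
    frequency so the flux stays roughly constant, with a small low-speed
    boost to offset the stator resistance drop.  The 400 V / 50 Hz
    ratings and 10 V boost are illustrative assumptions."""
    return min(v_rated, v_boost + (v_rated - v_boost) * f_hz / f_rated)
```

True rotor-flux-oriented vector control, as in the paper, closes a loop on the estimated flux instead of using this open-loop ratio, but the proportionality is the underlying constraint in both cases.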
Conjugate variables in continuous maximum-entropy inference.
Davis, Sergio; Gutiérrez, Gonzalo
2012-11-01
For a continuous maximum-entropy distribution (obtained from an arbitrary number of simultaneous constraints), we derive a general relation connecting the Lagrange multipliers and the expectation values of certain particularly constructed functions of the states of the system. From this relation, an estimator for a given Lagrange multiplier can be constructed from derivatives of the corresponding constraining function. These estimators sometimes lead to the determination of the Lagrange multipliers by way of solving a linear system, and, in general, they provide another tool to widen the applicability of Jaynes's formalism. This general relation, especially well suited for computer simulation techniques, also provides some insight into the interpretation of the hypervirial relations known in statistical mechanics and the recently derived microcanonical dynamical temperature. We illustrate the usefulness of these new relations with several applications in statistics.
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, a low-contrast edge image is optimally and adaptively classified into two classes, under the conditions of probability partition and fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric gray-level transformation is applied to the obtained classes. By means of a two-parameter representation, the homogeneity of the regions in the edge image is improved. Simulations on a set of test images show that the proposed technique performs excellently in terms of homogeneity, and that the extracted and enhanced edges provide an efficient edge representation of images.
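A crisp (non-fuzzy) cousin of the thresholding step is Kapur's maximum-entropy threshold, which picks the grey level maximizing the summed entropies of the two classes; the sketch below illustrates that selection rule, not the paper's fuzzy partition or its gray-level transformation:

```python
import math

def maxent_threshold(hist):
    """Kapur-style maximum-entropy threshold over a grey-level histogram:
    choose t maximizing H(class below t) + H(class from t up), each class
    renormalized to a probability distribution.  A crisp simplification
    of the fuzzy-entropy partition in the paper."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t  # grey levels < t form the dark class
```

On a bimodal histogram the maximizer falls in the valley between the two modes, which is where an edge/background split belongs.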
Efficiency at maximum power of a chemical engine.
Hooyberghs, Hans; Cleuren, Bart; Salazar, Alberto; Indekeu, Joseph O; Van den Broeck, Christian
2013-10-01
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power η_mp takes the form 1/2 + cΔμ + O(Δμ²), with 1/2 a universal constant and Δμ the chemical potential difference between the particle reservoirs. The linear coefficient c is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in η_mp is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model, we obtain η_mp = 1/(θ + 1), with θ > 0 the power of Δμ in the transport equation.
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling
2013-09-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for a high-tech perishable products supply chain and obtain the optimal conditions and results. On this basis, we further investigate, through data simulation, the effect of the location of the CODP (customer order decoupling point) on the total cost, and the relations among the CODP, inventory policy and demand type. The simulation results show that the CODP lies downstream in the product life cycle and is a linear function of the product life cycle. The results indicate that the demand forecast is the main factor influencing the total cost; meanwhile, production according to the demand forecast is the deciding factor of the total cost. The model can also reflect the relation between the total cost of the two-stage supply chain and inventory and demand.
Maximum solid solubility of transition metals in vanadium solvent
ZHANG Jin-long; FANG Shou-shi; ZHOU Zi-qiang; LIN Gen-wen; GE Jian-sheng; FENG Feng
2005-01-01
Maximum solid solubility (Cmax) of different transition metals in a metal solvent can be described by a semi-empirical equation using a function Zf that contains the electronegativity difference, atomic diameter and electron concentration. The relation between Cmax and these parameters for transition metals in a vanadium solvent was studied. It is shown that the relation between Cmax and the function Zf can be expressed as ln Cmax = Zf = 7.3165 - 2.7805(ΔX)² - 71.278δ² - 0.85556n^(2/3). The atomic-size factor has the largest effect on Cmax in binary vanadium alloys, followed by the electronegativity difference; the electron concentration has the smallest effect among the three bond parameters. The function Zf is used for predicting unknown values of Cmax of transition metals in a vanadium solvent. The results are compared with the Darken-Gurry theorem, which can be deduced from the function Zf obtained in this work.
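Reading the fitted relation back as code makes the roles of the three bond parameters explicit. The constants come from the equation above; the example inputs are made up:

```python
import math

def cmax_vanadium(dX, delta, n):
    """Evaluate the fitted relation from the abstract:
    ln Cmax = 7.3165 - 2.7805*(dX)^2 - 71.278*delta^2 - 0.85556*n^(2/3),
    where dX is the electronegativity difference, delta the atomic-size
    parameter and n the electron concentration.  Returns Cmax itself."""
    ln_cmax = (7.3165
               - 2.7805 * dX ** 2
               - 71.278 * delta ** 2
               - 0.85556 * n ** (2.0 / 3.0))
    return math.exp(ln_cmax)
```

Because the δ² term carries by far the largest coefficient, a small increase in the atomic-size parameter suppresses Cmax much more than a comparable change in ΔX or n, matching the ordering of effects the abstract reports.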
Maximum Power Point Tracking of Photovoltaic System Using Intelligent Controller
Swathy C.S
2013-04-01
Photovoltaic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when changes in temperature and solar irradiation occur. This overcomes the problem of mismatch between the given load and the solar array. The energy conservation principle is used to obtain the small-signal model and transfer function. A simulation of an MPPT controller with a DC/DC boost converter feeding a load was carried out. A PI controller and a fuzzy logic controller were used as the MPPT controller, which controls the DC/DC converter. Simulation and experimental results showed excellent performance and were used to compare the PI controller and the fuzzy logic controller.
Efficiency at maximum power of a chemical engine
Hooyberghs, Hans; Salazar, Alberto; Indekeu, Joseph O; Broeck, Christian Van den
2013-01-01
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power $\eta$ takes the form $1/2 + c\Delta\mu + O(\Delta\mu^2)$, with $1/2$ a universal constant and $\Delta\mu$ the chemical potential difference between the particle reservoirs. The linear coefficient $c$ is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in $\eta$ is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model we obtain $\eta = 1/(\theta + 1)$, with $\theta > 0$ the power of $\Delta\mu$ in the transport equation.
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak-field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The bias of the estimators is analysed in depth.
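In the weak-field regime, Stokes V is proportional to the wavelength derivative of the intensity, V(λ) ∝ B_los·I′(λ), so for Gaussian noise the maximum-likelihood field estimate reduces to a least-squares projection of V onto I′. The sketch below illustrates this on synthetic data; the Gaussian line shape, the noise level, and the field-scaling constant α are all hypothetical, not the paper's calibration:

```python
import math, random

random.seed(7)

# Synthetic absorption line I(lam) = 1 - depth*exp(-lam^2/(2*width^2)), hypothetical
depth, width = 0.5, 0.1
lams = [-0.5 + 0.005 * i for i in range(201)]

def intensity_deriv(lam):
    """Analytic dI/dlambda of the synthetic Gaussian line."""
    return depth * lam / width**2 * math.exp(-lam**2 / (2.0 * width**2))

# Weak-field forward model: V = alpha * I' + noise, alpha standing in for
# the (scaled) line-of-sight field strength.
alpha_true, sigma = 0.05, 1e-3
stokes_v = [alpha_true * intensity_deriv(l) + random.gauss(0.0, sigma) for l in lams]

# Maximum-likelihood (least-squares) estimate and its 1-sigma error
di = [intensity_deriv(l) for l in lams]
s_dd = sum(x * x for x in di)
alpha_hat = sum(v * x for v, x in zip(stokes_v, di)) / s_dd
alpha_err = sigma / math.sqrt(s_dd)
```

The closed-form estimator alpha_hat = Σ V·I′ / Σ I′² is what makes the "analytical" estimation possible: no iterative inversion is needed, and the error follows directly from the curvature of the likelihood.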
Small Probe Reentry System Project
National Aeronautics and Space Administration — Global Aerospace Corporation (GAC), and its research partner, Cal Poly San Luis Obispo (CPSLO), will develop an integrated Small Probe Reentry System (SPRS) for low...
Lunar Probe Reaches Deep Space
2011-01-01
China's second lunar probe, Chang'e-2, has reached an orbit 1.5 million kilometers from Earth for an additional mission of deep space exploration, the State Administration for Science, Technology and Industry for National Defense announced.
Boolean functions of an odd number of variables with maximum algebraic immunity
LI Na; QI WenFeng
2007-01-01
In this paper, we study Boolean functions of an odd number of variables with maximum algebraic immunity. We identify three classes of such functions and give some necessary conditions for such functions, which help to examine whether a Boolean function of an odd number of variables has maximum algebraic immunity. Further, some necessary conditions for such functions to also have higher nonlinearity are proposed, and a class of these functions is also obtained. Finally, we present a sufficient and necessary condition for Boolean functions of an odd number of variables to achieve maximum algebraic immunity and also be 1-resilient.
DNA probe for lactobacillus delbrueckii
Delley, M.; Mollet, B.; Hottinger, H. (Nestle Research Centre, Lausanne (Switzerland))
1990-06-01
From a genomic DNA library of Lactobacillus delbrueckii subsp. bulgaricus, a clone was isolated which complements a leucine auxotrophy of an Escherichia coli strain (GE891). Subsequent analysis of the clone indicated that it could serve as a specific DNA probe. Dot-blot hybridizations with over 40 different Lactobacillus strains showed that this clone specifically recognized L. delbrueckii subsp. delbrueckii, bulgaricus, and lactis. The sensitivity of the method was tested by using an α-³²P-labeled probe.
Utsuzawa, Shin; Mandal, Soumyajit; Song, Yi-Qiao
2012-03-01
In this study, we propose an NMR probe circuit that uses a transformer with a ferromagnetic core for impedance matching. The ferromagnetic core provides a strong but confined coupling that results in efficient energy transfer between the sample coil and the NMR spectrometer, while not disturbing the B1 field generated by the sample coil. We built a transformer-coupled NMR probe and found that it offers comparable performance (loss NQR.
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Afterwards, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction Zone
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. mp(T) can be computed using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000-year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and the turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
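One way to make mp(T) concrete: under a Poisson occurrence model with a TGR magnitude distribution, the probability that no event in time T exceeds m is exp(−λ·T·G(m)), where G is the TGR survival function; the median of the maximum magnitude then solves λ·T·G(mp) = ln 2. The sketch below assumes this particular definition and uses illustrative parameter values (rate, β, corner magnitude), none of which are taken from the Cascadia analysis:

```python
import math

def moment(m):
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori relation)."""
    return 10.0 ** (1.5 * m + 9.1)

def tgr_survival(m, beta, m_corner, m_threshold):
    """Tapered Gutenberg-Richter survival function: fraction of events at or
    above the threshold magnitude that reach magnitude m or larger."""
    mo, mo_t, mo_c = moment(m), moment(m_threshold), moment(m_corner)
    return (mo_t / mo) ** beta * math.exp((mo_t - mo) / mo_c)

def probable_max_magnitude(rate, years, beta, m_corner, m_threshold, p=0.5):
    """Median (p=0.5) of the maximum magnitude in `years`, assuming Poisson
    occurrence of events above m_threshold at `rate` events per year."""
    target = -math.log(p) / (rate * years)
    if target >= 1.0:          # interval too short: maximum stays at threshold
        return m_threshold
    lo, hi = m_threshold, m_threshold + 6.0
    for _ in range(100):       # bisection on the monotone survival function
        mid = 0.5 * (lo + hi)
        if tgr_survival(mid, beta, m_corner, m_threshold) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (hypothetical) parameters: 3 events/yr above M5, beta = 0.65, mc = 9.0
mp_10k = probable_max_magnitude(3.0, 10000.0, 0.65, 9.0, 5.0)
mp_100 = probable_max_magnitude(3.0, 100.0, 0.65, 9.0, 5.0)
```

Note how mp(T) grows with the observation window T but is capped in practice by the corner magnitude, which is exactly why the corner-magnitude estimate drives the hazard result.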
Comparison of magnetic probe calibration at nano and millitesla magnitudes.
Pahl, Ryan A; Rovey, Joshua L; Pommerenke, David J
2014-01-01
mounted probe and 12.0% for the hand-wound probe. The maximum difference between relevant and low magnitude tests was 21.5%.
Speech processing using maximum likelihood continuity mapping
Hogden, John E. (Santa Fe, NM)
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Speech processing using maximum likelihood continuity mapping
Hogden, J.E.
2000-04-18
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
MAXIMUM INFORMATION AND OPTIMUM ESTIMATING FUNCTION
林路
2003-01-01
In order to construct estimating functions in some parametric models, this paper introduces two classes of information matrices. Some necessary and sufficient conditions for the information matrices achieving their upper bounds are given. For the problem of estimating the median, some optimum estimating functions based on the information matrices are acquired. Under some regularity conditions, an approach to finding the best basis function is introduced. In nonlinear regression models, an optimum estimating function based on the information matrices is obtained. Some examples are given to illustrate the results. Finally, the concept of optimum estimating function and the methods of constructing optimum estimating functions are developed in more general statistical models.
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
Multi-element eddy current probe for inspecting steam generator tubes
Savin, E.; Sartre, B. [FRAMATOME, 92 - Paris-La-Defense (France); Placko, D.; Premel, D. [Ecole Nationale Superieure de Cachan, 94 (France)
2000-10-01
Framatome and the Ecole Normale Superieure de Cachan are developing a multi-element eddy current probe for inspecting the steam generator tubes of 900 MWe PWR reactors. The device is intended to replace much slower rotating probes. Using its measurements, the conductivity image of any point in the tube can be reconstructed thanks to a numerical model, thus allowing diagnosis. The first trial results on mockups already seem competitive with those obtained using a rotary probe. (authors)
Processing outcomes of the AFM probe-based machining approach with different feed directions
2016-01-01
We present experimental and theoretical results to describe and explain processing outcomes when producing nanochannels that are a few times wider than the atomic force microscope (AFM) probe using an AFM. This is achieved when AFM tip-based machining is performed with reciprocating motion of the tip of the AFM probe. In this case, different feed directions with respect to the orientation of the AFM probe can be used. The machining outputs of interest are the chip formation process, obtained ...
Monitoring Biophysical Properties of Lipid Membranes by Environment-Sensitive Fluorescent Probes
Demchenko, Alexander P.; Mély, Yves; Duportail, Guy; Klymchenko, Andrey S
2009-01-01
We review the main trends in the development of fluorescence probes to obtain information about the structure, dynamics, and interactions in biomembranes. These probes are efficient for studying the microscopic analogs of viscosity, polarity, and hydration, as well as the molecular order, environment relaxation, and electrostatic potentials at the sites of their location. Progress is being made in increasing the information content and spatial resolution of the probe responses. Multichannel e...
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
无
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Wang, Hefeng
2014-08-14
We present a quantum algorithm that provides a general approach for obtaining the energy spectrum of a physical system without making a guess on its eigenstates. In this algorithm, a probe qubit is coupled to a quantum register R which consists of one ancilla qubit and an n-qubit register that represents the system. R is prepared in a general reference state, and a general excitation operator that acts on R is constructed. The probe exhibits a dynamical response only when it is resonant with a transition from the reference state to an excited state of R which contains the eigenstates of the system. By varying the probe's frequency, the energy spectrum and the eigenstates of the system can be obtained.
Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer
Bo-Hee Choi
2016-01-01
Full Text Available This paper presents a new design method for asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator; the maximum PTE for relay resonators is obtained at resonances different from that of the unit resonator. The optimum design of the asymmetrical relay is conducted through both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance can be found by using a genetic algorithm (GA). The PTEs are enhanced when the capacitance is optimally designed by the GA according to the position of the relays, and the maximum efficiency is then obtained at the optimum placement of the relays. The capacitances of the second to nth resonators and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.
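As an illustration of the GA step, the sketch below evolves two relay capacitances toward the maximum of a stand-in efficiency function. The fitness surface (a Gaussian peaked at hypothetical values of 47 pF and 33 pF) and all GA settings are invented placeholders, not the paper's circuit model:

```python
import math, random

random.seed(1)

def pte_surrogate(c1, c2):
    """Stand-in for a simulated power transfer efficiency (hypothetical):
    peaks at c1 = 47, c2 = 33 (pF) and falls off smoothly elsewhere."""
    return math.exp(-((c1 - 47.0) ** 2 + (c2 - 33.0) ** 2) / 50.0)

def evolve(pop_size=40, generations=60, lo=10.0, hi=100.0):
    """Tiny real-coded GA: truncation selection, blend crossover,
    Gaussian mutation, elitism (parents survive)."""
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: pte_surrogate(*c), reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()  # blend crossover weight
            children.append(tuple(
                min(hi, max(lo, w * x + (1 - w) * y + random.gauss(0.0, 2.0)))
                for x, y in zip(a, b)
            ))
        pop = parents + children
    return max(pop, key=lambda c: pte_surrogate(*c))

best_c1, best_c2 = evolve()
```

In the paper's workflow the fitness evaluation would instead be the simulated PTE of the full relay chain at a given placement; the GA shell itself is unchanged.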
High-throughput fiber-array transvaginal ultrasound/photoacoustic probe for ovarian cancer imaging
Salehi, Hassan S.; Kumavor, Patrick D.; Alqasemi, Umar; Li, Hai; Wang, Tianheng; Zhu, Quing
2014-03-01
A high-throughput ultrasound/photoacoustic probe for delivering high-contrast and high signal-to-noise-ratio images was designed, constructed, and tested. The probe consists of a transvaginal ultrasound array integrated with four 1 mm-core optical fibers and a sheath. The sheath encases the transducer and is lined with highly reflective aluminum for high-intensity light output and uniformity while at the same time remaining below the maximum permissible exposure (MPE) recommended by the American National Standards Institute (ANSI). The probe design was optimized by simulating the light fluence distribution in Zemax. The performance of the probe was evaluated by experimental measurements of the fluence and real-time imaging of polyethylene tubing filled with blood. These results suggest that our probe has great potential for in vivo imaging and characterization of ovarian cancer.
A New Probe Noise Approach For Acoustic Feedback Cancellation In Hearing Aids
Guo, Meng; Jensen, Søren Holdt; Jensen, Jesper
Acoustic feedback is a big challenge in hearing aids. If not appropriately treated, the feedback limits the maximum possible amplification and may lead to significant sound distortions. In a state-of-the-art hearing aid, an acoustic feedback cancellation (AFC) system is used to compensate… systems is the biased adaptive filter estimation problem, especially when tonal signals such as music and alarm tones enter the hearing aid microphones. The consequences of this biased estimation might be significant sound distortion or, even worse, howling. In principle, unbiased adaptive filter… estimation can be achieved by adding a probe noise signal to the receiver signal and basing the estimation on the probe noise signal. However, the traditional probe noise approach requires a high-level probe noise signal, which is clearly audible and annoying for the hearing aid user. Hence, this high probe…
Maximum mass of a barotropic spherical star
Fujisawa, Atsuhito; Yoo, Chul-Moon; Nambu, Yasusada
2015-01-01
The ratio of the total mass $M$ to the surface radius $R$ of a spherical perfect fluid ball has an upper bound, $M/R < B$. Buchdahl obtained $B = 4/9$ under the assumptions of a mass density that is non-increasing in the outward direction and a barotropic equation of state. Barraco and Hamity decreased Buchdahl's bound to the lower value $B = 3/8$ $(< 4/9)$ by adding the dominant energy condition to Buchdahl's assumptions. In this paper, we further decrease the Barraco-Hamity bound to $B \simeq 0.3636403$ $(< 3/8)$ by adding the subluminal (slower-than-light) condition on the sound speed. In our analysis, we numerically solve the Tolman-Oppenheimer-Volkoff equations, and the mass-to-radius ratio is maximized by variation of the mass, radius and pressure inside the fluid ball as functions of the mass density.
Seeking maximum linearity of transfer functions
Silva, Filipi N.; Comin, Cesar H.; Costa, Luciano da F.
2016-12-01
Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function (theoretical or derived from some real system), identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical case (a sigmoid transfer function) and a simple situation involving experimental data from a low-power, one-stage class A transistor current amplifier. This approach, which has been addressed in terms of transfer functions derived from the experimentally obtained characteristic surface, also yielded contributions such as the estimation of local constants of the device, as opposed to the typically considered average values. The reported method and results pave the way to several further applications in other types of devices and systems, intelligent control operation, and other areas such as identifying regions of power-law behavior.
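The core of such a search — slide a fixed-width window across the sampled transfer curve, fit a line by least squares in each window, and keep the window with the smallest residual — can be sketched as follows (a generic reimplementation of the idea, not the authors' code):

```python
def most_linear_region(xs, ys, width):
    """Return (rss, start_index, slope, intercept) for the length-`width`
    window whose least-squares line fit has the smallest residual sum of
    squares, i.e. the most linear fixed-width region of the curve."""
    best = None
    for i in range(len(xs) - width + 1):
        x, y = xs[i:i + width], ys[i:i + width]
        n, sx, sy = width, sum(x), sum(y)
        sxx = sum(v * v for v in x)
        sxy = sum(a * b for a, b in zip(x, y))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        intercept = (sy - slope * sx) / n
        rss = sum((yi - slope * xi - intercept) ** 2 for xi, yi in zip(x, y))
        if best is None or rss < best[0]:
            best = (rss, i, slope, intercept)
    return best

# Synthetic transfer curve: exactly linear for |x| <= 2, quadratic flare outside
xs = [-4.0 + 0.1 * i for i in range(81)]
ys = [x if abs(x) <= 2 else (2.0 + (abs(x) - 2.0) ** 2) * (1 if x > 0 else -1)
      for x in xs]
rss, start, slope, intercept = most_linear_region(xs, ys, 20)
```

On this synthetic curve the winning window lands entirely inside the linear segment with unit slope, which is the behavior the method exploits on measured characteristic curves.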
Seeking Maximum Linearity of Transfer Functions
Silva, Filipi N; Costa, Luciano da F
2016-01-01
Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function, identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical (sigmoid transfer function) and a real-world (low-power, one-stage class A transistor amplifier) situation. In the former case, the method was found to identify the theoretically optimal region of operation even in the presence of noise. In the latter case, it was possible to identify an amplifier circuit configuration providing a good compromise between linearity, amplification and output resistance. The transistor amplifier application, which was addressed in terms of transfer functions derived from its experimentally obtained characteristic surface, also yielded contributions such as the estimation of local cons...
IVVS probe mechanical concept design
Rossi, Paolo, E-mail: paolo.rossi@enea.it; Neri, Carlo; De Collibus, Mario Ferri; Mugnaini, Giampiero; Pollastrone, Fabio; Crescenzi, Fabio
2015-10-15
Highlights: • ENEA designed, developed and tested a laser-based In Vessel Viewing System (IVVS). • The IVVS mechanical design was revised from 2011 to 2013 to meet ITER requirements. • The main improvements are piezoceramic actuators and a step-focus system. • Successful qualification activities validated the concept design for the ITER environment. - Abstract: ENEA has been deeply involved in the design, development and testing of a laser-based In Vessel Viewing System (IVVS) required for the inspection of ITER plasma-facing components. The IVVS probe shall be deployed into the vacuum vessel, providing high-resolution images and metrology measurements to detect damage and possible erosion. ENEA already designed and manufactured an IVVS probe prototype based on a rad-hard concept and driven by commercial micro-step motors, which demonstrated satisfactory viewing and metrology performance at room conditions. The probe sends a laser beam through a reflective rotating prism. By rotating the axes of the prism, the probe can scan all the points of the environment except those in a shadow cone, and the backscattered light signal is then processed to measure the intensity level (viewing) and the distance from the probe (metrology). In recent years, in order to meet all the ITER environmental conditions, such as high vacuum, a gamma radiation lifetime dose up to 5 MGy, a cumulative neutron fluence of about 2.3 × 10¹⁷ n/cm², a temperature of 120 °C and a magnetic field of 8 T, the probe mechanical design was significantly revised, introducing a new actuating system based on piezoceramic actuators and improved with a new step-focus system. The optical and mechanical schemes have then been modified and refined to also meet the geometrical constraints. The paper describes the mechanical concept design solutions adopted to fulfill the IVVS probe functional performance requirements considering the ITER working environment and geometrical constraints.
Improved analysis techniques for cylindrical and spherical double probes
Beal, Brian; Brown, Daniel; Bromaghim, Daron [Air Force Research Laboratory, 1 Ara Rd., Edwards Air Force Base, California 93524 (United States); Johnson, Lee [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, California 91109 (United States); Blakely, Joseph [ERC Inc., 1 Ara Rd., Edwards Air Force Base, California 93524 (United States)
2012-07-15
A versatile double Langmuir probe technique has been developed by incorporating analytical fits to Laframboise's numerical results for ion current collection by biased electrodes of various sizes relative to the local electron Debye length. Application of these fits to the double probe circuit has produced a set of coupled equations that express the potential of each electrode relative to the plasma potential, as well as the resulting probe current, as a function of applied probe voltage. These equations can be readily solved via standard numerical techniques in order to determine the electron temperature and plasma density from probe current and voltage measurements. Because this method self-consistently accounts for the effects of sheath expansion, it can be readily applied to plasmas with a wide range of densities and low ion temperature (T_i/T_e ≪ 1) without requiring the probe dimensions to be asymptotically large or small with respect to the electron Debye length. The presented approach has been successfully applied to experimental measurements obtained in the plume of a low-power Hall thruster, which produced a quasineutral, flowing xenon plasma during operation at 200 W. The measured plasma densities and electron temperatures were in the range of 1 × 10¹²-1 × 10¹⁷ m⁻³ and 0.5-5.0 eV, respectively. The estimated measurement uncertainty is +6%/-34% in density and ±30% in electron temperature.
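For reference, the textbook symmetric double-probe characteristic in the thin-sheath limit — with sheath expansion neglected, precisely the effect the improved technique accounts for — is I(V) = I_sat·tanh(V / 2Te[eV]), so the electron temperature can be read off the slope at the origin. A minimal sketch on synthetic data, with hypothetical plasma values:

```python
import math

def double_probe_current(v, i_sat, te_ev):
    """Ideal symmetric double Langmuir probe I-V curve (thin-sheath limit,
    sheath expansion neglected): I = I_sat * tanh(V / (2*Te[eV]))."""
    return i_sat * math.tanh(v / (2.0 * te_ev))

def te_from_slope(i_sat, dv, i_plus, i_minus):
    """Electron temperature (eV) from the central-difference slope at V = 0:
    dI/dV|_0 = I_sat / (2*Te)  =>  Te = I_sat / (2 * slope)."""
    slope = (i_plus - i_minus) / (2.0 * dv)
    return i_sat / (2.0 * slope)

# Synthetic characteristic with hypothetical values: Te = 2 eV, I_sat = 1 mA
i_sat, te_true, dv = 1.0e-3, 2.0, 0.05
te_est = te_from_slope(i_sat, dv,
                       double_probe_current(+dv, i_sat, te_true),
                       double_probe_current(-dv, i_sat, te_true))
```

The paper's contribution is to replace this idealized tanh model with Laframboise-fit ion collection terms, which is what keeps the inversion accurate when the sheath thickness is comparable to the probe size.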
Ultrasonic Periodontal Probing Based on the Dynamic Wavelet Fingerprint
Rose S Timothy
2005-01-01
Full Text Available Manual pocket depth probing has been widely used as a retrospective diagnosis method in periodontics. However, numerous studies have questioned its ability to accurately measure the anatomic pocket depth. In this paper, an ultrasonic periodontal probing method is described, which involves using a hollow water-filled probe to focus a narrow beam of ultrasound energy into and out of the periodontal pocket, followed by automatic processing of the pulse-echo signals to obtain the periodontal pocket depth. The signal processing algorithm consists of three steps: peak detection/characterization, peak classification, and peak identification. A dynamic wavelet fingerprint (DWFP) technique is first applied to detect suspected scatterers in the A-scan signal and generate a two-dimensional black-and-white pattern to characterize the local transient signal corresponding to each scatterer. These DWFP patterns are then classified by a two-dimensional FFT procedure and mapped to an inclination index curve. The location of the pocket bottom is identified as the third broad peak in the inclination index curve. The algorithm is tested on full-mouth probing data from two sequential visits of 14 patients. Its performance is evaluated by comparing the ultrasonic probing results with those of full-mouth manual probing at the same sites, which is taken as the "gold standard."
Characterization of conductive probes for atomic force microscopy
Trenkler, Thomas; Hantschel, Thomas; Vandervorst, Wilfried; Hellemans, Louis; Kulisch, Wilhelm; Oesterschulze, Egbert; Niedermann, Philippe; Sulzbach, T.
1999-03-01
The availability of very sharp, wear-proof, electrically conductive probes is one crucial issue for conductive AFM techniques such as SCM, SSRM and nanopotentiometry. The purpose of this systematic study is to give an overview of the existing probes and to evaluate their performance for the electrical techniques, with emphasis on applications on Si at high contact forces. The suitability of the characterized probes has been demonstrated by applying conductive AFM techniques to test structures and state-of-the-art semiconductor devices. Two classes of probes were examined geometrically and electrically: Si sensors with a conductive coating, and integrated pyramidal tips made of metal or diamond. Structural information about the conductive materials was obtained by optical and electron microscopy as well as by AFM roughness measurements. Swift and non-destructive procedures to characterize the geometrical and electrical properties of the probes prior to the actual AFM experiment have been developed. A number of analytical tools have been used to explain the observed electrical behavior of the tested probes.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Lensed fiber probes designed as an alternative to bulk probes in optical coherence tomography.
Ryu, Seon Young; Choi, Hae Young; Na, Jihoon; Choi, Woo June; Lee, Byeong Ha
2008-04-01
We demonstrate a compact all-fiber sampling probe for an optical coherence tomography (OCT) system. By forming a focusing lens directly on the tip of an optical fiber, a compact sampling probe could be implemented. To simultaneously achieve a sufficiently long working distance and a good lateral resolution, we employed a large-mode-area photonic crystal fiber (PCF) and a coreless silica fiber (CSF) of the same diameter. A working distance of up to 1270 μm, a 3 dB distance range of 2210 μm, and a transverse resolution of 14.2 μm were achieved with the implemented PCF lensed fiber; these values are comparable to those obtainable with a conventional objective lens having an NA of 0.25 (10×). The performance of the OCT system equipped with the proposed PCF lensed fiber is presented by showing OCT images of a rat finger as a biological sample and a pearl as an in-depth sample.
Designing a probe beam and an ultraviolet holographic microinterferometer for plasma probing.
Pierce, E L
1980-03-15
The requirements and techniques for time- and space-resolved picosecond probing of laser-produced plasmas are reviewed. The design and limitations of a holographic microinterferometer are discussed, and optical pulse techniques are presented. This technique can provide significant data for understanding the absorption of energy within laser-produced plasmas. The primary requirements are to measure electron densities in the 10^20-10^21 e/cc range, with density contour velocities of 10^6 to 10^7 cm/sec and spatial resolution of 1 μm or better. For these velocities one requires a probe pulse duration of 3-30 psec, a UV wavelength as short as feasible, and large numerical aperture optics corrected for spherical aberration. Interferograms of laser-produced plasmas obtained at 2660 Å with a combined resolution of 1 μm and 15 psec are presented.
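The density requirement above is met by reading fringe shifts off the interferograms. As background (this is the standard plasma-interferometry relation for an underdense plasma, not a formula quoted from the abstract), the phase shift encodes the line-integrated electron density:

```latex
\Delta\phi \;=\; r_e\,\lambda \int n_e \,\mathrm{d}l ,
\qquad
r_e \;=\; \frac{e^{2}}{4\pi\varepsilon_{0}\,m_{e}c^{2}} \;\approx\; 2.82\times10^{-13}\ \mathrm{cm},
```

so the fringe shift is F = Δφ/2π. Although Δφ scales down with shorter wavelength, the critical density n_c scales as λ^-2, which is why a UV probe "as short as feasible" can penetrate the 10^20-10^21 e/cc region with reduced refraction.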
Multiple-probe scanning probe microscopes for nanoarchitectonic materials science
Nakayama, Tomonobu; Shingaya, Yoshitaka; Aono, Masakazu
2016-11-01
Nanoarchitectonic systems are of interest for utilizing a vast range of nanoscale materials for future applications requiring a huge number of elemental nanocomponents. To explore the science and technology of nanoarchitectonics, advanced characterization tools that can deal with both nanoscale objects and macroscopically extended nanosystems are needed. Multiple-probe scanning probe microscopes (MP-SPMs) are powerful tools that meet this demand because they retain the advantages of conventional scanning probe microscopes while realizing atomically precise electrical measurements, which cannot be achieved with the conventional microprobing systems widely used in characterizing materials and devices. Furthermore, an MP-SPM can be used to operate some nanoarchitectonic systems. In this review, we overview the indispensable features of MP-SPMs together with the past, present and future of MP-SPM technology.
An investigation of dust particles orbiting a Langmuir probe
Ramazanov, T S; Kodanova, S K; Dzhumagulova, K N; Dosbolayev, M K; Jumabekov, A N [IETP, Al Farabi Kazakh National University, Tole Bi 96a, 050012 Almaty (Kazakhstan); Petrov, O F; Antipov, S N [Joint Institute for High Temperatures of RAS, 13-2, Izhorskaya St, Moscow 125412 (Russian Federation)
2009-05-29
In the present work, the behavior of dust particles near an attracting cylindrical Langmuir probe in glow discharge plasma was investigated experimentally. Trajectories of dust particles for different initial kinetic energies and impact parameters were analyzed numerically. A comparison between the experimental and simulation results is made. The results obtained can be used for the development of new dusty plasma diagnostic techniques.
Methods for making nucleotide probes for sequencing and synthesis
Church, George M; Zhang, Kun; Chou, Joseph
2014-07-08
Compositions and methods for making a plurality of probes for analyzing a plurality of nucleic acid samples are provided. Compositions and methods for analyzing a plurality of nucleic acid samples to obtain sequence information in each nucleic acid sample are also provided.
Maximum likelihood based classification of electron tomographic data.
Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan
2011-01-01
Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.
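The central idea above, a distance metric that down-weights Fourier components observed less frequently under the compound wedge, can be illustrated with a toy sketch (this is our simplification, not the MLTOMO implementation; the function name, the use of raw sampling counts as weights, and the flat array layout are all assumptions):

```python
import numpy as np

def wedge_weighted_distance(subtomo_ft, ref_ft, sampling_counts, eps=1e-9):
    """Toy Fourier-space distance between a sub-tomogram and a reference.

    Components that were sampled less often (small sampling_counts, e.g.
    inside a missing wedge) are down-weighted, mimicking the role of the
    'compound wedge' weighting described in the abstract.
    """
    w = sampling_counts / (sampling_counts.max() + eps)  # relative observation frequency
    diff = np.abs(subtomo_ft - ref_ft) ** 2              # squared Fourier residual
    return float(np.sum(w * diff) / (np.sum(w) + eps))
```

A component with zero sampling count contributes nothing to the distance, so an unobserved wedge cannot bias the comparison between sub-tomogram and reference.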
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
Nix, D.A.; Hogden, J.E.
1998-12-01
The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.
Determining the Tsallis parameter via maximum entropy
Conroy, J. M.; Miller, H. G.
2015-05-01
The nonextensive entropic measure proposed by Tsallis [C. Tsallis, J. Stat. Phys. 52, 479 (1988), 10.1007/BF01016429] introduces a parameter, q, which is not defined but rather must be determined. The value of q is typically determined from a piece of data and then fixed over the range of interest. On the other hand, from a phenomenological viewpoint, there are instances in which q cannot be treated as a constant. We present two distinct approaches for determining q depending on the form of the equations of constraint for the particular system. In the first case the equations of constraint for the operator Ô can be written as Tr(F^q Ô) = C, where C may be an explicit function of the distribution function F. We show that in this case one can solve an equivalent maxent problem which yields q as a function of the corresponding Lagrange multiplier. As an illustration the exact solution of the static generalized Fokker-Planck equation (GFPE) is obtained from maxent with the Tsallis entropy. As in the case where C is a constant, if q is treated as a variable within the maxent framework the entropic measure is maximized trivially for all values of q. Therefore q must be determined from existing data. In the second case an additional equation of constraint exists which cannot be brought into the above form. In this case the additional equation of constraint may be used to determine the fixed value of q.
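Schematically, and using textbook Tsallis notation rather than anything quoted from the paper, the measure and the type of constraint under discussion take the form

```latex
S_q[F] \;=\; \frac{1-\operatorname{Tr}F^{q}}{q-1},
\qquad
\operatorname{Tr}\!\left(F^{q}\hat{O}\right) \;=\; C,
```

and maximizing S_q subject to such constraints (plus normalization) yields the familiar q-exponential stationary form F ∝ [1 − (1−q)β Ô]^{1/(1−q)}, with β the Lagrange multiplier whose relation to q the first approach in the abstract exploits.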
Optical imaging probes in oncology.
Martelli, Cristina; Lo Dico, Alessia; Diceglie, Cecilia; Lucignani, Giovanni; Ottobrini, Luisa
2016-07-26
Cancer is a complex disease, characterized by alteration of different physiological molecular processes and cellular features. Keeping this in mind, the possibility of early identification and detection of specific tumor biomarkers by non-invasive approaches could improve early diagnosis and patient management. Different molecular imaging procedures provide powerful tools for detection and non-invasive characterization of oncological lesions. Clinical studies are mainly based on the use of computed tomography, nuclear-based imaging techniques and magnetic resonance imaging. Preclinical imaging in small animal models entails the use of dedicated instruments, and beyond the already cited imaging techniques, it includes also optical imaging studies. Optical imaging strategies are based on the use of luminescent or fluorescent reporter genes or injectable fluorescent or luminescent probes that provide the possibility to study tumor features even by means of fluorescence and luminescence imaging. Currently, most of these probes are used only in animal models, but the possibility of applying some of them also in the clinics is under evaluation. The importance of tumor imaging, the ease of use of optical imaging instruments, the commercial availability of a wide range of probes as well as the continuous description of newly developed probes, demonstrate the significance of these applications. The aim of this review is providing a complete description of the possible optical imaging procedures available for the non-invasive assessment of tumor features in oncological murine models. In particular, the characteristics of both commercially available and newly developed probes will be outlined and discussed.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
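For orientation, under an i.i.d. Gaussian noise assumption the maximum likelihood polynomial fit reduces to ordinary least squares; a minimal sketch of generic curve fitting, not the speech model adaptation algorithm of the paper (the function name is ours):

```python
import numpy as np

def ml_polynomial_fit(x, y, degree):
    """Maximum likelihood polynomial fit under an i.i.d. Gaussian noise
    assumption, where maximizing the likelihood is equivalent to
    minimizing the sum of squared residuals."""
    coeffs = np.polyfit(x, y, degree)  # least squares = ML for Gaussian noise
    return np.poly1d(coeffs)

# usage sketch: recover a noise-free quadratic
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2
model = ml_polynomial_fit(x, y, degree=2)
```

The same least-squares/ML equivalence is what makes polynomial regression a natural nonlinear generalization of the linear transforms used in MLLR.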
Rubanov, G.P.; Grebtsov, E.M.; Kurnosov, V.K.; Tolstoi, G.I.
1988-06-01
Recommendations of the 'Instructions on determining pneumoconiosis danger of mine work in coal mines' for the choice of a basic point for measuring maximum single concentration of dust along scraper longwalls do not make it possible to objectively evaluate dust load at working places. In the Instructions, a point 10 to 15 m from the exit of the longwall on the ventilation drift with an emergent stream of air is proposed for the dust probe. This designated point does not take into account the influence of the ventilation scheme of the walls on the formation of dust currents. Investigations of probes taken at different places along the longwall (at the beginning, 15 m from the beginning, 15 m from the end, at the niche and at the transfer point of the longwall) using direct and reverse flow schemes of ventilation showed that the best point for determining maximum single concentration of dust is a point in the middle of the longwall, where dust currents are the same for both systems of ventilation. Use of a new method of calculating the dust load, by testing at many different positions along the scraper longwall, makes it possible to determine the category of pneumoconiosis danger for workers at scraper longwalls.
Frequency domain probe design for high frequency sensing of soil moisture
Accurate moisture sensing is an important need for many research programs as well as in control of industrial processes. This paper covers the development of a frequency domain sensing probe for use in obtaining measurements of material properties suitable for work ranging from 0 to 6 GHz. The probe ...
Bubble shape and orientation determination with a four-point optical fibre probe
Guet, S.; Luther, S.; Ooms, G.
2003-01-01
We propose a new method to estimate the aspect ratio and orientation of bubbles by using their time series obtained with a four-point optical-fibre probe. The feasibility and accuracy of the method was first analysed by using synthetic bubble–probe interaction data and single bubble experiments in p
Comparative experimental and theoretical investigations of the DM neutron moisture probe
Ølgaard, Povl Lebeck; Haahr, Vagner
1967-01-01
Theoretical and experimental investigations of the Danish-produced DM subsurface moisture probe have been carried out at the Research Establishment Risø, and the results obtained are presented in this paper. The DM probe contains an Am-Be fast neutron source and has a glass scintillator containi...
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
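The two quantities being compared, entropy production and Kolmogorov-Sinai entropy, are both defined for any finite irreducible Markov chain; a minimal sketch for a generic chain (not the Zero Range Process itself; function names are ours, and one-way transitions, which would make the entropy production diverge, are excluded by the mask):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def ks_entropy(P):
    """Kolmogorov-Sinai entropy h = -sum_i pi_i sum_j P_ij log P_ij."""
    pi = stationary(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(P > 0, P * np.log(P), 0.0)
    return float(-pi @ t.sum(axis=1))

def entropy_production(P):
    """sigma = (1/2) sum_ij (J_ij - J_ji) log(J_ij / J_ji), J_ij = pi_i P_ij."""
    pi = stationary(P)
    J = pi[:, None] * P
    mask = (J > 0) & (J.T > 0)   # skip pairs where the flux would be one-way
    return float(0.5 * np.sum((J[mask] - J.T[mask]) * np.log(J[mask] / J.T[mask])))
```

A reversible chain (detailed balance) has zero entropy production but nonzero KS entropy, which is the basic reason the two maximization principles need not select the same parameter value.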
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
20 CFR, Employees' Benefits, Trade Readjustment Allowances (TRA) for workers under the Trade Act of 1974. § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... § 94.107 Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
14 CFR, Aeronautics and Space, Operating Limitations. § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V_MO/M_MO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO2max) and one-repetition maximum (1RM)) to determine...
Galanzha, Ekaterina I.; Weingold, Robert; Nedosekin, Dmitry A.; Sarimollaoglu, Mustafa; Nolan, Jacqueline; Harrington, Walter; Kuchyanov, Alexander S.; Parkhomenko, Roman G.; Watanabe, Fumiya; Nima, Zeid; Biris, Alexandru S.; Plekhanov, Alexander I.; Stockman, Mark I.; Zharov, Vladimir P.
2017-06-01
Understanding cell biology greatly benefits from the development of advanced diagnostic probes. Here we introduce a 22-nm spaser (plasmonic nanolaser) with the ability to serve as a super-bright, water-soluble, biocompatible probe capable of generating stimulated emission directly inside living cells and animal tissues. We have demonstrated a lasing regime associated with the formation of a dynamic vapour nanobubble around the spaser that leads to giant spasing with emission intensity and spectral width >100 times brighter and 30-fold narrower, respectively, than for quantum dots. The absorption losses in the spaser enhance its multifunctionality, allowing for nanobubble-amplified photothermal and photoacoustic imaging and therapy. Furthermore, the silica spaser surface has been covalently functionalized with folic acid for molecular targeting of cancer cells. All these properties make a nanobubble spaser a promising multimodal, super-contrast, ultrafast cellular probe with a single-pulse nanosecond excitation for a variety of in vitro and in vivo biomedical applications.
Sensor probe for rectal manometry
Blechschmidt, R.A.; Hohlfeld, O.; Mueller, R.; Werthschuetzky, R. [Technische Univ. Darmstadt (Germany). Inst. fuer Elektromechanische Konstruktionen
2001-07-01
In this paper a pressure sensor probe is presented that is suitable for assessing dynamic rectal pressure profiles. It consists of ten piezoresistive sensors, mounted on low temperature co-fired ceramics. The sensors are coated with a bio-compatible silicone elastomer. It was possible to reduce the size of the ceramic to 4.5 x 5.5 mm with a height of 1.4 mm. The whole probe has a diameter of 9 mm and a length of 20 cm. One healthy test person underwent rectal manometry. The experimental data and the analysis of linearity, hysteresis, temperature stability, and reproducibility are discussed. The presented sensor probe extends the classical anorectal manometry, particularly in view of quantifying disorders of the rectal motility. (orig.)
Young, Kevin L [Idaho Falls, ID; Hungate, Kevin E [Idaho Falls, ID
2010-02-23
A system for providing operational feedback to a user of a detection probe may include an optical sensor to generate data corresponding to a position of the detection probe with respect to a surface; a microprocessor to receive the data; a software medium having code to process the data with the microprocessor and pre-programmed parameters, and making a comparison of the data to the parameters; and an indicator device to indicate results of the comparison. A method of providing operational feedback to a user of a detection probe may include generating output data with an optical sensor corresponding to the relative position with respect to a surface; processing the output data, including comparing the output data to pre-programmed parameters; and indicating results of the comparison.
Brunetti, Anna Chiara
The design and development of an all-in-fiber probe for Raman spectroscopy are presented in this Thesis. Raman spectroscopy is an optical technique able to probe a sample based on the inelastic scattering of monochromatic light. Due to its high specificity and reliability and to the possibility to perform real-time measurements with little or no sample preparation, Raman spectroscopy is now considered an invaluable analytical tool, finding application in several fields including medicine, defense and process control. When combined with fiber optics technology, Raman spectroscopy allows for the realization of flexible and minimally-invasive devices, able to reach remote or hardly accessible samples, and to perform in-situ analyses in hazardous environments. The work behind this Thesis focuses on the proof-of-principle demonstration of a truly in-fiber Raman probe, where all parts are realized...
On Global Magnetic "Monopoly" Near Solar Cycle Maximums
Kryvodubskyj, V.
During the last maxima of solar activity, both poles of the polar magnetic field had the same polarity. Since in the turbulent αΩ-dynamo model the excitation thresholds of the periodic dipole and quadrupole modes of the poloidal magnetic field (PMF) are rather close [Parker E. N.: 1971, Ap.J. V. 164, p. 491], it is possible that the quadrupole mode may be excited due to variations of physical parameters in some regions of the solar convection zone (SCZ). The pattern of the excited modes (dipole, quadrupole, octupole, etc.) is determined by the values of the wave number of Parker's dynamo-wave. We calculated these values for the SCZ model by Stix (1989) [Stix M.: 1989, The Sun. Berlin, p. 200] in the vicinity of the solar tachocline (a region of strong shear of angular velocity at the base of the SCZ), using our estimation of the helical turbulence parameter [Krivodubskij V. N.: 1998, Astron. Reports V. 42, No 1, p. 122] and values of the radial gradient of the angular velocity obtained from the newer helioseismic measurements (during the rising phase of the 23rd solar cycle: 1995-1999) [Howe R., Christensen-Dalsgaard J., Hill F. et al.: 2000, Science. V. 287, p. 2456]. It is found that at low latitudes the dynamo mechanism produces rather the dipole (wave number ≈ -7), the main antisymmetric (relative to the equatorial plane) mode of the PMF, while at latitudes higher than 50° the conditions are more favourable for excitation of the quadrupole (wave number ≈ +8), the lowest symmetric mode. The resulting north-south magnetic structure asymmetry gives an opportunity to explain the space magnetic anomaly of the PMF ("monopoly") observed near solar cycle maxima.
Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.
Chor, Benny; Snir, Sagi
2004-12-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system; therefore, specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed-form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.
Baryon stopping probes deconfinement
Wolschin, Georg
2016-08-01
Stopping and baryon transport in central relativistic Pb + Pb and Au + Au collisions are reconsidered with the aim to find indications for the transition from hadronic to partonic processes. At energies reached at the CERN Super Proton Synchrotron (√s_NN = 6.3-17.3 GeV) and at RHIC (62.4 GeV), the fragmentation-peak positions as obtained from the data depend linearly on the beam rapidity and are in agreement with earlier results from a QCD-based approach that accounts for gluon saturation. No discontinuities in the net-proton fragmentation peak positions occur in the expected transition region from partons to hadrons at 6-10 GeV. In contrast, the mean rapidity loss is predicted to depend linearly on the beam rapidity only at high energies beyond the RHIC scale. The combination of both results offers a clue for the transition from hard partonic to soft hadronic processes in baryon stopping. NICA results could corroborate these findings.
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
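The continuous formulation the abstract describes can be illustrated in the unweighted special case: discrete-time replicator dynamics applied to the Motzkin-Straus quadratic program, whose strict local maximizers correspond to maximum cliques. A minimal sketch (our toy example, not the regularized weighted formulation of the paper):

```python
import numpy as np

def replicator_clique(A, iters=2000):
    """Discrete-time replicator dynamics on the Motzkin-Straus program
    max x^T A x over the probability simplex. For a 0/1 adjacency matrix A,
    the support of the limit point indicates a maximal clique."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)      # start at the barycenter of the simplex
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)    # payoff-proportional update; x @ Ax > 0 here
    return x

# usage: a 4-vertex graph whose unique maximum clique is {0, 1, 2}
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = replicator_clique(A)
clique = {i for i in range(4) if x[i] > 1e-3}
```

At the limit point the objective x^T A x approaches the Motzkin-Straus value 1 - 1/ω, here 2/3 for clique number ω = 3; the "spurious" solutions the abstract mentions are exactly why the weighted case needs the regularized formulation.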
Maximum-likelihood estimation of haplotype frequencies in nuclear families.
Becker, Tim; Knapp, Michael
2004-07-01
The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
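FAMHAP's family-based likelihood is more involved, but the underlying EM (gene-counting) idea can be sketched for unrelated individuals and two biallelic SNPs (function names and the genotype encoding, allele counts per SNP, are our assumptions):

```python
import numpy as np
from itertools import product

# haplotypes over two biallelic SNPs, coded as tuples of 0/1 alleles
HAPLOS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def compatible_pairs(genotype):
    """Ordered haplotype-index pairs consistent with an unphased genotype,
    where the genotype gives the count (0, 1 or 2) of the '1' allele per SNP."""
    return [(h1, h2) for h1, h2 in product(range(4), repeat=2)
            if all(HAPLOS[h1][k] + HAPLOS[h2][k] == genotype[k] for k in range(2))]

def em_haplotype_freqs(genotypes, iters=100):
    """EM (gene-counting) estimates of haplotype frequencies."""
    f = np.full(4, 0.25)                      # uniform start
    for _ in range(iters):
        counts = np.zeros(4)
        for g in genotypes:
            pairs = compatible_pairs(g)
            w = np.array([f[a] * f[b] for a, b in pairs])
            w /= w.sum()                      # E-step: posterior over phases
            for (a, b), wi in zip(pairs, w):  # M-step: expected haplotype counts
                counts[a] += wi
                counts[b] += wi
        f = counts / counts.sum()
    return f
```

Only the double heterozygote is phase-ambiguous in this two-SNP setting; incorporating relatives, as FAMHAP does, prunes many of these ambiguous phase configurations and thereby sharpens the estimates.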
Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space
Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf;
2012-01-01
The stretch factor and maximum detour of a graph G embedded in a metric space measure how well G approximates the minimum complete graph containing G and the metric space, respectively. In this paper we show that computing the stretch factor of a rectilinear path in the L1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model, and describe a worst-case O(σn log^2 n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ... compute the stretch factor or maximum detour of trees and cycles in O(σn log^(d+1) n) time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L1 plane.
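As a baseline for the faster algorithms discussed above, the stretch factor of a polygonal path can be computed naively in O(n^2) time by comparing path distance with straight-line distance over all vertex pairs (a Euclidean sketch for illustration; the paper's results concern L1 and fixed orientation metrics):

```python
import math

def stretch_factor_of_path(points):
    """Naive O(n^2) stretch factor of a polygonal path in the Euclidean plane:
    the maximum, over all vertex pairs, of path distance / straight-line distance."""
    n = len(points)
    cum = [0.0]                                 # cumulative arc length along the path
    for i in range(1, n):
        cum.append(cum[-1] + math.dist(points[i - 1], points[i]))
    best = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])  # straight-line distance
            if d > 0.0:
                best = max(best, (cum[j] - cum[i]) / d)
    return best
```

For a path the distance in the graph between two vertices is just the arc length between them, which is what makes the subquadratic algorithms of the paper possible.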
Probe Project Status and Accomplishments
Burris, RD
2001-05-07
The Probe project has completed its first full year of operation. In this document we will describe the status of the project as of December 31, 2000. We will describe the equipment configuration, then give brief descriptions of the various projects undertaken to date. We will mention first those projects performed for outside entities and then those performed for the benefit of one of the Probe sites. We will then describe projects that are under consideration, including some for which initial actions have been taken and others which are somewhat longer-term.
Radioactive Probes on Ferromagnetic Surfaces
2002-01-01
On the (broad) basis of our studies of nonmagnetic radioactive probe atoms on magnetic surfaces and at interfaces, we propose to investigate the magnetic interaction of magnetic probe atoms with their immediate environment, in particular of rare earth (RE) elements positioned on and in ferromagnetic surfaces. The preparation and analysis of the structural properties of such samples will be performed in the UHV chamber HYDRA at the HMI/Berlin. For the investigations of the magnetic properties of RE atoms on surfaces Perturbed Angular Correlation (PAC) measurements and Mössbauer Spectroscopy (MS) in the UHV chamber ASPIC (Apparatus for Surface Physics and Interfaces at CERN) are proposed.
Method for obtaining solid micro- or nanoparticles
Ventosa Rull, Nora; Veciana Miró, Jaume; Cano-Sarabia, Mary; Sala Vergés, Santiago
2008-01-01
[EN] The invention provides a novel method for obtaining solid micro- or nanoparticles with a homogeneous structure. A method is provided for obtaining solid micro- or nanoparticles with a homogeneous structure having a particle size of less than 10 µm, where the processed solid compound has the natural, crystalline, amorphous, polymorphic and other features associated with the starting compound. In accordance with the invention, a method which also makes it possible to obtain solid m...
Reif, Roberto; Amorosino, Mark S; Calabro, Katherine W; A'Amar, Ousama; Singh, Satish K; Bigio, Irving J
2008-01-01
Spectral reflectance measurements of biological tissues have been studied for early diagnoses of several pathologies such as cancer. These measurements are often performed with a fiber optic probe in contact with the tissue surface. We report a study in which reflectance measurements are obtained in vivo from mouse thigh muscle while varying the contact pressure of the fiber optic probe. It is determined that the probe pressure is a variable that affects the local optical properties of the tissue. The reflectance spectra are analyzed with an analytical model that extracts the tissue optical properties and facilitates the understanding of underlying physiological changes induced by the probe pressure.
Raman imaging of carious lesions using a hollow optical fiber probe.
Yokoyama, Eriko; Kakino, Satoko; Matsuura, Yuji
2008-08-10
Raman spectroscopy using a hollow optical fiber probe with a glass ball lens at the distal end is proposed for detection of early caries lesions. Raman spectroscopy on carious lesions of extracted teeth showed that the probe enables measurement with a high signal-to-noise ratio when combined with a ball lens with a high refractive index. The proposed probe and lens combination detects changes in Raman spectra caused by morphological differences between sound and carious enamel. We also obtained a high-contrast image of an early carious lesion by scanning the tooth surface with the probe.
Hollow-core photonic crystal fiber-optic probes for Raman spectroscopy.
Konorov, Stanislav O; Addison, Christopher J; Schulze, H Georg; Turner, Robin F B; Blades, Michael W
2006-06-15
We have implemented a new Raman fiber-optic probe design based on a hollow-core photonic-crystal excitation fiber surrounded by silica-core collection fibers. The photonic-crystal fiber offers low attenuation at the pump radiation wavelength, mechanical flexibility, high radiation stability, and low background noise. Because the excitation beam is transmitted through air inside the hollow-core fiber, silica Raman scattering is much reduced, improving the quality of the spectra obtained using probes of this design. Preliminary results show that the new probe design decreases the Raman background from the silica by approximately an order of magnitude compared to solid-core silica Raman probes.
OBTAINING AND PROPERTIES OF AgInS2 FILMS
M. A. Abdullaev
2016-01-01
Aim. The aim is to obtain AgInS2 films and study their electrical and optical properties. Methods. Thin AgInS2 film samples were obtained by DC magnetron sputtering. The structure and the phase and elemental composition were studied using a DRON-2 X-ray diffractometer (CuKα radiation) and a LEO-1450 microscope with an EDS attachment for X-ray microanalysis. The optical transmittance and absorption were examined using an MDR-2 monochromator in the wavelength range of 400-800 nm with a Keithley electrometer and an FD-10G; the spectral resolution was ±1 meV. The electrical conductivity and the Hall effect were measured by the four-point probe method with indium ohmic contacts. Measurements were carried out in the temperature range of 77-400 K. Findings. We obtained AgInS2 films with thicknesses of up to 1 μm on quartz substrates by magnetron sputtering. It is shown that increasing the substrate temperature to about 450 °C makes it possible to obtain single-phase films with a chalcopyrite structure, a band gap of 1.88 eV and a high absorption coefficient (>10⁴ cm⁻¹). Conclusions. The possibility of obtaining films in a wide range of electrical resistance, with variation of the electrical parameters at constant stoichiometry, is of interest for efficient phototransduction technologies.
The modified Langevin description for probes in a nonlinear medium
Krüger, Matthias; Maes, Christian
2017-02-01
When the motion of a probe strongly disturbs the thermal equilibrium of the solvent or bath, the nonlinear response of the latter must enter the probe’s effective evolution equation. We derive that induced stochastic dynamics using second order response around the bath thermal equilibrium. We discuss the nature of the new term in the evolution equation which is no longer purely dissipative, and the appearance of a novel time-scale for the probe related to changes in the dynamical activity of the bath. A major application for the obtained nonlinear generalized Langevin equation is in the study of colloid motion in a visco-elastic medium.
Dosimetric Algorithm to Reproduce Isodose Curves Obtained from a LINAC
Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian
2014-01-01
In this work, isodose curves are obtained with a new dosimetric algorithm that uses numerical data from percentage depth dose (PDD) and maximum absorbed dose profiles calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with good approximation. To validate the results, the full geometry of an 18 MV LINAC and a water phantom were modeled, and simulations were run with the MCNPX code to obtain the PDD and profiles at all depths of the radiation beam. These data were then used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even when the voxels are reduced to the size of a pixel. The dosimetric algorithm is thus able to reproduce the absorbed dose induced by a radiation beam over a water phantom from the PDD and profiles, whose maximum percentage value lies in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days required when the calculation is carried out directly by Monte Carlo. PMID: 25045398
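A common simplification behind dose reconstruction of this kind is to factor the relative dose at a point into a depth term (the PDD) and an off-axis term (the normalized profile). The sketch below illustrates that factorization with linear interpolation; it is a toy model under that assumed factorization, not the algorithm of the paper:

```python
def interp(x, xs, ys):
    """Linear interpolation; xs ascending, x clamped to the last value."""
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return ys[-1]

def dose_percent(z, x, pdd_depths, pdd_values, profile_pos, profile_values):
    """Percent dose at depth z (cm) and off-axis distance x (cm), assuming
    dose = PDD(z) * normalized-profile(x); both tables are in percent."""
    return interp(z, pdd_depths, pdd_values) * \
           interp(x, profile_pos, profile_values) / 100.0
```

With tabulated PDD and profile data, isodose curves follow by evaluating `dose_percent` on a (z, x) grid and contouring the result.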
Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA
Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain [CEA, LIST, 91191 Gif-sur-Yvette (France); Lonne, Sébastien [EXTENDE, Le Bergson, 15 Avenue Emile Baudot, 91300 MASSY (France)]
2014-02-18
The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side-drilled holes (SDH), flat-bottom holes (FBH) and corner echoes from backwall-breaking artificial notches inspected with a matrix phased-array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBHs and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparison between simulated and experimental results is presented and discussed.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Jesser J. Marulanda-Durango
2012-12-01
In this paper, we present a methodology for estimating the parameters of an electric arc furnace model by using maximum likelihood estimation, one of the most widely used methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open-source MATLAB® toolbox, to solve the set of non-linear algebraic equations that relate the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken at the furnace's most critical operating point. We show how the model for the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. The results show a maximum error of 5% in the current's root-mean-square value.
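Maximum likelihood estimation of this kind typically reduces to solving nonlinear score equations (the derivatives of the log-likelihood set to zero) numerically. As a generic, self-contained illustration of that pattern, here is a Weibull shape-parameter fit by Newton's method; this is a standard textbook example, not the arc furnace model or the NETLAB code:

```python
import math

def weibull_mle_shape(data, k0=1.0, tol=1e-10, max_iter=200):
    """MLE of the Weibull shape parameter k: solves the nonlinear score
    equation g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0
    by Newton's method with a numerical derivative. Requires x > 0."""
    mlog = sum(math.log(x) for x in data) / len(data)

    def g(k):
        s0 = sum(x ** k for x in data)
        s1 = sum(x ** k * math.log(x) for x in data)
        return s1 / s0 - 1.0 / k - mlog

    k = k0
    for _ in range(max_iter):
        h = 1e-6
        dg = (g(k + h) - g(k - h)) / (2 * h)   # central-difference slope
        step = g(k) / dg
        k -= step
        if abs(step) < tol:
            break
    return k

def weibull_mle_scale(data, k):
    """Scale MLE has a closed form once the shape k is known."""
    return (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
```

The same shape, a root-finder applied to the score equations, carries over to multi-parameter models, where the scalar Newton step becomes a Newton-Raphson step on the gradient.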
Hekmati, Arsalan; Hekmati, Rasoul
2016-12-01
Electrical power quality and stability is an important issue nowadays, and the technology of Superconducting Magnetic Energy Storage (SMES) systems has brought real power storage capability to power systems. Optimum SMES design that achieves maximum energy with the least length of tape is therefore a key design concern. This paper provides an approach to the design optimization of solenoid and toroid types of SMES, ensuring the maximum possible stored energy. The optimization process, based on a genetic algorithm, calculates the operating current of the superconducting tapes from the intersection of a load line with the surface describing the variation of critical current versus the parallel and perpendicular components of magnetic flux density. FLUX3D simulations of the SMES were used for the energy calculations. From numerical analysis of the resulting data, formulas were derived for the optimum dimensions of the superconductor coil and the maximum stored energy for a given length and cross-sectional area of superconductor tape.
MAXIMUM at ALS: A powerful tool to investigate open problems in micro and optoelectronics
Lorusso, G.F.; Solak, H.; Singh, S.; Cerrina, F. [Univ. of Wisconsin, Madison, WI (United States). Center of X-ray Lithography; Batson, P.J.; Underwood, J.H. [Lawrence Berkeley National Lab., CA (United States). Center of X-ray Optics
1998-12-31
The authors present recent results obtained by MAXIMUM at the Advanced Light Source (ALS), at the Lawrence Berkeley National Laboratory. MAXIMUM is a scanning photoemission microscope, based on a multilayer coated Schwarzschild objective. An electron energy analyzer collects the emitted photoelectrons to form an image as the sample itself is scanned. The microscope has been purposely designed to take advantage of the high brightness of the third generation synchrotron radiation sources, and its installation at ALS has been recently completed. The spatial resolution of 100 nm and the spectral resolution of 200 meV make the instrument an extremely interesting tool to investigate current problems in opto- and microelectronics. In order to illustrate the potential of MAXIMUM in these fields, the authors report new results obtained by studying the electromigration in Al-Cu lines and the Al segregation in AlGaN thin films.
Systematic measurement of maximum efficiencies and detuning lengths at the JAERI free-electron laser
Nishimori, N; Nagai, R; Minehara, E J
2002-01-01
We made a systematic measurement of efficiency detuning curves at several gain and loss parameters. The absolute detuning length (δL) of the optical cavity was measured within an accuracy of 0.1 µm around the maximum efficiency by a pulse-stacking method using an external laser. The FEL gain was controlled by the undulator gap instead of the bunch charge, because the gain can then be changed rapidly while maintaining constant electron bunch conditions. For the high-gain, low-loss region, the maximum efficiency is obtained at δL = 0 µm and is larger than the value derived from the theoretical scaling law in the superradiant regime, while for the low-gain region the maximum efficiency is obtained at δL < 0 µm and is consistent with the scaling law.
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question 'Does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation for the study was the notion that the stationarity assumption implicit in PMP-based dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to the artificial reservoir. In addition, the study lays the foundation for the use of regional atmospheric models as one way to perform life-cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec-Jan 1996-97 storm event was selected as the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data, and the best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity in the model was raised to 100% from the ground up to the 500 mb level. The resulting model-based maximum 72-hr precipitation values were named extreme precipitation (EP) to distinguish them from the PMPs obtained by standard methods. Third, six hypothetical reservoir-size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity implicit in the traditional estimation of PMP can be rendered largely invalid by the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give an indication of the
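The standard moisture-maximization step referenced above scales an observed storm depth by the ratio of maximum to observed precipitable water. A one-line sketch of that arithmetic (the cap on the maximization ratio is a common operational convention, not a value taken from this study):

```python
def moisture_maximized_precip(p_storm, pw_storm, pw_max, cap=2.0):
    """Moisture-maximization step of PMP estimation: scale the observed
    storm depth p_storm (mm) by the ratio of maximum to observed
    precipitable water, with the ratio capped by convention."""
    ratio = min(pw_max / pw_storm, cap)
    return p_storm * ratio
```

For example, a 300 mm storm observed with 40 mm of precipitable water, maximized to 60 mm, yields 300 × 1.5 = 450 mm. The RAMS experiment in the study replaces this bulk ratio with an explicit model run at 100% relative humidity.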
SONG HanJiang; CHEN LinGen; SUN FengRui
2008-01-01
Optimal configuration of a class of endoreversible heat engines with fixed duration, input energy and radiative heat transfer law (q ∝ Δ(T⁴)) is determined. The optimal cycle that maximizes the efficiency of the heat engine is obtained by using optimal-control theory, and the differential equations are solved by Taylor series expansion. It is shown that the optimal cycle has eight branches, including two isothermal branches, four maximum-efficiency branches, and two adiabatic branches. The interval of each branch is obtained, as well as the solutions for the temperatures of the heat reservoirs and the working fluid. A numerical example is given. The obtained results are compared with those obtained with Newton's heat transfer law for the maximum efficiency objective, those with the linear phenomenological heat transfer law for the maximum efficiency objective, and those with the radiative heat transfer law for the maximum power output objective.
Fabrication of tungsten probe for hard tapping operation in atomic force microscopy
Han, Guebum, E-mail: hanguebum@live.co.kr [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, 5500 Wabash Avenue, Terre Haute, Indiana 47803 (United States); Department of Mechanical Design and Robot Engineering, Seoul National University of Science and Technology, 232 Gongneung-ro, Nowon-gu, Seoul 139-743 (Korea, Republic of); Ahn, Hyo-Sok, E-mail: hsahn@seoultech.ac.kr [Manufacturing Systems and Design Engineering Programme, Seoul National University of Science & Technology, 232 Gongneung-ro, Nowon-gu, Seoul 139-743 (Korea, Republic of)
2016-02-15
We propose a method of producing a tungsten probe with high stiffness for atomic force microscopy (AFM) in order to acquire enhanced phase contrast images and efficiently perform lithography. A tungsten probe with a tip radius between 20 nm and 50 nm was fabricated using electrochemical etching optimized by applying pulse waves at different voltages. The spring constant of the tungsten probe was determined by finite element analysis (FEA), and its applicability as an AFM probe was evaluated by obtaining topography and phase contrast images of a Si wafer sample partly coated with Au. Enhanced hard tapping performance of the tungsten probe compared with a commercial Si probe was confirmed by conducting hard tapping tests at five different oscillation amplitudes on single layer graphene grown by chemical vapor deposition (CVD). To analyze the damaged graphene sample, the test areas were investigated using tip-enhanced Raman spectroscopy (TERS). The test results demonstrate that the tungsten probe with high stiffness was capable of inducing sufficient elastic and plastic deformation to enable obtaining enhanced phase contrast images and performing lithography, respectively.
Vogel, Alexander; Scheidt, Holger A; Huster, Daniel
2003-09-01
The distribution of the lipid-attached doxyl electron paramagnetic resonance (EPR) spin label in 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine membranes has been studied by ¹H and ¹³C magic angle spinning nuclear magnetic resonance relaxation measurements. The doxyl spin label was covalently attached to the 5th, 10th, and 16th carbons of the sn-2 stearic acid chain of a 1-palmitoyl-2-stearoyl-(5/10/16-doxyl)-sn-glycero-3-phosphocholine analog. Due to the unpaired electron of the spin label, ¹H and ¹³C lipid relaxation rates are enhanced by paramagnetic relaxation. For all lipid segments the influence of paramagnetic relaxation is observed even at low probe concentrations. Paramagnetic relaxation rates provide a measure of the interaction strength between lipid segments and the doxyl group. Plotted along the membrane director, a transverse distribution profile of the EPR probe is obtained. The chain-attached spin labels are broadly distributed in the membrane, with a maximum at the approximate chain position of the probe. Both ¹H and ¹³C relaxation measurements show these broad distributions of the doxyl group in the membrane, indicating that ¹H spin diffusion does not influence the relaxation measurements. The broad distributions of the EPR label result from the high degree of mobility and structural heterogeneity in liquid-crystalline membranes. Knowing the distribution profiles of the EPR probes, their influence on the relaxation behavior of membrane-inserted peptide and protein segments can be studied by ¹³C magic angle spinning nuclear magnetic resonance. As an example, the location of Ala residues positioned at three sites of the transmembrane WALP-16 peptide was investigated. All three doxyl-labeled phospholipid analogs induce paramagnetic relaxation of the respective Ala site. However, for well-ordered secondary structures the strongest relaxation enhancement is observed for the doxyl group in the closest proximity to the respective Ala. Thus
Common path ball lens probe for optical coherence tomography (Conference Presentation)
Singh, Kanwarpal; Yamada, Daisuke; Tearney, Guillermo J.
2016-02-01
Background: Common-path probes are highly desirable for optical coherence tomography (OCT) because they reduce system complexity and cost. In this work we report an all-fiber, common-path, side-viewing monolithic probe for coronary artery imaging. Methods: Our common-path probe was designed for spectrometer-based Fourier-domain OCT at 1310 nm wavelength. Light from the fiber expands in the coreless fiber region and is then focused by the ball lens. Reflection from the ball lens-air interface serves as the reference signal. The monolithic ball lens probe was assembled within a 560 µm outer diameter drive shaft attached to a rotary junction. The drive shaft was placed inside an outer, transparent sheath of 800 µm diameter. Results: With a source input power of 25 mW, we achieved a sensitivity of 100.5 dB. The axial resolution of the system was 15.6 µm in air and the lateral resolution (full width at half maximum) was approximately 49 µm. As proof of principle, images of skin acquired with this probe demonstrated clear visualization of the stratum corneum, epidermis, and papillary dermis, along with sweat ducts. Conclusion: In this work we have demonstrated a monolithic ball lens common-path probe for OCT imaging. The designed ball lens probe is easy to fabricate using a laser splicer. Given the features and the capability of common-path probes to provide a simpler solution for OCT, we believe this development will be an important enhancement for certain types of catheters.
Design of ultrasonic probe and evaluation of ultrasonic waves on E. coli in sour cherry juice
B Hosseinzadeh Samani
2015-09-01
Whatman filter paper using a vacuum pump (Mehmandoost et al., 2011). Afterwards, the samples were poured into a reactor with a diameter of 80 mm and a height of 50 mm; the reactor dimensions were optimized during pretests. Probe design: One of the most common horn types used for ultrasonic machining is the step horn (Naď, 2010). The governing equation for the deformation along a step horn under steady-state conditions is Eq. (1); its solutions divide into two subsets, each obtained from the boundary conditions (Hosseinzadeh et al., 2013): (1) c²·[(∂S/∂x)/S(x)·∂u(x,t)/∂x + ∂²u(x,t)/∂x²] = ∂²u(x,t)/∂t². From Eq. (1) it follows that (2) u(x,t) = (A·cos(ωx/c) + B·sin(ωx/c))·(C·cos(ωt) + D·sin(ωt)). The boundary conditions for Eq. (2) are: (3) (a) ∂u(x)/∂x = 0 at x = 0; (b) ∂u(x)/∂x = 0 at x = l; (c) u(0) = u_in. One of the most important points in probe design is preventing stress concentration at locations where the cross-sectional area changes; to avoid this problem, the displacement in that section must be zero (Hosseinzadeh et al., 2013). The probe length is obtained from the displacement equation and the parameter l₁. The axial stress is (4) σ = -E·u_in·(ω/c)·sin(ωx/c). To determine the maximum axial stress in the step-type probe, Eqs. (3) and (4) are differentiated and set equal to zero, which gives the maximum stress: (5) σ_max = π·E·u_in/l. Optimization and modeling using the response surface method: Response surface methodology (RSM) has important applications in the design, development and formulation of new products, as well as in the improvement of existing product designs. It defines the effect of the independent variables, alone or in combination, on the process. In addition to analyzing the effects of the independent variables, this
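The design relations for the step horn can be collected into a small sizing helper. The maximum-stress formula σ_max = π·E·u_in/l appears in the text (Eq. 5); the half-wave resonant length l = c/(2f) and the ideal step-horn amplitude gain (d_large/d_small)² are standard textbook relations I am assuming here, not values from the paper:

```python
import math

def step_horn_design(f, c, E, u_in, d_large, d_small):
    """Illustrative step-horn sizing.
    f: drive frequency (Hz), c: sound speed in the horn material (m/s),
    E: Young's modulus (Pa), u_in: input displacement amplitude (m),
    d_large/d_small: diameters of the two horn sections (m)."""
    l = c / (2.0 * f)                    # half-wavelength resonant length
    gain = (d_large / d_small) ** 2      # ideal step-horn amplitude gain
    sigma_max = math.pi * E * u_in / l   # peak axial stress, Eq. (5)
    u_out = u_in * gain                  # output displacement amplitude
    return l, gain, sigma_max, u_out
```

In practice the computed σ_max would be compared against the fatigue limit of the horn material before fixing u_in.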
Electroless nickel plating on optical fiber probe
Li Huang; Zhoufeng Wang; Zhuomin Li; Wenli Deng
2009-01-01
As a component of the near-field scanning optical microscope (NSOM), the optical fiber probe is an important factor influencing the equipment's resolution. Electroless nickel plating is introduced to metallize the optical fiber probe. The optical fibers are etched by 40% HF with the Turner etching method. After pretreatment, the optical fiber probe is coated with a Ni-P film by electroless plating in a constant-temperature water tank. Atomic absorption spectrometry (AAS), scanning electron microscopy (SEM), and energy-dispersive X-ray spectrometry (EDXS) are carried out to characterize the deposition on the fiber probe. We have reproducibly fabricated two kinds of fiber probes with a Ni-P film: aperture probes and apertureless probes. In addition, reductive particle transportation on the surface of the fiber probe is proposed to explain the formation of these probes.
Mitsuoka, Hiroki; Morita, Shin-ichi; Suzuki, Toshiaki; Matsuura, Yuji; Katsumoto, Yukiteru; Sato, Hidetoshi
2009-02-01
The use of a hollow fiber as a Raman probe, which offers the strong advantage of a free optical link in space, was confirmed to be a versatile and standard analytical method, since Raman data obtained through a hollow fiber probe correspond reliably to conventional Raman data. In this paper, we confirmed that a Raman spectrum given by the hollow fiber probe becomes identical to a Raman spectrum measured by a conventional approach if one is multiplied by an optimized coefficient. In addition, changes in Raman signal intensity were related to various curved geometries of the probe. The Raman signal intensity at a curved geometry of the probe, which is one of the most frequently used positions, was 0.35 of the value at the standard (straight-line) position of the probe.
Information Storage and Retrieval for Probe Storage using Optical Diffraction Patterns
van Honschoten, Joost; Koelmans, Wabe W; Parnell, Thomas P; Zaboronski, Oleg V
2011-01-01
A novel method for fast information retrieval from a probe storage device is considered. It is shown that information can be stored and retrieved using the optical diffraction patterns obtained by the illumination of a large array of cantilevers by a monochromatic light source. In thermo-mechanical probe storage, the information is stored as a sequence of indentations on the polymer medium. To retrieve the information, the array of probes is actuated by applying a bending force to the cantilevers. Probes positioned over indentations are deflected by the depth of the indentation, while probes over flat media remain undeflected. Thus the array of actuated probes can be viewed as an irregular optical grating, which creates a data-dependent diffraction pattern when illuminated by laser light. We develop a low-complexity modulation scheme, which allows the extraction of the information stored in the pattern of indentations on the media from the Fourier coefficients of the intensity of the diffraction pattern. We th...
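The readout idea — deflected probes acting as an irregular grating whose far field encodes the data — can be mimicked with a discrete Fourier sum over per-cantilever phases. A toy scalar model (all parameter names and the round-trip phase convention are illustrative assumptions, not the paper's modulation scheme):

```python
import cmath

def diffraction_intensities(deflections, wavelength, n_orders=4):
    """Toy far-field model of the probe-array readout: each cantilever
    contributes a unit-amplitude wave whose phase is set by its deflection
    (round-trip phase 4*pi*d/lambda); the intensity of diffraction order m
    is |sum_k exp(i*phi_k) * exp(-2*pi*i*m*k/N)|^2, i.e. a DFT magnitude."""
    n = len(deflections)
    out = []
    for m in range(n_orders):
        s = sum(cmath.exp(1j * (4 * cmath.pi * d / wavelength))
                * cmath.exp(-2j * cmath.pi * m * k / n)
                for k, d in enumerate(deflections))
        out.append(abs(s) ** 2)
    return out
```

With no indentations (all deflections zero) the energy concentrates in the zeroth order; a data-dependent pattern of deflections redistributes it into higher orders, which is what the modulation scheme in the paper exploits.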
Development of a novel nanoindentation technique by utilizing a dual-probe AFM system.
Cinar, Eyup; Sahin, Ferat; Yablon, Dalia
2015-01-01
A novel instrumentation approach to nanoindentation is described that exhibits improved resolution and depth sensing. The approach is based on a multi-probe scanning probe microscopy (SPM) tool that utilizes tuning-fork based probes for both indentation and depth sensing. Unlike nanoindentation experiments performed with conventional AFM systems using beam-bounce technology, this technique incorporates a second probe system with an ultra-high resolution for depth sensing. The additional second probe measures only the vertical movement of the straight indenter attached to a tuning-fork probe with a high spring constant and it can also be used for AFM scanning to obtain an accurate profiling. Nanoindentation results are demonstrated on silicon, fused silica, and Corning Eagle Glass. The results show that this new approach is viable in terms of accurately characterizing mechanical properties of materials through nanoindentation with high accuracy, and it opens doors to many other exciting applications in the field of nanomechanical characterization.
On the rate of convergence of the maximum likelihood estimator of a k-monotone density
GAO FuChang; WELLNER Jon A
2009-01-01
Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0, A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.
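For reference, the two metrics named in the abstract have the standard definitions for densities f, g on [0, A] (up to the usual choice of normalizing constant for the Hellinger distance):

```latex
h(f,g) = \left( \tfrac{1}{2} \int_0^A \bigl( \sqrt{f} - \sqrt{g} \bigr)^2 \, d\mu \right)^{1/2},
\qquad
\|f-g\|_{L_p(Q)} = \left( \int_0^A |f-g|^p \, dQ \right)^{1/p}, \quad 1 \le p < \infty .
```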
Direct tests of micro channel plates as the active element of a new shower maximum detector
Ronzhin, A., E-mail: ronzhin@fnal.gov [Fermilab, Batavia, IL 60510 (United States); Los, S.; Ramberg, E. [Fermilab, Batavia, IL 60510 (United States); Apresyan, A.; Xie, S.; Spiropulu, M. [California Institute of Technology, Pasadena, CA (United States); Kim, H. [University of Chicago, Chicago, IL 60637 (United States)
2015-09-21
We continue the study of micro channel plates (MCP) as the active element of a shower maximum (SM) detector. We present below test beam results obtained with MCPs directly detecting the secondary particles of an electromagnetic shower. The MCP efficiency to shower particles is close to 100%. The time resolution obtained for this new type of SM detector is at the level of 40 ps.
Resende Rosangela Maria Simeão; Jank Liana; Valle Cacilda Borges do; Bonato Ana Lídia Variani
2004-01-01
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of the selection candidates obtained from intraspecific crosses in Panicum maximum as well as the performance of the hybrid progeny of the existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten ...
Latella Ivan
2014-01-01
We analyse the conversion of near-field thermal radiation into usable work by considering the radiation emitted between two planar sources supporting surface phonon-polaritons. The maximum work flux that can be extracted from the radiation is obtained by taking into account that the spectral flux of modes is dominated mainly by these surface modes. The thermodynamic efficiencies are discussed, and an upper bound for the first-law efficiency of this process is obtained.
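As context for the bound discussed in the abstract (the symbols here are generic, not the paper's notation): the first-law efficiency of a radiative energy-conversion process is the ratio of extracted work flux to incoming radiative energy flux, and by the second law any such bound must sit at or below the Carnot limit set by the source and sink temperatures:

```latex
\eta_I = \frac{\dot{W}}{\dot{E}_{\mathrm{in}}},
\qquad
\eta_I \le \eta_C = 1 - \frac{T_{\mathrm{sink}}}{T_{\mathrm{source}}} .
```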
Development of a fiber based Raman probe compatible with interventional magnetic resonance imaging
Ashok, Praveen C.; Praveen, Bavishna B.; Rube, Martin; Cox, Benjamin; Melzer, Andreas; Dholakia, Kishan
2014-02-01
Raman spectroscopy has proven to be a powerful tool for discriminating between normal and abnormal tissue types, and fiber based Raman probes have demonstrated their potential for in vivo disease diagnostics. Combining Raman spectroscopy with Magnetic Resonance Imaging (MRI) opens up new avenues for MR guided minimally invasive optical biopsy. Although Raman probes are commercially available, they are not compatible with an MRI environment due to the metallic components used to align the micro-optic components, such as filters and lenses, at the probe head. Additionally, they are not mechanically compatible with a typical surgical environment, as factors such as sterility and the length of the probe are not addressed in those designs. We have developed an MRI compatible fiber Raman probe with a disposable probe head, hence maintaining sterility. The probe head was specially designed to avoid any material that would cause MR imaging artefacts. The probe head that goes into the patient's body had a diameter compatible with biopsy needles and catheters. The probe has been tested in the MR environment and has proven capable of obtaining Raman signal while under real-time MR guidance.