WorldWideScience

Sample records for broadband minimum variance

  1. Linear Minimum variance estimation fusion

    Institute of Scientific and Technical Information of China (English)

    ZHU Yunmin; LI Xianrong; ZHAO Juan

    2004-01-01

    This paper shows that a general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of local estimates obtained by solving a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_k. Third, if the a priori information, the expectation and covariance of the estimated quantity, is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information as above. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
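
    A minimal numerical sketch of the constrained fusion idea (not the paper's general derivation): for two local unbiased estimates with uncorrelated error covariances, the matrix weights that minimize the fused error covariance subject to summing to the identity reduce to inverse-covariance weighting. The two-sensor values below are hypothetical.

```python
import numpy as np

# Hypothetical two-sensor example: local unbiased estimates x1, x2 of the same
# 2-D quantity with error covariances P1, P2 (assumed uncorrelated here).
x1 = np.array([1.02, 1.95])
x2 = np.array([0.97, 2.10])
P1 = np.array([[0.04, 0.00], [0.00, 0.09]])
P2 = np.array([[0.09, 0.01], [0.01, 0.04]])

# Matrix weights constrained to sum to the identity; for uncorrelated errors
# the LMV solution weights each estimate by its inverse covariance.
P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(P1i + P2i)        # covariance of the fused estimate
W1, W2 = P_fused @ P1i, P_fused @ P2i     # W1 + W2 = I
x_fused = W1 @ x1 + W2 @ x2

print(x_fused, np.diag(P_fused))          # fused variances <= each local variance
```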

  2. Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    This paper investigates the application of adaptive beamforming in medical ultrasound imaging. A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations...
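
    A minimal frequency-domain sketch of the weight computation described above (illustrative array size, snapshot count, and diagonal loading; not the authors' implementation): for each sub-band, the Capon weights w = R^{-1}a / (a^H R^{-1} a) give unit gain towards the focal point while minimizing the output power.

```python
import numpy as np

def mv_weights(R, a, diag_load=1e-3):
    """Minimum variance (Capon) apodization weights for one frequency sub-band.

    R : (M, M) estimated array covariance at this frequency
    a : (M,) steering vector of the focal point (unit-gain constraint)
    Diagonal loading is added for robustness; the value is illustrative.
    """
    M = R.shape[0]
    Rl = R + diag_load * np.real(np.trace(R)) / M * np.eye(M)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: 8 elements, data already delayed (focused), so a = ones(M).
M = 8
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((M, 64)) + 1j * rng.standard_normal((M, 64))
R = snapshots @ snapshots.conj().T / snapshots.shape[1]
w = mv_weights(R, np.ones(M, dtype=complex))
print(np.abs(w.conj() @ np.ones(M)))      # unity gain towards the focal point
```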

  3. Broadband Minimum Variance Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2009-01-01

    to the ultrasound data. As the error increases, it is seen that the MV beamformer is not as robust compared with the DS beamformer with boxcar and Hanning weights. Nevertheless, it is noted that the DS does not outperform the MV beamformer. For errors of 2% and 4% of the correct value, the FWHM are {0.81, 1.25, 0...

  4. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
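
    A minimal sketch of the simplest variant mentioned above (sample covariance, full investment, short sales allowed): the global minimum variance weights are w = S^{-1}1 / (1'S^{-1}1). The return series below are synthetic placeholders, not the Brazilian data used in the paper.

```python
import numpy as np

# Minimum variance portfolio from the sample covariance matrix.
rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.02, size=(500, 10))   # 500 days, 10 stocks (synthetic)
S = np.cov(returns, rowvar=False)                   # sample covariance matrix

ones = np.ones(S.shape[0])
w = np.linalg.solve(S, ones)
w /= w.sum()                                        # weights sum to one

port_var = w @ S @ w
print(w.round(3), port_var)
```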

  5. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and its implications for modelling the firing patterns of single neurons.

  6. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by the Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA have been analyzed over a long period of time (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially in medium and long-term investment horizons.

  7. Generalized Minimum Variance Control for MDOF Structures under Earthquake Excitation

    Directory of Open Access Journals (Sweden)

    Lakhdar Guenfaf

    2016-01-01

    Control of a multi-degree-of-freedom structural system under earthquake excitation is investigated in this paper. The control approach based on the Generalized Minimum Variance (GMV) algorithm is developed and presented. Our approach is a generalization to multivariable systems of the GMV strategy designed initially for single-input-single-output (SISO) systems. Kanai-Tajimi and Clough-Penzien models are used to generate the seismic excitations; these models are parameterized with site-specific soil parameters. Simulation tests using a 3DOF structure are performed and show the effectiveness of the control method.

  8. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.

  9. Medical ultrasound imaging method combining minimum variance beamforming and general coherence factor

    Institute of Scientific and Technical Information of China (English)

    WU Wentao; PU Jie; LU Yi

    2012-01-01

    In the medical ultrasound imaging field, in order to obtain high resolution and correct the phase errors induced by the velocity inhomogeneity of the tissue, a high-resolution medical ultrasound imaging method combining minimum variance beamforming and the general coherence factor is presented. First, the element data are delayed for focusing; the multi-channel data are then used for minimum variance beamforming; at the same time, the data are transformed from array space to beam space to calculate the general coherence factor; finally, the general coherence factor is used to weight the minimum variance beamforming output, and medical images are formed by the imaging system. Experiments based on a point object and an anechoic cyst object are used to verify the proposed method. The results show that the proposed method outperforms both minimum variance beamforming and conventional beamforming in resolution, contrast and robustness.
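
    A minimal single-pixel sketch of the combination described above (illustrative parameter values, not the paper's implementation): minimum variance weights are computed from the delayed element data, and the beamsum is scaled by a generalized coherence factor computed in beam space, where the array-to-beam-space transform is a DFT across the elements and m0 = 1 reduces to the conventional coherence factor.

```python
import numpy as np

def coherence_weighted_mv(x, m0=1, diag_load=1e-2):
    """One image sample: minimum variance beamsum weighted by the generalized
    coherence factor (GCF). x is the (M,) vector of delayed element data for
    this point; m0 is the number of low-frequency beam-space bins kept.
    All parameter values here are illustrative.
    """
    M = len(x)
    # Minimum variance beamforming (single-snapshot covariance with loading).
    R = np.outer(x, x.conj()) + diag_load * (np.abs(x) ** 2).mean() * np.eye(M)
    a = np.ones(M, dtype=complex)
    Ri_a = np.linalg.solve(R, a)
    w = Ri_a / (a.conj() @ Ri_a)
    mv_out = w.conj() @ x

    # Generalized coherence factor from beam space (DFT across the elements).
    spec = np.fft.fft(x)
    keep = np.r_[0:m0, M - m0 + 1:M] if m0 > 1 else np.array([0])
    gcf = np.sum(np.abs(spec[keep]) ** 2) / (np.sum(np.abs(spec) ** 2) + 1e-12)
    return gcf * mv_out

x = np.exp(1j * 0.05 * np.arange(8))   # nearly coherent wavefront across 8 elements
print(abs(coherence_weighted_mv(x)))
```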

  10. SIMULATION STUDY OF GENERALIZED MINIMUM VARIANCE CONTROL FOR AN EXTRACTION TURBINE

    Institute of Scientific and Technical Information of China (English)

    Shi Xiaoping

    2003-01-01

    In an extraction turbine, the pressure of the extracted steam and the rotational speed of the rotor are two important controlled quantities. The traditional linear state feedback control method cannot control the two quantities accurately enough because of nonlinearity and coupling. A generalized minimum variance control method is studied for an extraction turbine. Firstly, a nonlinear mathematical model of the control system for the two quantities is transformed into a linear system with two white noise inputs. Secondly, a generalized minimum variance control law is applied to the system. A comparative simulation is performed. The simulation results indicate that the precision and dynamic quality of the regulating system under the new control law are both better than those under the state feedback control law.

  11. NEW RESULTS ABOUT THE RELATIONSHIP BETWEEN OPTIMALLY WEIGHTED LEAST SQUARES ESTIMATE AND LINEAR MINIMUM VARIANCE ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Juan ZHAO; Yunmin ZHU

    2009-01-01

    The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors make a comprehensive discussion about the relationship between the two estimates. Firstly, the authors consider the classical linear model in which the coefficient matrix of the linear model is deterministic, and the necessary and sufficient condition for equivalence of the two estimates is derived. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information of the parameter being estimated. Secondly, the authors consider the linear model with random coefficient matrix, which is called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information of the parameter.

  12. Minimum variance system identification with application to digital adaptive flight control

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1975-01-01

    A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.

  13. Designing a robust minimum variance controller using discrete slide mode controller approach.

    Science.gov (United States)

    Alipouri, Yousef; Poshtan, Javad

    2013-03-01

    Designing minimum variance controllers (MVC) for nonlinear systems is confronted with many difficulties. The methods able to identify MIMO nonlinear systems are scarce. Harsh control signals produced by MVC are among other disadvantages of this controller. Besides, MVC is not a robust controller. In this article, the Vector ARX (VARX) model is used for simultaneously modeling the system and disturbance in order to tackle these disadvantages. For ensuring the robustness of the control loop, the discrete slide mode controller design approach is used in designing MVC and generalized MVC (GMVC). The proposed method for controller design is tested on a nonlinear experimental Four-Tank benchmark process and is compared with nonlinear MVCs designed by neural networks. In spite of the simplicity of designing GMVCs for the VARX models with uncertainty, the results show that the proposed method is accurate and implementable.

  14. Testing the Minimum Variance Method for Estimating Large Scale Velocity Moments

    CERN Document Server

    Agarwal, Shankar; Watkins, Richard

    2012-01-01

    The estimation and analysis of large-scale bulk flow moments of peculiar velocity surveys is complicated by non-spherical survey geometry, the non-uniform sampling of the matter velocity field by the survey objects, and the typically large measurement errors of the measured line-of-sight velocities. Previously we have developed an optimal "minimum variance" (MV) weighting scheme for using peculiar velocity data to estimate bulk flow moments for idealized dense and isotropic surveys with Gaussian radial distributions that avoids many of these complications. These moments are designed to be easy to interpret and are comparable between surveys. In this paper, we test the robustness of our MV estimators using numerical simulations. Using MV weights, we estimate the underlying bulk flow moments for DEEP, SFI++ and COMPOSITE mock catalogues extracted from the LasDamas and the Horizon Run numerical simulations and compare these estimates to the true moments calculated directly from the simulation boxes. We show that...

  15. Minimum variance imaging based on correlation analysis of Lamb wave signals.

    Science.gov (United States)

    Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi

    2016-08-01

    In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with sparse transducer network. Previous studies in MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of LSCC is its independence of damage parameters. Therefore, LSCC model in the transducer network could be accurately evaluated, the imaging performance is improved subsequently. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance.
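
    A hedged sketch of the damage feature described above (the MVDR imaging machinery itself is omitted): a local signal correlation coefficient between a measured signal and a reference waveform inside a window around the expected arrival time. Being normalized, it is largely independent of the scattering amplitude, which is the property the paper exploits. The sampling rate, waveforms, and window placement below are hypothetical.

```python
import numpy as np

def lscc(signal, reference, center, half_width):
    """Local signal correlation coefficient in a window around 'center'."""
    lo, hi = center - half_width, center + half_width
    s = signal[lo:hi] - signal[lo:hi].mean()
    r = reference[lo:hi] - reference[lo:hi].mean()
    return np.sum(s * r) / (np.sqrt(np.sum(s ** 2) * np.sum(r ** 2)) + 1e-12)

# Hypothetical Lamb-wave-like echo: a windowed tone burst arriving near sample 640.
fs = 1e6
t = np.arange(2000) / fs
ref = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 6e-4) / 1e-4) ** 2)
echo = np.roll(ref, 40)
rng = np.random.default_rng(2)
sig = 0.3 * echo + 0.05 * rng.standard_normal(len(t))   # weak, noisy scattered signal

# High correlation despite the small echo amplitude: the feature is insensitive
# to the (unknown) scattering strength, unlike the raw signal amplitude.
print(round(lscc(sig, echo, center=640, half_width=150), 2))
```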

  16. Early fault detection in automotive ball bearings using the minimum variance cepstrum

    Science.gov (United States)

    Park, Choon-Su; Choi, Young-Chul; Kim, Yang-Hann

    2013-07-01

    Ball bearings in automotive wheels play an important role in a vehicle. They enable an automobile to run and simultaneously support the vehicle. Once faults are generated, even if they are small, they often grow fast even under normal driving conditions and cause vibration and noise. Therefore, it is critical to detect faults as early as possible to prevent bearings from generating harsh noise and vibration. How early faults can be detected depends on how well a detection method extracts the information of early faults from the measured signal. Incipient faults are so small that the fault signal is inherently buried by noise. The minimum variance cepstrum (MVC) has been introduced for the observation of periodic impulse signals under noisy environments. We are particularly focusing on the definition of MVC that goes back to the original definition by Bogert et al., in comparison with the recently prevalent definition of cepstral analysis. In this work, the MVC is therefore obtained by liftering a logarithmic power spectrum, and the lifter bank is designed by the minimum variance algorithm. Furthermore, the efficiency of the method for detecting the periodic fault signal produced by early faults is demonstrated on automotive ball bearings mounted on a vehicle under running conditions. We were able to detect incipient faults in 4 out of 12 normal bearings which passed the acceptance test, as well as in bearings that were recalled due to noise and vibration. In addition, we compared the results of the proposed method with results obtained using other well-established early fault detection methods chosen from four groups of methods classified by the domain of observation. The results demonstrated that MVC determined bearing fault periods more clearly than the other methods under the given condition.
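
    A hedged sketch of the cepstral idea the MVC builds on (a conventional power cepstrum, i.e. the inverse FFT of the log power spectrum, rather than the paper's MV-designed lifter bank): a noisy periodic impulse train produces a cepstral peak at the impact period. All signal parameters below are hypothetical.

```python
import numpy as np

fs = 20_000                                   # sampling rate [Hz], illustrative
t = np.arange(0, 1.0, 1 / fs)
fault_period = 1 / 97.0                       # hypothetical defect period (~97 Hz)

# Periodic impacts convolved with a short decaying resonance, buried in noise.
impacts = (np.mod(t, fault_period) < 1 / fs).astype(float)
ringing = np.exp(-400 * t[:200]) * np.sin(2 * np.pi * 3000 * t[:200])
rng = np.random.default_rng(3)
signal = np.convolve(impacts, ringing, mode="same") + 0.2 * rng.standard_normal(len(t))

# Conventional power cepstrum: inverse FFT of the log power spectrum.
log_spectrum = np.log(np.abs(np.fft.rfft(signal)) ** 2 + 1e-12)
cepstrum = np.fft.irfft(log_spectrum)
quefrency = np.arange(len(cepstrum)) / fs

search = slice(50, len(cepstrum) // 2)        # skip the low-quefrency region
peak = quefrency[search][np.argmax(cepstrum[search])]
print(f"detected period ~ {peak * 1e3:.2f} ms (true {fault_period * 1e3:.2f} ms)")
```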

  17. Thermography based breast cancer detection using texture features and minimum variance quantization

    Science.gov (United States)

    Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar

    2014-01-01

    In this paper, we present a system based on feature extraction and image segmentation techniques for detecting and diagnosing abnormal patterns in breast thermograms. The proposed system consists of three major steps: feature extraction, classification into normal and abnormal patterns, and segmentation of the abnormal pattern. Computed features based on gray-level co-occurrence matrices (GLCM) are used to evaluate the effectiveness of textural information possessed by mass regions. A total of 20 GLCM features are extracted from the thermograms. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a Support Vector Machine classifier, a Naive Bayes classifier and a K-Nearest Neighbor classifier. To evaluate the classification performance, five-fold cross validation and Receiver Operating Characteristic analysis were performed. The verification results show that the proposed algorithm gives the best classification results with the K-Nearest Neighbor classifier, at an accuracy of 92.5%. Image segmentation techniques can play an important role in segmenting and extracting suspected hot regions of interest in the breast infrared images. Three image segmentation techniques are discussed: minimum variance quantization, dilation of the image and erosion of the image. The hottest regions of the thermal breast images are extracted and compared to the original images. According to the results, the proposed method has the potential to extract an almost exact shape of the tumors. PMID:26417334
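
    A minimal sketch of the texture-feature step (a single-offset gray-level co-occurrence matrix with three Haralick-style statistics, not the paper's full 20-feature set or its classifiers); the quantization level, offset, and placeholder image are hypothetical.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix and a few Haralick-style features
    (single offset, no symmetry or averaging over angles)."""
    # Quantize the image to 'levels' gray levels.
    q = np.floor(img / (img.max() + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()

    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

thermogram = np.random.default_rng(4).random((64, 64))   # placeholder image
print(glcm_features(thermogram))
```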

  18. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    Directory of Open Access Journals (Sweden)

    Soodabeh Darzi

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods, verifying the proposed method in both reaching optimal solutions and robustness.

  19. Quantitative measurement of speech sound distortions with the aid of minimum variance spectral estimation method for dentistry use.

    Science.gov (United States)

    Bereteu, L; Drăgănescu, G E; Stănescu, D; Sinescu, C

    2011-12-01

    In this paper, we seek an adequate quantitative method based on minimum variance spectral analysis to reflect the dependence of speech quality on the correct positioning of dental prostheses. We also seek quantitative parameters that reflect the correct position of the dental prostheses in a sensitive manner.

  20. Cosmic Flows on 100 Mpc/h Scales: Standardized Minimum Variance Bulk Flow, Shear and Octupole Moments

    CERN Document Server

    Feldman, Hume A; Hudson, Michael J

    2009-01-01

    The low order moments of the large scale peculiar velocity field are sensitive probes of the matter density fluctuations on very large scales. However, peculiar velocity surveys have varying spatial distributions of tracers, and so the moments estimated are hard to model and thus are not directly comparable between surveys. In addition, the sparseness of typical proper distance surveys can lead to aliasing of small scale power into what is meant to be a probe of the largest scales. Here we extend our previous optimization analysis of the bulk flow to include the shear and octupole moments where velocities are weighted to give an optimal estimate of the moments of an idealized survey, with the variance of the difference between the estimate and the actual flow being minimized. These "minimum variance" (MV) estimates can be designed to calculate the moments on a particular scale with minimal sensitivity to small scale power, and thus different surveys can be directly compared. The MV moments were also designed ...

  1. Multi-period fuzzy mean-semi variance portfolio selection problem with transaction cost and minimum transaction lots using genetic algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Barati

    2016-04-01

    Multi-period models of portfolio selection have been developed in the literature with respect to certain assumptions. In this study, for the first time, the portfolio selection problem has been modeled based on mean-semivariance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included. In addition, the asset return parameters were considered as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were performed. In the numerical study, the problem was solved based on the presence or absence of each type of constraint, including transaction costs and minimum transaction lots. In addition, using sensitivity analysis, the model results were presented under variations of the minimum expected rate over the programming periods.

  2. Performance assessment of excitation system based on minimum variance benchmark

    Institute of Scientific and Technical Information of China (English)

    张虹; 徐滨; 高健; 庞健

    2014-01-01

    Step response tests are generally used to evaluate synchronous generator excitation system performance, but they cannot be carried out online. A method for evaluating excitation system performance against a minimum variance control benchmark is therefore proposed. The performance of the system under the action of the minimum variance controller is taken as the performance upper bound, and the ratio of this benchmark performance to the actual output performance of the system is defined as the performance index. To avoid expanding the Diophantine equation, the filtering and correlation analysis (FCOR) algorithm is introduced. The analysis shows that the method only requires the synchronous generator output voltage data together with a priori knowledge of the system dead time d. Simulation results show that the method simplifies the calculation process and evaluates the control performance of the excitation system online, in a timely and accurate way.
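
    A hedged sketch of the benchmark idea (the time-series route via an AR fit rather than the paper's FCOR algorithm): the first d impulse-response coefficients of the closed-loop output are feedback invariant, so their contribution to the output variance is the minimum variance benchmark, and the performance index is its ratio to the actual output variance. The AR order, dead time, and data below are illustrative.

```python
import numpy as np

def harris_index(y, delay, ar_order=20):
    """Minimum variance (Harris) performance index from routine output data."""
    y = np.asarray(y, float) - np.mean(y)
    p = ar_order
    # Fit y_t = a_1 y_{t-1} + ... + a_p y_{t-p} + e_t by least squares.
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    e = y[p:] - X @ a                                  # innovation estimate
    # Impulse response psi of y driven by e: psi_0 = 1, psi_j = sum a_k psi_{j-k}.
    psi = np.zeros(delay)
    psi[0] = 1.0
    for j in range(1, delay):
        psi[j] = sum(a[k] * psi[j - k - 1] for k in range(min(j, p)))
    sigma_mv2 = np.var(e) * np.sum(psi ** 2)           # minimum variance benchmark
    return sigma_mv2 / np.var(y)                       # 1 = optimal, -> 0 = poor

# Illustrative data: an AR(1)-like terminal-voltage deviation series.
rng = np.random.default_rng(5)
e = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.8 * y[t - 1] + e[t]
print(round(harris_index(y, delay=2), 2))
```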

  3. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    Science.gov (United States)

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented in MATLAB.
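
    A minimal sketch of the underlying LCMV weights that the paper's heuristics then refine (the textbook closed form, not the PSO/DM-AIS/GSA variants): w = R^{-1}C (C^H R^{-1} C)^{-1} f, with a unit-gain constraint towards the user and a null towards one interferer. The array geometry, angles, and noise levels are hypothetical.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Classical LCMV solution w = R^{-1} C (C^H R^{-1} C)^{-1} f.

    R : (M, M) array covariance, C : (M, K) constraint steering matrix,
    f : (K,) desired responses (e.g. 1 towards the user, 0 towards an interferer).
    """
    RiC = np.linalg.solve(R, C)
    return RiC @ np.linalg.solve(C.conj().T @ RiC, f)

def steer(m, theta_deg, d=0.5):
    """Uniform linear array steering vector (element spacing d wavelengths)."""
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.radians(theta_deg)))

M = 8
rng = np.random.default_rng(6)
a_user, a_intf = steer(M, 10), steer(M, -40)
snap = (np.outer(a_user, rng.standard_normal(200)) +
        5 * np.outer(a_intf, rng.standard_normal(200)) +
        0.1 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))))
R = snap @ snap.conj().T / snap.shape[1]
w = lcmv_weights(R, np.column_stack([a_user, a_intf]), np.array([1.0, 0.0]))
print(abs(w.conj() @ a_user), abs(w.conj() @ a_intf))   # ~1 towards user, ~0 null
```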

  4. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
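
    A hedged sketch of the mean-variance model described above: minimize the portfolio variance w'Sw subject to full investment, no short sales, and a target expected return. The weekly return matrix below is a synthetic stand-in for the 20 FBMKLCI component stocks, and the target return is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
weekly = rng.normal(0.002, 0.03, size=(260, 20))        # 5 years of weekly returns
mu, S = weekly.mean(axis=0), np.cov(weekly, rowvar=False)
target = np.quantile(mu, 0.6)                           # illustrative target return

n = len(mu)
res = minimize(
    fun=lambda w: w @ S @ w,                            # portfolio variance
    x0=np.full(n, 1 / n),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,                            # no short sales
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                 {"type": "ineq", "fun": lambda w: w @ mu - target}],
)
w = res.x
print(w.round(3), w @ mu, np.sqrt(w @ S @ w))
```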

  5. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

    Institute of Scientific and Technical Information of China (English)

    WU Mixia; WANG Songgui

    2004-01-01

    For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.

  6. Conversations across Meaning Variance

    Science.gov (United States)

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  7. Adoption of Broadband Services

    DEFF Research Database (Denmark)

    Falch, Morten

    2008-01-01

    successful markets for broadband. This is done through analysis of national policies in three European countries (Denmark, Sweden, and Germany) and the U.S., Japan, and South Korea. We concluded that successful implementation of broadband depends on the kind of policy measures taken at the national level... Many countries have provided active support for stimulating the diffusion of broadband, and national variants of this type of policy are important for explaining national differences in the adoption of broadband...

  8. Nominal analysis of "variance".

    Science.gov (United States)

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.

  9. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    2002-01-01

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the

  10. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  11. A Weighting Criterion Design of Antenna Arrays for Multipath Mitigation: Down-up-Ratio Constrained Minimum Variance (DCMV) Criterion

    Institute of Scientific and Technical Information of China (English)

    李敏; 王飞雪; 李峥嵘; 曾祥华

    2012-01-01

    To mitigate multipath at the monitoring (reference) stations of satellite navigation systems, a weighting criterion for antenna arrays called the Down-up-ratio Constrained Minimum Variance (DCMV) criterion is proposed in this paper. The proposed criterion minimizes the array output power under the constraint that the down-up-ratio is not greater than some threshold r. Therefore, this criterion is able to mitigate both interference and multipath. Simulation results show that it outperforms other criteria commonly used in satellite navigation systems, such as the Power Inversion, Beam Steering, and Maximum Signal-to-Interference-plus-Noise Ratio criteria. The DCMV criterion is able to quantitatively control the incoming multipath energy; however, it loses some array gain as a cost.

  12. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

    Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis...

  13. Passive broadband acoustic thermometry

    Science.gov (United States)

    Anosov, A. A.; Belyaev, R. V.; Klin'shov, V. V.; Mansfel'd, A. D.; Subochev, P. V.

    2016-04-01

    The 1D internal (core) temperature profiles for the model object (plasticine) and the human hand are reconstructed using the passive acoustothermometric broadband probing data. Thermal acoustic radiation is detected by a broadband (0.8-3.5 MHz) acoustic radiometer. The temperature distribution is reconstructed using a priori information corresponding to the experimental conditions. The temperature distribution for the heated model object is assumed to be monotonic. For the hand, we assume that the temperature distribution satisfies the heat-conduction equation taking into account the blood flow. The average error of reconstruction determined for plasticine from the results of independent temperature measurements is 0.6 K for a measuring time of 25 s. The reconstructed value of the core temperature of the hand (36°C) generally corresponds to physiological data. The obtained results make it possible to use passive broadband acoustic probing for measuring the core temperatures in medical procedures associated with heating of human organism tissues.

  14. Hedging with stock index futures: downside risk versus the variance

    NARCIS (Netherlands)

    Brouwer, F.; Nat, van der M.

    1995-01-01

    In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fis...
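
    A hedged numerical sketch of the two notions compared above: the minimum variance hedge ratio h* = Cov(ΔS, ΔF)/Var(ΔF) versus a hedge ratio chosen to minimize the downside semivariance of the hedged return (target 0), found here by a simple grid search. The return series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
dF = rng.normal(0, 0.012, 1000)                         # futures returns
dS = 0.9 * dF + rng.normal(0, 0.006, 1000)              # spot (portfolio) returns

h_mv = np.cov(dS, dF)[0, 1] / np.var(dF, ddof=1)        # variance-minimizing ratio

def semivariance(h):
    hedged = dS - h * dF
    downside = np.minimum(hedged, 0.0)                  # returns below the 0 target
    return np.mean(downside ** 2)

grid = np.linspace(0.0, 2.0, 401)
h_sv = grid[np.argmin([semivariance(h) for h in grid])] # semivariance-minimizing ratio
print(round(h_mv, 3), round(h_sv, 3))
```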

  15. Rapidly converging multichannel controllers for broadband noise and vibrations

    NARCIS (Netherlands)

    Berkhoff, A.P.

    2010-01-01

    Applications are given of a preconditioned adaptive algorithm for broadband multichannel active noise control. Based on state-space descriptions of the relevant transfer functions, the algorithm uses the inverse of the minimum-phase part of the secondary path in order to improve the speed of convergence...

  16. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  17. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns... illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance...

  18. Broadband terahertz spectroscopy

    Institute of Scientific and Technical Information of China (English)

    Wenhui Fan

    2011-01-01

    Spanning the frequency range between infrared (IR) radiation and microwaves, terahertz (THz) waves, also known as T-rays, T-lux, or simply THz, cover the electromagnetic spectrum typically from 100 GHz (10^11 Hz) to 10 THz (10^13 Hz), that is, from 3 mm to 30 μm in wavelength, although slightly different definitions have been quoted by different authors. For a very long time the THz region was an almost unexplored field due to its rather unique location in the electromagnetic spectrum: well-known techniques from the optical or microwave regions cannot be directly employed in the THz range because optical wavelengths are too short and microwave wavelengths are too long compared to THz wavelengths. An overview of the major techniques to generate and detect THz radiation, especially the major approaches to generate and detect coherent ultra-short THz pulses using ultra-short pulsed lasers, is presented. This paper focuses in particular on broadband THz spectroscopy and addresses a number of issues relevant to the generation and detection of broadband pulsed THz radiation as well as broadband time-domain THz spectroscopy (THz-TDS) with the help of ultra-short pulsed lasers. The time-domain waveforms of coherent ultra-short THz pulses from a photoconductive antenna excited by femtosecond lasers with different pulse durations, and their corresponding Fourier-transformed spectra, have been obtained via numerical simulation of the ultrafast dynamics between the femtosecond laser pulse and the photoconductive material. The origins of the fringes modulating the top of the broadband amplitude spectrum, which is measured by an electro-optic detector based on a thin nonlinear crystal and extracted by fast Fourier transformation, are analyzed, and the major solutions to get rid of these fringes are discussed.

  19. Analysis of Variance: Variably Complex

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…

  20. Broadband Radio Service (BRS) and Educational Broadband Service (EBS) Transmitters

    Data.gov (United States)

    Department of Homeland Security — The Broadband Radio Service (BRS), formerly known as the Multipoint Distribution Service (MDS)/Multichannel Multipoint Distribution Service (MMDS), is a commercial...

  1. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
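
    A hedged sketch of a variance-based (Sobol) first-order sensitivity estimate for a toy stochastic simulator, in the spirit of the decomposition above but not the paper's Poisson-process reformulation: the fraction of output variance explained by one input channel, Var(E[Y|X1])/Var(Y), estimated by conditional averaging. The toy model and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulator(x1, x2):
    """Toy stochastic 'simulator' output driven by two independent noise sources."""
    return 3.0 * x1 + 0.5 * x2 ** 2 + 0.1 * rng.standard_normal(np.shape(x1))

n_outer, n_inner = 400, 200
x1 = rng.standard_normal(n_outer)
# Condition on x1: average the output over fresh draws of the other source.
cond_mean = np.array([simulator(np.full(n_inner, v), rng.standard_normal(n_inner)).mean()
                      for v in x1])
y_all = simulator(rng.standard_normal(50_000), rng.standard_normal(50_000))
S1 = cond_mean.var() / y_all.var()             # first-order index of channel x1
print(round(S1, 2))                            # ~ 9 / (9 + Var(0.5*x2^2) + 0.01)
```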

  2. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  3. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  4. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of a detection window. The variance is computed at two delayed instants, so a modified Early-Late loop is used for frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and is robust to the choice of its parameters, which may be chosen within a wide range without a large influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
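
    A hedged sketch of the variance-based detection idea only (a single sliding window and a threshold; the paper's two delayed windows and modified Early-Late loop are omitted): the windowed variance of the received samples jumps where a frame begins after an idle gap. The frame length, noise level, and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
noise = 0.05 * (rng.standard_normal(3000) + 1j * rng.standard_normal(3000))
frame = np.fft.ifft(rng.choice([-1, 1], 1024) + 1j * rng.choice([-1, 1], 1024)) * 32
rx = noise.copy()
rx[1200:1200 + 1024] += frame                  # one OFDM frame starting at sample 1200

# Sliding variance of |rx| over a 'win'-sample detection window
# (moving averages of |rx| and |rx|^2).
win = 64
kernel = np.ones(win) / win
m1 = np.convolve(np.abs(rx), kernel, mode="valid")
m2 = np.convolve(np.abs(rx) ** 2, kernel, mode="valid")
sliding_var = m2 - m1 ** 2

start = np.argmax(sliding_var > 10 * np.median(sliding_var))
print(start + win - 1)                         # coarse frame-start estimate (~1200)
```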

  5. Broadband pendulum energy harvester

    Science.gov (United States)

    Liang, Changwei; Wu, You; Zuo, Lei

    2016-09-01

    A novel electromagnetic pendulum energy harvester with a mechanical motion rectifier (MMR) is proposed and investigated in this paper. The MMR is a mechanism which rectifies the bidirectional swing motion of the pendulum into unidirectional rotation of the generator by using two one-way clutches in the gear system. In this paper, two prototypes of the pendulum energy harvester, with and without MMR, are designed and fabricated. The dynamic model of the proposed MMR pendulum energy harvester is established by considering the engagement and disengagement of the one-way clutches. The simulation results show that the proposed MMR pendulum energy harvester has a larger output power at high frequencies compared with the non-MMR pendulum energy harvester, which benefits from the disengagement of the one-way clutch during pendulum vibration. Moreover, the proposed MMR pendulum energy harvester is broadband compared with the non-MMR pendulum energy harvester, especially when the equivalent inertia is large. An experiment is also conducted to compare the energy harvesting performance of these two prototypes. A flywheel is attached at the end of the generator to make the disengagement more significant. The experimental results also verify that the MMR pendulum energy harvester is broadband and has a larger output power at high frequencies than the non-MMR pendulum energy harvester.

  6. Broadband terahertz fiber directional coupler

    DEFF Research Database (Denmark)

    Nielsen, Kristian; Rasmussen, Henrik K.; Jepsen, Peter Uhd;

    2010-01-01

    We present the design of a short broadband fiber directional coupler for terahertz (THz) radiation and demonstrate a 3 dB coupler with a bandwidth of 0.6 THz centered at 1.4 THz. The broadband coupling is achieved by mechanically downdoping the cores of a dual-core photonic crystal fiber...

  7. Variance-based uncertainty relations

    CERN Document Server

    Huang, Yichen

    2010-01-01

    It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
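
    As a worked special case (standard textbook material, not the new state-independent bounds of the paper), the Robertson variance-based relation and its position-momentum instance:

```latex
% Robertson's uncertainty relation for observables A, B in a state |psi>,
% with the variance defined as sigma_X^2 = <X^2> - <X>^2:
\sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\bigl|\langle [A,B]\rangle\bigr| .
% For A = x, B = p with [x,p] = i\hbar this recovers the Heisenberg principle:
\sigma_x\,\sigma_p \;\ge\; \hbar/2 .
```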

  8. Metasurface Broadband Solar Absorber

    CERN Document Server

    Azad, A K; Sykora, M; Weisse-Bernstein, N R; Luk, T S; Taylor, A J; Dalvit, D A R; Chen, H -T

    2015-01-01

    We demonstrate a broadband, polarization independent, omnidirectional absorber based on a metallic metasurface architecture, which accomplishes greater than 90% absorptance in the visible and near-infrared range of the solar spectrum, and exhibits low emissivity at mid- and far-infrared wavelengths. The complex unit cell of the metasurface solar absorber consists of eight pairs of gold nano-resonators that are separated from a gold ground plane by a thin silicon dioxide spacer. Our experimental measurements reveal high-performance absorption over a wide range of incidence angles for both s- and p-polarizations. We also investigate numerically the frequency-dependent field and current distributions to elucidate how the absorption occurs within the metasurface structure. Furthermore, we discuss the potential use of our metasurface absorber design in solar thermophotovoltaics by exploiting refractory plasmonic materials.

  9. Neutrino mass without cosmic variance

    CERN Document Server

    LoVerde, Marilena

    2016-01-01

    Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic-variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...

  10. Warped functional analysis of variance.

    Science.gov (United States)

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  11. Variance optimal stopping for geometric Levy processes

    DEFF Research Database (Denmark)

    Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund

    2015-01-01

    The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...

  12. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  13. Broadband Advanced Spectral System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NovaSol proposes to develop an advanced hyperspectral imaging system for earth science missions named BRASS (Broadband Advanced Spectral System). BRASS combines...

  14. Broadband transmission EPR spectroscopy.

    Directory of Open Access Journals (Sweden)

    Wilfred R Hagen

    EPR spectroscopy employs a resonator operating at a single microwave frequency and phase-sensitive detection using modulation of the magnetic field. The X-band spectrometer is the general standard, with a frequency in the 9-10 GHz range. Most (bio)molecular EPR spectra are determined by a combination of the frequency-dependent electronic Zeeman interaction and a number of frequency-independent interactions, notably electron spin - nuclear spin interactions and electron spin - electron spin interactions, and unambiguous analysis requires data collection at different frequencies. Extant and long-standing practice is to use a different spectrometer for each frequency. We explore the alternative of replacing the narrow-band source plus single-mode resonator with a continuously tunable microwave source plus a non-resonant coaxial transmission cell in an unmodulated external field. Our source is an arbitrary wave digital signal generator producing an amplitude-modulated sinusoidal microwave in combination with a broadband amplifier for 0.8-2.7 GHz. Theory is developed for coaxial transmission with EPR detection as a function of cell dimensions and materials. We explore examples of a doublet system, a high-spin system, and an integer-spin system. Long, straight, helical, and helico-toroidal cells are developed and tested with dilute aqueous solutions of the spin label hydroxy-tempo. A detection limit of circa 5 µM HO-tempo in water at 800 MHz is obtained for the present setup, and possibilities for future improvement are discussed.

  15. Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    Science.gov (United States)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.

  16. Broadband Rotational Spectroscopy

    Science.gov (United States)

    Pate, Brooks

    2014-06-01

    The past decade has seen several major technology advances in electronics operating at microwave frequencies making it possible to develop a new generation of spectrometers for molecular rotational spectroscopy. High-speed digital electronics, both arbitrary waveform generators and digitizers, continue on a Moore's Law-like development cycle that started around 1993 with device bandwidth doubling about every 36 months. These enabling technologies were the key to designing chirped-pulse Fourier transform microwave (CP-FTMW) spectrometers which offer significant sensitivity enhancements for broadband spectrum acquisition in molecular rotational spectroscopy. A special feature of the chirped-pulse spectrometer design is that it is easily implemented at low frequency (below 8 GHz) where Balle-Flygare type spectrometers with Fabry-Perot cavity designs become technologically challenging due to the mirror size requirements. The capabilities of CP-FTMW spectrometers for studies of molecular structure will be illustrated by the collaborative research effort we have been a part of to determine the structures of water clusters - a project which has identified clusters up to the pentadecamer. A second technology trend that impacts molecular rotational spectroscopy is the development of high power, solid state sources in the mm-wave/THz regions. Results from the field of mm-wave chirped-pulse Fourier transform spectroscopy will be described with an emphasis on new problems in chemical dynamics and analytical chemistry that these methods can tackle. The third (and potentially most important) technological trend is the reduction of microwave components to chip level using monolithic microwave integrated circuits (MMIC) - a technology driven by an enormous mass market in communications. Some recent advances in rotational spectrometer designs that incorporate low-cost components will be highlighted. The challenge to the high-resolution spectroscopy community - as posed by Frank De

  17. Driving demand for broadband networks and services

    CERN Document Server

    Katz, Raul L

    2014-01-01

    This book examines the reasons why various groups around the world choose not to adopt broadband services and evaluates strategies to stimulate the demand that will lead to increased broadband use. It introduces readers to the benefits of higher adoption rates while examining the progress that developed and emerging countries have made in stimulating broadband demand. By relying on concepts such as a supply and demand gap, broadband price elasticity, and demand promotion, this book explains differences between the fixed and mobile broadband demand gap, introducing the notions of substitution and complementarity between both platforms. Building on these concepts, ‘Driving Demand for Broadband Networks and Services’ offers a set of best practices and recommendations aimed at promoting broadband demand.  The broadband demand gap is defined as individuals and households that could buy a broadband subscription because they live in areas served by telecommunications carriers but do not do so because of either ...

  18. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

    Full Text Available Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA, requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by

  19. Performing broadband optical transmission links by appropriate spectral combination of broadband SOA gain, Raman amplification and transmission fiber losses

    Science.gov (United States)

    Motaweh, T.; Sharaiha, A.; Ghisa, L.; Morel, P.; Guégan, M.; Brenot, R.; Verdier, A.

    2017-02-01

    We present the principle of a broadband optical transmission link based on the appropriate combination of the spectral profiles of broadband SOA gain, Raman amplification and transmission fiber losses. We show that, thanks to this principle, a bandwidth as wide as 89 nm (defined at -1 dB) over 75.5 km can be obtained. This bandwidth remains better than 80 nm over a wide range of optical input powers and broadband SOA bias currents, by optimizing the Raman pump. We also show theoretically that the bandwidth of our link is nearly constant for a fiber length from 25 to 100 km by optimizing the SOA current. Our broadband transmission link, extended by 24.5 km of fiber, is then validated by achieving the transmission of five CWDM channels modulated at 10 Gbit/s. All five channels were transmitted over 100 km with a minimum received power sensitivity of about -15.5 dBm for a reference BER of 10^-3.

  20. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
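
    The key quantity in this record is the variance risk premium, the gap between option-implied and realized variance. The sketch below is a minimal, assumed illustration (not the authors' code or data): it computes realized variance as a sum of squared intraday returns, forms a toy implied-variance series with a built-in premium, and runs a simple predictive regression of next-period returns on that premium.

        import numpy as np

        rng = np.random.default_rng(0)

        def realized_variance(intraday_returns):
            """Realized variance: sum of squared high-frequency returns over the period."""
            return np.sum(intraday_returns ** 2)

        # Hypothetical sample: 60 "months", each with 22 days x 78 five-minute returns.
        months = 60
        spot_vol = 0.01                     # assumed volatility per five-minute return
        rv = np.empty(months)
        for m in range(months):
            r = rng.normal(0.0, spot_vol, size=22 * 78)
            rv[m] = realized_variance(r)

        # Toy "model-free implied variance": realized variance plus a positive premium and noise.
        iv = rv + 0.002 + rng.normal(0.0, 0.0005, size=months)
        vrp = iv - rv                       # variance risk premium, implied minus realized

        # Illustrative predictive regression of next-month return on the current premium.
        next_ret = 5.0 * vrp[:-1] + rng.normal(0.0, 0.01, size=months - 1)
        X = np.column_stack([np.ones(months - 1), vrp[:-1]])
        beta, *_ = np.linalg.lstsq(X, next_ret, rcond=None)
        print("estimated intercept and VRP slope:", beta)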

  1. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J.S. [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  2. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances.

  3. Adaptive multichannel control of time-varying broadband noise and vibrations

    NARCIS (Netherlands)

    Berkhoff, A.P.

    2010-01-01

    This paper presents results obtained from a number of applications in which a recent adaptive algorithm for broadband multichannel active noise control is used. The core of the algorithm uses the inverse of the minimum-phase part of the secondary path for improvement of the speed of convergence. A f

  4. A Time and Space-based Dynamic IP Routing in Broadband Satellite Networks

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The topology architecture, characteristics and routing technologies of broadband satellite networks are studied in this paper. The authors propose a routing scheme for satellite networks and design a time and space-based distributed routing algorithm whose complexity is O(1). Simulation results that account for satellite mobility show that the new algorithm can determine the minimum propagation delay paths effectively.

  5. Analysis of variance for model output

    NARCIS (Netherlands)

    Jansen, M.J.W.

    1999-01-01

    A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va

  6. The Correct Kriging Variance Estimated by Bootstrapping

    NARCIS (Netherlands)

    den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.

    2004-01-01

    The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrappi

  7. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...

  8. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  9. 21 CFR 1010.4 - Variances.

    Science.gov (United States)

    2010-04-01

    Title 21, Food and Drugs; Performance Standards for Electronic Products: General; General Provisions; § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...

  10. Achieving universal access to broadband

    DEFF Research Database (Denmark)

    Falch, Morten; Henten, Anders

    2009-01-01

    The paper discusses appropriate policy measures for achieving universal access to broadband services in Europe. Access can be delivered by means of many different technology solutions described in the paper. This means a greater degree of competition and affects the kind of policy measures...

  11. Minimum Length - Maximum Velocity

    CERN Document Server

    Panes, Boris

    2011-01-01

    We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism, we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.

  12. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR) mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach has successfully catered for both normal and non-normal distributions of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
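
    As a hedged sketch of the comparison described here (not the authors' procedure or data), the code below builds two long-only portfolios on simulated returns: one trades the mean return off against variance, the other substitutes the median return in the same objective, and the realized risk and return of both are printed. The exact median-variance formulation in the paper may differ from this assumed variant.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        T, n = 500, 8                                   # hypothetical sample: 500 days, 8 stocks
        returns = rng.normal(0.0005, 0.01, size=(T, n)) + 0.005 * rng.standard_t(4, size=(T, n))

        cov = np.cov(returns, rowvar=False)
        mean_ret = returns.mean(axis=0)
        median_ret = np.median(returns, axis=0)

        def solve(location, risk_aversion=10.0):
            """Maximize location'w - risk_aversion * w' cov w, long-only and fully invested."""
            def neg_objective(w):
                return -(location @ w - risk_aversion * w @ cov @ w)
            cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
            bounds = [(0.0, 1.0)] * n
            w0 = np.full(n, 1.0 / n)
            return minimize(neg_objective, w0, bounds=bounds, constraints=cons).x

        w_mean = solve(mean_ret)       # mean-variance weights
        w_median = solve(median_ret)   # median-variance weights (assumed formulation)

        for name, w in [("mean-variance", w_mean), ("median-variance", w_median)]:
            port = returns @ w
            print(f"{name}: ann. return {252 * port.mean():.3f}, ann. vol {np.sqrt(252) * port.std():.3f}")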

  13. Reduced K-best sphere decoding algorithm based on minimum route distance and noise variance

    Institute of Scientific and Technical Information of China (English)

    Xinyu Mao; Jianjun Wu; Haige Xiang

    2014-01-01

    This paper focuses on reducing the complexity of the K-best sphere decoding (SD) algorithm for the detection of uncoded multiple input multiple output (MIMO) systems. The proposed algorithm utilizes the threshold-pruning method to cut nodes with partial Euclidean distances (PEDs) larger than the threshold. Both the known noise value and the unknown noise value are considered to generate the threshold, which is the sum of the two values. The known noise value is the smallest PED of signals in the detected layers. The unknown noise value is generated by the noise power, the quality of service (QoS) and the signal-to-noise ratio (SNR) bound. Simulation results show that by considering both noise values, the proposed algorithm makes an efficient reduction while the performance drops little.
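
    The sketch below is a generic, assumed illustration of K-best sphere decoding with threshold pruning of partial Euclidean distances, written for a small real-valued MIMO model with a BPSK alphabet. The paper's specific threshold built from the detected-layer PED, the noise power, the QoS and the SNR bound is not reproduced; a fixed threshold stands in for it.

        import numpy as np

        def kbest_sd_threshold(y, H, alphabet, K, threshold):
            """K-best breadth-first sphere decoding with threshold pruning: candidates whose
            partial Euclidean distance (PED) exceeds the threshold are cut."""
            n = H.shape[1]
            Q, R = np.linalg.qr(H)            # y = H s + noise  ->  z = R s + rotated noise
            z = Q.T @ y
            candidates = [(0.0, [])]          # (PED, symbols for layers already detected)
            for layer in range(n - 1, -1, -1):
                expanded = []
                for ped, partial in candidates:
                    for s in alphabet:
                        trial = [s] + partial
                        est = sum(R[layer, layer + j] * trial[j] for j in range(len(trial)))
                        new_ped = ped + (z[layer] - est) ** 2
                        if new_ped <= threshold:          # threshold pruning
                            expanded.append((new_ped, trial))
                expanded.sort(key=lambda c: c[0])
                candidates = expanded[:K]                 # keep at most K best survivors
                if not candidates:                        # everything pruned: detection failure
                    return None
            return np.array(candidates[0][1])

        rng = np.random.default_rng(2)
        n_tx = 4
        H = rng.normal(size=(n_tx, n_tx))
        s_true = rng.choice([-1.0, 1.0], size=n_tx)
        y = H @ s_true + 0.1 * rng.normal(size=n_tx)
        s_hat = kbest_sd_threshold(y, H, alphabet=[-1.0, 1.0], K=4, threshold=5.0)
        print("true:", s_true, "detected:", s_hat)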

  14. Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems

    CERN Document Server

    Correia, Carlos M; Veran, Jean-Pierre; Andersen, David; Lardiere, Olivier; Bradley, Colin

    2015-01-01

    Multi-object astronomical adaptive-optics (MOAO) is now a mature wide-field observation mode to enlarge the adaptive-optics-corrected field in a few specific locations over tens of arc-minutes. The work-scope provided by open-loop tomography and pupil conjugation is amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation aiming to provide enhanced correction across the field with improved performance over static reconstruction methods and less stringent computational complexity scaling laws. Starting from our previous work [1], we use stochastic time-progression models coupled to approximate sparse measurement operators to outline a suitable SA-LQG formulation capable of delivering near optimal correction. Under the spatio-angular framework the wave-fronts are never explicitly estimated in the volume, providing considerable computational savings on 10m-class telescopes and beyond. We find that for Raven, a 10m-class MOAO system with two science channels, the SA-LQG improves the limiting mag...

  15. A minimum variance benchmark to measure the performance of pension funds in Mexico

    Directory of Open Access Journals (Sweden)

    Oscar V. De la Torre Torres

    2015-01-01

    Full Text Available In this article we propose the minimum variance portfolio as the weighting method for a benchmark to measure the performance of pension funds in Mexico. This portfolio was contrasted against those obtained either with the maximum Sharpe ratio or with a linear combination of both methods. This was done with three discrete-event simulations using daily data from January 2002 to May 2013. With the Sharpe ratio, the significance test of Jensen's alpha, and the spanning test of Huberman and Kandel (1987), we found that the simulated portfolios have a similar performance. Using the criteria of risk exposure, representativeness of the markets targeted for investment, and the level of rebalancing proposed by Bailey (1992), we find that the minimum variance method is preferable for measuring the performance of pension funds in Mexico.

  16. A phantom study on temporal and subband Minimum Variance adaptive beamforming

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.

    2014-01-01

    BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single emission responses are also compared with those...

  17. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...

  18. 13 CFR 307.22 - Variances.

    Science.gov (United States)

    2010-01-01

    Title 13, Business Credit and Assistance; Economic Development Administration, Department of Commerce; § 307.22 Variances... Federal, State and local law.

  19. Reducing variance in batch partitioning measurements

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
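
    The following hedged Monte Carlo sketch (not the cited procedure; the error magnitudes are assumed) propagates analytical errors on the initial and final aqueous concentrations through a batch Kd calculation and prints how the relative scatter of Kd varies with the fraction of sorbate that partitions to the sorbent. Under these assumptions the scatter is smallest for intermediate sorbed fractions and grows sharply near the extremes.

        import numpy as np

        rng = np.random.default_rng(3)

        def kd_relative_sd(frac_sorbed, rel_err=0.02, abs_err=0.01, n_trials=20000):
            """Monte Carlo relative standard deviation of a batch Kd measurement, assuming each
            concentration carries a relative error plus an absolute (detection-limit type) error.
            Volume and mass errors are ignored and V/m = 1 is assumed."""
            c0_true = 1.0
            cw_true = c0_true * (1.0 - frac_sorbed)
            c0 = c0_true + (rel_err * c0_true + abs_err) * rng.standard_normal(n_trials)
            cw = cw_true + (rel_err * cw_true + abs_err) * rng.standard_normal(n_trials)
            kd = (c0 - cw) / cw               # sorbed over dissolved, per unit V/m
            return kd.std() / kd.mean()

        for f in [0.1, 0.3, 0.5, 0.7, 0.9]:
            print(f"fraction sorbed {f:.1f}: relative SD of Kd ~ {kd_relative_sd(f):.1%}")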

  20. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  1. Discrimination of frequency variance for tonal sequences

    OpenAIRE

    Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.

    2014-01-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σSTA...

  2. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling methods were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  3. Integrated Broadband Quantum Cascade Laser

    Science.gov (United States)

    Mansour, Kamjou (Inventor); Soibel, Alexander (Inventor)

    2016-01-01

    A broadband, integrated quantum cascade laser is disclosed, comprising ridge waveguide quantum cascade lasers formed by applying standard semiconductor process techniques to a monolithic structure of alternating layers of claddings and active region layers. The resulting ridge waveguide quantum cascade lasers may be individually controlled by independent voltage potentials, resulting in control of the overall spectrum of the integrated quantum cascade laser source. Other embodiments are described and claimed.

  4. Discrimination of frequency variance for tonal sequences.

    Science.gov (United States)

    Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A

    2014-12-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG - σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
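
    A hedged simulation of the two-interval task just described: each interval contains five log-frequency samples drawn from a Gaussian with either the standard or the larger signal variance, and an ideal observer picks the interval whose sample variance is larger. All parameter values below are assumed for illustration and are not taken from the study.

        import numpy as np

        rng = np.random.default_rng(4)

        def percent_correct(var_stan, var_sig, n_pulses=5, n_trials=20000):
            """Ideal-observer percent correct for two-interval variance discrimination.
            The mean log-frequency of each interval is randomized, as in the task, but it
            cancels out when the observer compares sample variances."""
            correct = 0
            for _ in range(n_trials):
                stan = rng.normal(rng.uniform(2.7, 3.3), np.sqrt(var_stan), n_pulses)
                sig = rng.normal(rng.uniform(2.7, 3.3), np.sqrt(var_sig), n_pulses)
                if np.var(sig, ddof=1) > np.var(stan, ddof=1):
                    correct += 1
            return 100.0 * correct / n_trials

        var_stan = 0.01                       # assumed standard variance in log-frequency units
        for ratio in [1.5, 2.0, 3.0, 5.0]:    # ratio of signal variance to standard variance
            pc = percent_correct(var_stan, ratio * var_stan)
            print(f"variance ratio {ratio:.1f}: ideal observer {pc:.1f}% correct")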

  5. Broadband source of polarization entangled photons.

    Science.gov (United States)

    Fraine, A; Minaeva, O; Simon, D S; Egorov, R; Sergienko, A V

    2012-06-01

    A broadband source of polarization entangled photons based on type-II spontaneous parametric down conversion from a chirped PPKTP crystal is presented. With numerical simulation and experimental evaluation, we report a source of broadband polarization entangled states with a bandwidth of approximately 125 nm for use in quantum interferometry. The technique has the potential to become a basis for the development of flexible broadband sources with designed spectral properties.

  6. Charles Ferguson and the "Broadband Problem"

    OpenAIRE

    2004-01-01

    Charles Ferguson has published a book that advocates a major increase in government intervention in the U.S. market for high-speed, "broadband" Internet services. His proposals are based on a faulty understanding of the effects of current telecommunications regulation and unsubstantiated claims that current participants in the broadband marketplace are exercising monopoly power. His policy recommendations would not only fail to accelerate the pace of broadband diffusion in the United States, ...

  7. Principles of broadband switching and networking

    CERN Document Server

    Liew, Soung C

    2010-01-01

    An authoritative introduction to the roles of switching and transmission in broadband integrated services networks Principles of Broadband Switching and Networking explains the design and analysis of switch architectures suitable for broadband integrated services networks, emphasizing packet-switched interconnection networks with distributed routing algorithms. The text examines the mathematical properties of these networks, rather than specific implementation technologies. Although the pedagogical explanations in this book are in the context of switches, many of the fundamenta

  8. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a semimar..., we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
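
    As a minimal sketch of the RV estimator the abstract refers to (simulation settings are assumed, not taken from the paper), the code below generates one day of a constant-volatility log-price path, computes realized variance as the sum of M squared returns for several sampling frequencies, and compares it with the true integrated variance.

        import numpy as np

        rng = np.random.default_rng(5)

        sigma = 0.2                    # assumed annualized spot volatility
        T = 1.0 / 252                  # one trading day in years
        iv_true = sigma ** 2 * T       # integrated variance over the day (constant-vol case)

        n_fine = 23400                 # one-second grid for the simulated "true" path
        dW = rng.normal(0.0, np.sqrt(T / n_fine), size=n_fine)
        log_price = np.concatenate([[0.0], np.cumsum(sigma * dW)])

        for M in [13, 78, 390, 4680]:  # roughly 30-minute, 5-minute, 1-minute, 5-second sampling
            idx = np.linspace(0, n_fine, M + 1).astype(int)
            returns = np.diff(log_price[idx])
            rv = np.sum(returns ** 2)  # realized variance: sum of M squared returns
            print(f"M = {M:5d}: RV = {rv:.3e}, true IV = {iv_true:.3e}")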

  9. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  10. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  11. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  12. Achieving Universal Access to Broadband

    Directory of Open Access Journals (Sweden)

    Morten FALCH

    2009-01-01

    Full Text Available The paper discusses appropriate policy measures for achieving universal access to broadband services in Europe. Access can be delivered by means of many different technology solutions described in the paper. This means a greater degree of competition and affects the kind of policy measures to be applied. The paper concludes that other policy measures than the classical universal service obligation are in play, and discusses various policy measures taking the Lisbon process as a point of departure. Available policy measures listed in the paper include universal service obligation, harmonization, demand stimulation, public support for extending the infrastructure, public private partnerships (PPP), and others.

  13. Understanding broadband over power line

    CERN Document Server

    Held, Gilbert

    2006-01-01

    Understanding Broadband over Power Line explores all aspects of the emerging technology that enables electric utilities to provide support for high-speed data communications via their power infrastructure. This book examines the two methods used to connect consumers and businesses to the Internet through the utility infrastructure: the existing electrical wiring of a home or office; and a wireless local area network (WLAN) access point.Written in a practical style that can be understood by network engineers and non-technologists alike, this volume offers tutorials on electric utility infrastru

  14. Minimum Entropy Orientations

    CERN Document Server

    Cardinal, Jean; Joret, Gwenaël

    2008-01-01

    We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
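
    To make the objective concrete, here is a hedged sketch that scores an orientation of an undirected graph by the entropy of its in-degree sequence (in-degrees normalized by the number of edges), which is the quantity the record says is minimized. The greedy orientation shown is only an assumed heuristic for illustration, not the approximation algorithm from the paper.

        import math
        from collections import defaultdict

        def indegree_entropy(oriented_edges):
            """Entropy (in bits) of the in-degree sequence, normalized by the number of edges."""
            m = len(oriented_edges)
            indeg = defaultdict(int)
            for _, head in oriented_edges:
                indeg[head] += 1
            return -sum((d / m) * math.log2(d / m) for d in indeg.values() if d > 0)

        def greedy_orientation(edges):
            """Assumed heuristic: orient each edge toward the endpoint that currently has the
            larger in-degree, concentrating mass on few vertices to keep the entropy low."""
            indeg = defaultdict(int)
            oriented = []
            for u, v in edges:
                head = u if indeg[u] >= indeg[v] else v
                tail = v if head == u else u
                oriented.append((tail, head))
                indeg[head] += 1
            return oriented

        # Small example graph: a 4-cycle with one chord, vertices 0..3.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        oriented = greedy_orientation(edges)
        print("orientation:", oriented)
        print("in-degree entropy (bits):", round(indegree_entropy(oriented), 3))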

  15. The Dilemma of Minimum Wage Determination (Dilematika Penetapan Upah Minimum)

    Directory of Open Access Journals (Sweden)

    . Pitaya

    2015-02-01

    Full Text Available In the effort to create an appropriate wage for employees, it is necessary to determine wages by considering the increase in poverty without ignoring the increase in productivity, the progressiveness of companies and economic growth. New minimum wages at the provincial level and at the regional/municipality level have been implemented in Indonesia every 1st of January since 2001. The determination of the minimum wage at the provincial level should be done 30 days before 1st January, whereas the determination of the minimum wage at the regional/municipality level should be done 40 days before 1st January. Moreover, there is an article which governs that the minimum wage will be revised annually. Considering the time of determination and the time of revision above, it can be predicted that the periods before and after the determination date will be a crucial time, because controversy among the parties in industrial relationships will arise. The determination of the minimum wage will always be a dilemmatic step which has to be taken by the Government. Through this policy, on one side the government attempts to attract investors; on the other side, the government also has to protect employees so that they receive an appropriate wage in accordance with the standard of living.

  16. Broadband Wireline Provider Service Summary; BBRI_wirelineSum12

    Data.gov (United States)

    University of Rhode Island Geospatial Extension Program — This dataset represents the availability of broadband Internet access in Rhode Island via all wireline technologies assessed by Broadband Rhode Island. Broadband...

  17. Minimum quality standards and exports

    OpenAIRE

    2015-01-01

    This paper studies the interaction of a minimum quality standard and exports in a vertical product differentiation model when firms sell global products. If ex ante quality of foreign firms is lower (higher) than the quality of exporting firms, a mild minimum quality standard in the home market hinders (supports) exports. The minimum quality standard increases quality in both markets. A welfare maximizing minimum quality standard is always lower under trade than under autarky. A minimum quali...

  18. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
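
    The abstract is a general survey; as one concrete, hedged example of a classic variance reduction technique, the sketch below uses antithetic variates to estimate E[exp(U)] for U uniform on (0, 1) and compares the estimator variance with plain Monte Carlo. The target integral is chosen only for illustration.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000                     # total number of uniform draws (antithetic pairs use n/2)

        # Plain Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1); the exact value is e - 1.
        u = rng.uniform(size=n)
        plain = np.exp(u)

        # Antithetic variates: pair each draw u with 1 - u and average the two evaluations.
        u_half = rng.uniform(size=n // 2)
        antithetic = 0.5 * (np.exp(u_half) + np.exp(1.0 - u_half))

        print(f"exact         : {np.e - 1:.5f}")
        print(f"plain MC      : {plain.mean():.5f}  (variance per draw {plain.var():.5f})")
        print(f"antithetic MC : {antithetic.mean():.5f}  (variance per pair {antithetic.var():.5f})")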

  19. Managing product inherent variance during treatment

    NARCIS (Netherlands)

    Verdenius, F.

    1996-01-01

    The natural variance of agricultural product parameters complicates recipe planning for product treatment, i.e. the process of transforming a product batch from its initial state to a prespecified final state. For a specific product P, recipes are currently composed by human experts on the basis of

  20. The Variance of Language in Different Contexts

    Institute of Scientific and Technical Information of China (English)

    申一宁

    2012-01-01

    Language can be quite different (here referring to the meaning) in different contexts. There are three categories of context: the culture, the situation and the co-text. In this article, we analyse the variance of language in each of these three aspects. The article is written to help people better understand the meaning of a language in a specific context.

  1. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  2. 40 CFR 142.43 - Disposition of a variance request.

    Science.gov (United States)

    2010-07-01

    ... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...

  3. 75 FR 10464 - Broadband Technology Opportunities Program

    Science.gov (United States)

    2010-03-08

    ... National Telecommunications and Information Administration RIN 0660-ZA28 Broadband Technology Opportunities... Technology Opportunities Program (BTOP) is extended until 5:00 p.m. Eastern Daylight Time (EDT) on March 26... Sustainable Broadband Adoption (SBA) projects. DATES: All applications for funding CCI projects must...

  4. Broadband Helps Bridge the Achievement Gap

    Science.gov (United States)

    Simmons, Jamal

    2013-01-01

    In education, technology is giving new meaning to the phrase "equal opportunity." Teachers and students in schools across America--urban, rural, wealthy, and impoverished--are gaining access to online learning and all of its benefits through broadband technology. What is broadband? According to the Federal Communications Commission (FCC), it is…

  5. An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction

    CERN Document Server

    Adewunmi, Adrian; Byrne, Mike

    2008-01-01

    This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half width of plus/minus 0.5 for our chosen performance measure (Total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half width of plus/minus 2.8 for our chosen performance measure (Total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a huge number of simulation replications to reduce variance for our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
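
    As a hedged illustration of the sequential sampling idea only (the crossdocking model itself is not reproduced and all numbers are assumed), the sketch below keeps adding independent replications of a hypothetical simulation output until the 95% confidence interval half-width falls below a target value.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        def run_replication():
            """Stand-in for one simulation replication of the output measure (hypothetical)."""
            return rng.normal(loc=115.0, scale=12.0)

        def sequential_sampling(target_half_width, confidence=0.95, n0=10, n_max=100_000):
            """Add replications until the half-width t * s / sqrt(n) meets the target."""
            data = [run_replication() for _ in range(n0)]
            while True:
                n = len(data)
                s = np.std(data, ddof=1)
                t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
                half_width = t_crit * s / np.sqrt(n)
                if half_width <= target_half_width or n >= n_max:
                    return np.mean(data), half_width, n
                data.append(run_replication())

        mean, hw, n = sequential_sampling(target_half_width=0.5)
        print(f"mean {mean:.2f} +/- {hw:.2f} (95% CI) after {n} replications")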

  6. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.

  7. The GEOSCOPE broadband seismic observatory

    Science.gov (United States)

    Douet, Vincent; Vallée, Martin; Zigone, Dimitri; Bonaimé, Sébastien; Stutzmann, Eléonore; Maggi, Alessia; Pardo, Constanza; Bernard, Armelle; Leroy, Nicolas; Pesqueira, Frédéric; Lévêque, Jean-Jacques; Thoré, Jean-Yves; Bes de Berc, Maxime; Sayadi, Jihane

    2016-04-01

    The GEOSCOPE observatory has provided continuous broadband data to the scientific community for the past 34 years. The 31 operational GEOSCOPE stations are installed in 17 countries, across all continents and on islands throughout the oceans. They are equipped with three component very broadband seismometers (STS1, T240 or STS2) and 24 or 26 bit digitizers (Q330HR). Seismometers are installed with warpless base plates, which decrease long period noise on horizontal components by up to 15dB. All stations send data in real time to the IPGP data center, which transmits them automatically to other data centers (FDSN/IRIS-DMC and RESIF) and tsunami warning centers. In 2016, three stations are expected to be installed or re-installed: in Western China (WUS station), in Saint Pierre and Miquelon Island (off the East coast of Canada) and in Wallis and Futuna (Southwest Pacific Ocean). The waveform data are technically validated by IPGP (25 stations) or EOST (6 stations) in order to check their continuity and integrity. Scientific data validation is also performed by analyzing seismic noise level of the continuous data and by comparing real and synthetic earthquake waveforms (body waves). After these validations, data are archived by the IPGP data center in Paris. They are made available to the international scientific community through different interfaces (see details on http://geoscope.ipgp.fr). Data are duplicated at the FDSN/IRIS-DMC data center and a similar duplication at the French national data center RESIF will be operational in 2016. The GEOSCOPE broadband seismic observatory also provides near-real time information on global moderate-to-large seismicity (above magnitude 5.5-6) through the automated application of the SCARDEC method (Vallée et al., 2011). By using global data from the FDSN - in particular from GEOSCOPE and IRIS/USGS stations - earthquake source parameters (depth, moment magnitude, focal mechanism, source time function) are determined about 45

  8. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  9. Minimum fuel mode evaluation

    Science.gov (United States)

    Orme, John S.; Nobbs, Steven G.

    1995-01-01

    The minimum fuel mode of the NASA F-15 research aircraft is designed to minimize fuel flow while maintaining constant net propulsive force (FNP), effectively reducing thrust specific fuel consumption (TSFC), during cruise flight conditions. The test maneuvers were at stabilized flight conditions. The aircraft test engine was allowed to stabilize at the cruise conditions before data collection was initiated; data were then recorded with performance seeking control (PSC) not engaged, and then again with the PSC system engaged. The maneuvers were flown back-to-back to allow for direct comparisons by minimizing the effects of variations in the test day conditions. The minimum fuel mode was evaluated at subsonic and supersonic Mach numbers and focused on three altitudes: 15,000; 30,000; and 45,000 feet. Flight data were collected for part, military, partial, and maximum afterburning power conditions. The TSFC savings at supersonic Mach numbers, ranging from approximately 4% to nearly 10%, are in general much larger than at subsonic Mach numbers because of PSC trims to the afterburner.

  10. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...

  11. Broadband cloaking for flexural waves

    CERN Document Server

    Zareei, Ahmad

    2016-01-01

    The governing equation for elastic waves in flexural plates is not form invariant, and hence designing a cloak for such waves faces a major challenge. Here, we present the design of a perfect broadband cloak for flexural waves through the use of a nonlinear transformation, and by matching term-by-term the original and transformed equations. For a readily achievable flexural cloak in a physical setting, we further present an approximate adoption of our perfect cloak under more restrictive physical constraints. Through direct simulation of the governing equations, we show that this cloak, as well, maintains a consistently high cloaking efficiency over a broad range of frequencies. The methodology developed here may be used for steering waves and designing cloaks in other physical systems with non form-invariant governing equations.

  12. Interpreting Flux from Broadband Photometry

    CERN Document Server

    Brown, Peter J; Roming, Peter W A; Siegel, Michael

    2016-01-01

    We discuss the transformation of observed photometry into flux for the creation of spectral energy distributions and the computation of bolometric luminosities. We do this in the context of supernova studies, particularly as observed with the Swift spacecraft, but the concepts and techniques should be applicable to many other types of sources and wavelength regimes. Traditional methods of converting observed magnitudes to flux densities are not very accurate when applied to UV photometry. Common methods for extinction and the integration of pseudo-bolometric fluxes can also lead to inaccurate results. The sources of inaccuracy, though, also apply to other wavelengths. Because of the complicated nature of translating broad-band photometry into monochromatic flux densities, comparison between observed photometry and a spectroscopic model is best done by comparing in the natural units of the observations. We recommend that integrated flux measurements be made using a spectrum or spectral energy distribution whic...

  13. Broadband fast semiconductor saturable absorber.

    Science.gov (United States)

    Jacobovitz-Veselka, G R; Keller, U; Asom, T

    1992-12-15

    Kerr lens mode-locked (KLM) solid-state lasers are typically not self-starting. We address this problem by introducing a broadband semiconductor saturable absorber that could be used as a tunable, all-solid-state, passive starting mechanism. We extend the wavelength tunability of a semiconductor saturable absorber to more than 100 nm using a band-gap-engineered low-temperature molecular-beam-epitaxy (MBE)-grown bulk AlGaAs semiconductor saturable absorber in which the absorption edge of the saturable absorber has been artificially broadened by continuously reducing the Al concentration during the MBE growth. We demonstrate its tunability and its feasibility as a starting mechanism for KLM with a picosecond resonant passive mode-locked Ti:sapphire laser. The extension to femtosecond KLM lasers has been discussed previously.

  14. Tuchola County Broadband Network (TCBN)

    DEFF Research Database (Denmark)

    Zabludowski, Antoni; Dubalski, B.; Zabludowski, Lukasz

    2012-01-01

    In this paper the design project (plan) of the Tuchola City broadband IP optical network is presented. The extended version of the network plan constitutes the technical part of the network Feasibility Study, which is expected to be implemented in Tuchola and financed from European Regional Development Funds. The network plan presented in the paper covers both the topological structure of the fiber optic network and the active equipment for the network. In the project described in the paper it is suggested to use a Modular Cable System (MCS) for the passive infrastructure and Metro Ethernet technology for the active equipment. The presented solution provides a low cost of construction (CAPEX), ease of implementation of the network and low operating cost (OPEX). Moreover, the parameters of the Metro Ethernet switches installed in the network guarantee the scalability of the network for at least 10 years.

  15. Broadband synthetic aperture geoacoustic inversion.

    Science.gov (United States)

    Tan, Bien Aik; Gerstoft, Peter; Yardim, Caglar; Hodgkiss, William S

    2013-07-01

    A typical geoacoustic inversion procedure involves powerful source transmissions received on a large-aperture receiver array. A more practical approach is to use a single moving source and/or receiver in a low signal to noise ratio (SNR) setting. This paper uses single-receiver, broadband, frequency coherent matched-field inversion and exploits coherently repeated transmissions to improve estimation of the geoacoustic parameters. The long observation time creates a synthetic aperture due to relative source-receiver motion. This approach is illustrated by studying the transmission of multiple linear frequency modulated (LFM) pulses which results in a multi-tonal comb spectrum that is Doppler sensitive. To correlate well with the measured field across a receiver trajectory and to incorporate transmission from a source trajectory, waveguide Doppler and normal mode theory is applied. The method is demonstrated with low SNR, 100-900 Hz LFM pulse data from the Shallow Water 2006 experiment.

  16. High-dimensional regression with unknown variance

    CERN Document Server

    Giraud, Christophe; Verzelen, Nicolas

    2011-01-01

    We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator and some references are collected for some more general models, including multivariate regression and nonparametric regression.
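
    One practical baseline for tuning the Lasso when the noise variance is unknown is plain cross-validation. The hedged sketch below shows that baseline with scikit-learn on simulated coordinate-sparse data; it is not meant to reproduce the specific schemes compared in the review, and the problem sizes are assumed.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(8)

        # Coordinate-sparse linear model with unknown noise level: y = X beta + sigma * eps.
        n, p, k = 100, 300, 5                  # n samples, p features, k nonzero coefficients
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:k] = 3.0
        sigma = 1.5                            # unknown to the fitting procedure
        y = X @ beta + sigma * rng.normal(size=n)

        # Cross-validated Lasso picks the regularization strength from the data alone.
        model = LassoCV(cv=5, random_state=0).fit(X, y)
        support = np.flatnonzero(model.coef_)
        print("selected alpha:", round(model.alpha_, 4))
        print("recovered support:", support[:10], "(true support: 0..4)")
        print("residual-based noise estimate:", round(np.std(y - model.predict(X), ddof=1), 3))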

  17. The Theory of Variances in Equilibrium Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  18. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  19. Fractional constant elasticity of variance model

    OpenAIRE

    Ngai Hang Chan; Chi Tim Ng

    2007-01-01

    This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.

  20. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray flux along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)

  1. AIRTV: Broadband Direct to Aircraft

    Science.gov (United States)

    Sorbello, R.; Stone, R.; Bennett, S. B.; Bertenyi, E.

    2002-01-01

    Airlines have been continuously upgrading their wide-body, long-haul aircraft with IFE (in-flight entertainment) systems that can support from 12 to 24 channels of video entertainment as well as provide the infrastructure to enable in-seat delivery of email and internet services. This is a direct consequence of increased passenger demands for improved in-flight services along with the expectations that broadband delivery systems capable of providing live entertainment (news, sports, financial information, etc.) and high speed data delivery will soon be available. The recent events of Sept. 11 have slowed the airlines' upgrade of their IFE systems, but have also highlighted the compelling need for broadband aeronautical delivery systems to include operational and safety information. Despite the impact of these events, it is estimated that by 2005 more than 3000 long haul aircraft (servicing approximately 1 billion passengers annually) will be fully equipped with modern IFE systems. Current aircraft data delivery systems, which use either Inmarsat or NATS, are lacking in bandwidth and consequently are unsuitable to satisfy passenger demands for broadband email/internet services or the airlines' burgeoning data requirements. Present live video delivery services are limited to regional coverage and are not readily expandable to global or multiregional service. Faced with a compelling market demand for high data transport to aircraft, AirTV has been developing a broadband delivery system that will meet both passengers' and airlines' needs. AirTV is a global content delivery system designed to provide a range of video programming and data services to commercial airlines. When AirTV is operational in 2004, it will provide a broadband connection directly to the aircraft, delivering live video entertainment, internet/email service and essential operational and safety data. The system has been designed to provide seamless global service to all airline routes except for those

  2. Applications of non-parametric statistics and analysis of variance on sample variances

    Science.gov (United States)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

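    As a small companion to the homogeneity-of-variance discussion above, the following sketch shows the standard SciPy calls for Bartlett's normal-theory test and the more robust Levene (Brown-Forsythe) test on synthetic groups; the data and group sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic example: three simulated groups, one with a larger spread.
rng = np.random.default_rng(1)
groups = [rng.normal(0.0, s, size=40) for s in (1.0, 1.0, 2.0)]

# Bartlett's test is the classical normal-theory test for equal variances;
# it is sensitive to departures from normality.
b_stat, b_p = stats.bartlett(*groups)

# Levene's test with the median as centre (Brown-Forsythe) is a robust
# alternative that tolerates non-Gaussian data.
l_stat, l_p = stats.levene(*groups, center="median")

print(f"Bartlett: stat={b_stat:.2f}, p={b_p:.3g}")
print(f"Levene:   stat={l_stat:.2f}, p={l_p:.3g}")
```
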
  3. The Parabolic variance (PVAR), a wavelet variance based on least-square fit

    CERN Document Server

    Vernotte, F; Bourgeois, P -Y; Rubiola, E

    2015-01-01

    The Allan variance (AVAR) is one option among the wavelet variances. Although a milestone in the analysis of frequency fluctuations and of the long-term stability of clocks, and certainly the most widely used one, AVAR is not suitable when fast noise processes show up, chiefly because of the poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...

  4. SCEC Broadband Platform Strong Ground Motion Simulations

    Science.gov (United States)

    Kumar, S.; Callaghan, S.; Maechling, P. J.; Olsen, K. B.; Archuleta, R. J.; Somerville, P. G.; Graves, R. W.; Jordan, T. H.; Broadband Platform Working Group

    2011-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The goal of the SCEC Broadband Simulation Platform is to generate broadband (0-10 Hz) ground motions for earthquakes using deterministic low-frequency and stochastic high-frequency simulations. SCEC developers have integrated complex scientific modules for rupture generation, low-frequency deterministic seismogram synthesis, high-frequency stochastic seismogram synthesis, and non-linear site effects calculation into a system that supports easy on-demand computation of broadband seismograms. The SCEC Broadband platform has two primary modes of operation, validation mode, and scenario mode. In validation mode, the earthquake modeling software calculates broadband seismograms for one of three earthquakes, Northridge, Loma Prieta, or Landers at sites with observed strong motion data. Then, the platform calculates goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for each event. In scenario mode, the user can specify a scenario earthquake and a list of sites and calculate ground motions at each site for the scenario event. In February 2011, SCEC released Broadband Platform 11.2 as an open-source scientific software distribution. Since that time, we have continued development of the platform by adding a new site response module and new goodness of fit measures by Mayhew and Olsen. Along with a source code distribution of the Broadband Platform, we now offer a virtual software image distribution of the platform to support its use on a variety of computing hardware and operating systems.

  5. Designing broadband plasmonic nanoantennas for ultrasensing

    Science.gov (United States)

    Yi, Zhenhuan; Wang, Kai; Voronine, Dmitri V.; Traverso, Andrew; Sokolov, Alexei

    2011-03-01

    Various designs of broadband plasmonic nanoantennas made of gold and silver nanospheres are considered and optimized for ultrasensitive spectroscopic applications. The simulated nanostructures show a broadband optical response which may be tuned by varying the size, position and composition of nanospheres. Near-field enhancement in nanoantenna hot spots is analyzed and compared with previous literature results in the case of a fractal plasmonic nanolens. Broadband plasmonic nanoantennas may allow detecting ultrasmall concentrations of toxic materials and may be used for decoding DNA and for ultrafast nanophotonics applications.

  6. Enhancements to INO's broadband SWIR/MWIR spectroscopic lidar

    Science.gov (United States)

    Lambert-Girard, Simon; Babin, François; Allard, Martin; Piché, Michel

    2013-09-01

    Recent advances in the INO broadband SWIR/MWIR spectroscopic lidar will be presented. The system is designed for the detection of gaseous pollutants via active infrared differential optical absorption spectroscopy (DOAS). Two distinctive features are a sub-nanosecond PPMgO:LN OPG capable of generating broadband (10 to plane array used in the output plane of a grating spectrograph. The operation consists in closely gating the returns from back-scattering off topographic features, and is thus, for now, a path integrated measurement. All wavelengths are emitted and received simultaneously, for low concentration measurements and DOAS fitting methods are then applied. The OPG approach enables the generation of moderate FWHM continua with high spectral energy density and tunable to absorption features of many molecules. Recent measurements demonstrating a minimum sensitivity of 10 ppm-m for methane around 3.3 μm with ˜ 2 mW average power in less than 10 seconds will be described. Results of enhancements to the laser source using small or large bandwidth seeds constructed from telecom off-the-shelf components indicate that the OPG output spectral energy density can have controllable spectral widths and shapes. It also has a slightly more stable spectral shape from pulse to pulse than without the seed (25 % enhancement). Most importantly, the stabilized output spectra will allow more sensitive measurements.

  7. 47 CFR 90.1405 - Shared wireless broadband network.

    Science.gov (United States)

    2010-10-01

    47 CFR 90.1405 - Shared wireless broadband network. The Shared Wireless Broadband Network developed by the 700 MHz Public/Private Partnership must be designed to meet requirements associated with a nationwide, public safety broadband network....

  8. 78 FR 17432 - Kiewit Power Constructors Co. et al.; Application for a Permanent Variance and Request for Comments

    Science.gov (United States)

    2013-03-21

    ...; and (3) any views or arguments on any issue of fact or law presented in the variance application. ] I... located in the car; (c)(14)(i)--Using a minimum of two wire ropes for drum hoisting; and (c)(16)--Material... Federal OSHA. Kentucky stated that its statutory law requires affected employers to apply to the state...

  9. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance

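    The elevation-variance representation lends itself to simple per-cell bookkeeping. The following sketch (not the Gamma-SLAM code; the class and method names are hypothetical) maintains a running mean and variance of elevation samples per grid cell using Welford's online update.

```python
import numpy as np

class ElevationVarianceGrid:
    """Per-cell running mean/variance of elevation samples (Welford updates).
    A simplified sketch of an elevation-variance map, not the Gamma-SLAM code."""
    def __init__(self, shape):
        self.count = np.zeros(shape)
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)        # sum of squared deviations from the mean

    def add_sample(self, row, col, z):
        self.count[row, col] += 1
        delta = z - self.mean[row, col]
        self.mean[row, col] += delta / self.count[row, col]
        self.m2[row, col] += delta * (z - self.mean[row, col])

    def variance(self, row, col):
        n = self.count[row, col]
        return self.m2[row, col] / (n - 1) if n > 1 else float("nan")

# Usage: accumulate stereo-derived elevation hits into cell (10, 20).
grid = ElevationVarianceGrid((100, 100))
for z in (1.02, 0.97, 1.10, 1.05):
    grid.add_sample(10, 20, z)
print(grid.mean[10, 20], grid.variance(10, 20))
```
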
  10. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented where the methods of stratification, importance sampling and quasi Monte Carlo are investigated.

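    For context, the simplest (and often inefficient) way to obtain endpoint-conditioned sample paths is rejection sampling: simulate the chain forward and keep only paths that land in the required end state. The sketch below illustrates that baseline on a made-up three-state rate matrix; it is not the bisection algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Rate matrix of a small continuous-time Markov chain (rows sum to zero).
Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.5, -1.2, 0.7],
              [ 0.2, 0.8, -1.0]])

def sample_path(a, T):
    """Gillespie-style forward simulation from state a over [0, T]."""
    t, state, path = 0.0, a, [(0.0, a)]
    while True:
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)
        if t >= T:
            return path
        probs = Q[state].clip(min=0.0)
        state = rng.choice(len(Q), p=probs / probs.sum())
        path.append((t, state))

def bridge_by_rejection(a, b, T, max_tries=100_000):
    """Naive endpoint-conditioned sampling: keep forward paths ending in b."""
    for _ in range(max_tries):
        path = sample_path(a, T)
        if path[-1][1] == b:
            return path
    raise RuntimeError("no accepted path; rejection is inefficient here")

print(bridge_by_rejection(a=0, b=2, T=1.5))
```
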
  11. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  12. A relation between information entropy and variance

    CERN Document Server

    Pandey, Biswajit

    2016-01-01

    We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.

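    A quick numerical check of this kind of relation, under an assumed normalization (probabilities p_i = (1 + d_i)/N with small zero-mean fluctuations d_i, which need not match the paper's convention), is sketched below: the entropy deficit ln(N) - S approaches Var(d)/2 as the fluctuations shrink.

```python
import numpy as np

# For p_i = (1 + d_i)/N with small zero-mean fluctuations d_i, a Taylor
# expansion gives ln(N) - S ~ Var(d)/2.  Verify numerically.
rng = np.random.default_rng(3)
N = 10_000
for sigma in (0.01, 0.05, 0.1):
    d = rng.normal(0.0, sigma, N)
    d -= d.mean()                        # enforce zero mean so probabilities sum to 1
    p = (1.0 + d) / N
    entropy = -np.sum(p * np.log(p))
    print(f"sigma={sigma:.2f}  ln(N)-S={np.log(N) - entropy:.5f}  Var(d)/2={d.var() / 2:.5f}")
```
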
  13. Analysis of the Proposed Ghana Broadband Strategy

    DEFF Research Database (Denmark)

    Williams, Idongesit; Botwe, Yvonne

    This project studied the Ghana Broadband Strategy with the aim of evaluating the recommendations in the strategy side by side with the broadband development in Ghana. The researchers conducted interviews both officially and unofficially with ICT stakeholders, made observations, studied Government publications and sourced information from the internet in order to find out the extent of broadband development in Ghana. A SWOT analysis is carried out to determine the strengths, weaknesses, opportunities and threats to the development of the broadband market in Ghana. The facilitation, regulatory and market...... the market. It is the hope of the researchers that this academic exercise will be useful to anyone who wishes to study the policy effect on the Ghanaian telecommunications market and the Ghanaian approach to Universal Access and Service.

  14. Analyzing Broadband Divide in the Farming Sector

    DEFF Research Database (Denmark)

    Jensen, Michael; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2013-01-01

    The agriculture industry has been evolving for centuries. Currently, the technological development of Internet-oriented farming tools makes it possible to increase the productivity and efficiency of this sector. Many of the already available tools and applications require high bandwidth in both the upstream and downstream directions. The main constraint is that farms are naturally located in rural areas where the required access broadband data rates are not available. This paper studies the broadband divide in relation to the Danish agricultural sector. Results show that there is an important difference between the broadband availability for farms and for the rest of the households/buildings in the country. This divide may be slowing down the technological development that the farming industry needs in order to keep its competitiveness in the market. Therefore, broadband development in rural areas...

  15. Wireless Broadband Access and Accounting Schemes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, we propose two wireless broadband access and accounting schemes. In both schemes, the accounting system adopts RADIUS protocol, but the access system adopts SSH and SSL protocols respectively.

  16. Nanophotonic Design for Broadband Light Management

    Energy Technology Data Exchange (ETDEWEB)

    Kosten, Emily; Callahan, Dennis; Horowitz, Kelsey; Pala, Ragip; Atwater, Harry

    2014-10-13

    We describe nanophotonic design approaches for broadband light management, including i) crossed-trapezoidal Si structures, ii) Si photonic crystal superlattices, and iii) tapered and inhomogeneous-diameter III-V/Si nanowire arrays.

  17. Broadband unidirectional cloak designed by eikonal theory.

    Science.gov (United States)

    Liu, Xuan; Wu, Xiaojia; Zhang, Luoning; Zhou, Jing

    2015-11-02

    A method for designing optical devices is derived based on eikonal theory; it obtains the eikonal distribution on a curved surface according to the propagation characteristics of the subsequent light wave. Then, combining this with the phase matching condition, we designed a broadband unidirectional cloak. Different from the reported unidirectional cloaks, the proposed one can be used for coherent waves and has continuous broadband performance. Moreover, it has three cloaked regions. Full-wave simulation results verify the properties of the cloak.

  18. Multi-Mode Broadband Patch Antenna

    Science.gov (United States)

    Romanofsky, Robert R. (Inventor)

    2001-01-01

    A multi-mode broadband patch antenna is provided that allows the same aperture to be used at independent frequencies, such as reception at 19 GHz and transmission at 29 GHz. Furthermore, the multi-mode broadband patch antenna provides a ferroelectric film that allows for tuning capability of the multi-mode broadband patch antenna over a relatively large tuning range. The alternative use of a semiconductor substrate permits reduced control voltages since the semiconductor functions as a counter electrode.

  19. Is European Broadband Ready for Smart Grid?

    DEFF Research Database (Denmark)

    Balachandran, Kartheepan; Pedersen, Jens Myrup

    2014-01-01

    In this short paper we compare the communication requirements for three Smart Grid scenarios with the availability of broadband and mobile communication networks in Europe. We show that only in the most demanding case - where data is collected and transmitted every second - is a standard GSM/GPRS connection not enough, whereas in the less demanding scenarios almost all European households can be covered by a standard broadband technology for use with Smart Grid.

  20. Broadband mode conversion via gradient index metamaterials.

    Science.gov (United States)

    Wang, HaiXiao; Xu, YaDong; Genevet, Patrice; Jiang, Jian-Hua; Chen, HuanYang

    2016-04-21

    We propose a design for broadband waveguide mode conversion based on gradient index metamaterials (GIMs). Numerical simulations demonstrate that the zeroth order of transverse magnetic mode or the first order of transverse electric mode (TM0/TE1) can be converted into the first order of transverse magnetic mode or the second order of transverse electric mode (TM1/TE2) over a broad band of frequencies. As an application, asymmetric propagation is achieved by integrating zero index metamaterials inside the GIM waveguide.

  1. Popularity at Minimum Cost

    CERN Document Server

    Kavitha, Telikepalli; Nimbhorkar, Prajakta

    2010-01-01

    We consider an extension of the popular matching problem in this paper. The input to the popular matching problem is a bipartite graph G = (A U B, E), where A is a set of people, B is a set of items, and each person a belonging to A ranks a subset of items in an order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b belonging to B is a non-negative price cost(b), that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is...

  2. Broadband direct RF digitization receivers

    CERN Document Server

    Jamin, Olivier

    2014-01-01

    This book discusses the trade-offs involved in designing direct RF digitization receivers for the radio frequency and digital signal processing domains.  A system-level framework is developed, quantifying the relevant impairments of the signal processing chain, through a comprehensive system-level analysis.  Special focus is given to noise analysis (thermal noise, quantization noise, saturation noise, signal-dependent noise), broadband non-linear distortion analysis, including the impact of the sampling strategy (low-pass, band-pass), analysis of time-interleaved ADC channel mismatches, sampling clock purity and digital channel selection. The system-level framework described is applied to the design of a cable multi-channel RF direct digitization receiver. An optimum RF signal conditioning, and some algorithms (automatic gain control loop, RF front-end amplitude equalization control loop) are used to relax the requirements of a 2.7GHz 11-bit ADC. A two-chip implementation is presented, using BiCMOS and 65nm...

  3. Interpreting Flux from Broadband Photometry

    Science.gov (United States)

    Brown, Peter J.; Breeveld, Alice; Roming, Peter W. A.; Siegel, Michael

    2016-10-01

    We discuss the transformation of observed photometry into flux for the creation of spectral energy distributions (SED) and the computation of bolometric luminosities. We do this in the context of supernova studies, particularly as observed with the Swift spacecraft, but the concepts and techniques should be applicable to many other types of sources and wavelength regimes. Traditional methods of converting observed magnitudes to flux densities are not very accurate when applied to UV photometry. Common methods for extinction and the integration of pseudo-bolometric fluxes can also lead to inaccurate results. The sources of inaccuracy, though, also apply to other wavelengths. Because of the complicated nature of translating broadband photometry into monochromatic flux densities, comparison between observed photometry and a spectroscopic model is best done by forward modeling the spectrum into the count rates or magnitudes of the observations. We recommend that integrated flux measurements be made using a spectrum or SED which is consistent with the multi-band photometry rather than converting individual photometric measurements to flux densities, linearly interpolating between the points, and integrating. We also highlight some specific areas where the UV flux can be mischaracterized.

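    The forward-modeling recommendation can be illustrated with a minimal synthetic-photometry sketch: integrate a model spectrum through a bandpass to predict a band-averaged quantity, instead of converting a magnitude to a monochromatic flux density. The spectrum and filter curve below are entirely hypothetical.

```python
import numpy as np

# Hypothetical model spectrum f_lambda(lambda) and filter transmission T(lambda).
wave = np.linspace(1500.0, 3500.0, 2001)              # wavelength grid [Angstrom]
f_lam = 1e-15 * (wave / 2000.0) ** -2.0               # erg s^-1 cm^-2 A^-1 (toy SED)
T = np.exp(-0.5 * ((wave - 2250.0) / 250.0) ** 2)     # toy UV bandpass

# Photon-weighted band-averaged flux density: the quantity a photon-counting
# detector actually constrains (forward model), rather than f_lambda at a
# single "effective" wavelength.
f_band = np.trapz(f_lam * T * wave, wave) / np.trapz(T * wave, wave)
print(f"band-averaged f_lambda = {f_band:.3e} erg s^-1 cm^-2 A^-1")
```
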
  4. Social Security's special minimum benefit.

    Science.gov (United States)

    Olsen, K A; Hoffmeyer, D

    Social Security's special minimum primary insurance amount (PIA) provision was enacted in 1972 to increase the adequacy of benefits for regular long-term, low-earning covered workers and their dependents or survivors. At the time, Social Security also had a regular minimum benefit provision for persons with low lifetime average earnings and their families. Concerns were rising that the low lifetime average earnings of many regular minimum beneficiaries resulted from sporadic attachment to the covered workforce rather than from low wages. The special minimum benefit was seen as a way to reward regular, low-earning workers without providing the windfalls that would have resulted from raising the regular minimum benefit to a much higher level. The regular minimum benefit was subsequently eliminated for workers reaching age 62, becoming disabled, or dying after 1981. Under current law, the special minimum benefit will phase out over time, although it is not clear from the legislative history that this was Congress's explicit intent. The phaseout results from two factors: (1) special minimum benefits are paid only if they are higher than benefits payable under the regular PIA formula, and (2) the value of the regular PIA formula, which is indexed to wages before benefit eligibility, has increased faster than that of the special minimum PIA, which is indexed to inflation. Under the Social Security Trustees' 2000 intermediate assumptions, the special minimum benefit will cease to be payable to retired workers attaining eligibility in 2013 and later. Their benefits will always be larger under the regular benefit formula. As policymakers consider Social Security solvency initiatives--particularly proposals that would reduce benefits or introduce investment risk--interest may increase in restoring some type of special minimum benefit as a targeted protection for long-term low earners. Two of the three reform proposals offered by the President's Commission to Strengthen

  5. Dimension reduction based on weighted variance estimate

    Institute of Scientific and Technical Information of China (English)

    ZHAO JunLong; XU XingZhong

    2009-01-01

    In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the Sliced Average Variance Estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension, and this selected best estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR), SAVE, etc. Many methods such as SIR and SAVE put the same weight on each observation to estimate the central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to the distance of the observations from the CS. The weight function gives WVE very good performance in general and complicated situations, for example, when the distribution of the regressor deviates severely from the elliptical distribution that underlies many methods such as SIR. Compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations comparing the performance of WVE with other existing methods confirm the advantage of WVE.

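    For context, the following is a compact sketch of the Sliced Average Variance Estimate (SAVE), the baseline that WVE generalizes; it is not the paper's weighted variance estimate, and the synthetic single-index data are invented for illustration.

```python
import numpy as np

def save_directions(X, y, n_slices=10, n_dirs=2):
    """Minimal Sliced Average Variance Estimate (SAVE) -- the baseline that the
    weighted variance estimate generalizes -- not the paper's WVE itself."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))    # whitening: z = L.T @ (x - mu)
    Z = (X - mu) @ L
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        V = np.cov(Z[idx], rowvar=False)
        A = np.eye(p) - V
        M += (len(idx) / n) * (A @ A)
    vals, vecs = np.linalg.eigh(M)
    beta = L @ vecs[:, ::-1][:, :n_dirs]          # back to the original X scale
    return beta / np.linalg.norm(beta, axis=0), vals[::-1]

# Synthetic example where the response depends on X only through b'X.
rng = np.random.default_rng(4)
X = rng.normal(size=(4000, 5))
b = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = (X @ b) ** 2 + 0.2 * rng.normal(size=len(X))
beta, vals = save_directions(X, y, n_dirs=1)
print(np.round(beta[:, 0], 2))                    # should be proportional to b
```
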
  7. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model

    Institute of Scientific and Technical Information of China (English)

    Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

    2015-01-01

    In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.

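    The role of the Lagrange multiplier is easiest to see in the static single-period analogue of the mean-variance problem, sketched below with a made-up covariance matrix and return vector; the dynamic CEV problem in the paper is of course solved through the HJB equation rather than a finite linear system.

```python
import numpy as np

# Static single-period analogue of the mean-variance problem: minimize w' S w
# subject to w'1 = 1 and w'mu = m, solved via Lagrange multipliers.
S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])        # toy covariance matrix
mu = np.array([0.05, 0.08, 0.12])         # toy expected returns
m = 0.09                                  # target expected return

n = len(mu)
ones = np.ones(n)
# Bordered KKT system: stationarity plus the two equality constraints
# (multiplier signs are a convention; the weights are unaffected).
A = np.block([[2 * S, ones[:, None], mu[:, None]],
              [ones[None, :], np.zeros((1, 2))],
              [mu[None, :], np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(n), [1.0, m]])
sol = np.linalg.solve(A, rhs)
w = sol[:n]
print("weights:", np.round(w, 3), " variance:", float(w @ S @ w))
```
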
  8. On Eliminating The Scrambling Variance In Scrambled Response Models

    Directory of Open Access Journals (Sweden)

    Zawar Hussain

    2012-06-01

    Full Text Available To circumvent response bias in sensitive surveys, randomized response models are being used. Adding to this line of work, we propose an improved response model utilizing both additive and multiplicative scrambling. The proposed model provides greater flexibility in terms of fixing the constant K depending upon the guessed distribution of the sensitive variable and the nature of the population. The proposed model yields an unbiased estimator and is anticipated to be more protective of the privacy of the respondents. The relative efficiency of the proposed estimator is compared with that of the Hussain and Shabbir (2007) RRM. Furthermore, the proposed model itself is improved by taking two responses from each respondent and suggesting a weighted estimator that is unbiased and has the minimum possible sampling variance. The suggested weighted estimator is unconditionally more efficient than all of the estimators suggested so far. Future research may be focused on the privacy protection provided by scrambling models. More scrambling models may be identified and improved by taking two responses from each respondent in such a way that the scrambling effect is balanced out.

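    The unbiasedness argument behind additive-plus-multiplicative scrambling can be checked with a short simulation. The sketch below uses a generic scrambled response Z = R*X + S with known E[R] and E[S]; the distributions are invented and the estimator is the generic moment-based one, not necessarily the weighted estimator proposed in the paper.

```python
import numpy as np

# Generic additive + multiplicative scrambling: each respondent reports
# Z = R * X + S, where the scrambling variables R and S have known means and
# are independent of the sensitive variable X.  Then
#   E[Z] = E[R] E[X] + E[S]  =>  X_hat = (Z_bar - E[S]) / E[R]  is unbiased.
rng = np.random.default_rng(5)
n = 5_000
true_mean = 3.2
X = rng.gamma(shape=2.0, scale=true_mean / 2.0, size=n)   # sensitive variable
R = rng.uniform(0.8, 1.2, size=n)                         # E[R] = 1.0
S = rng.normal(10.0, 2.0, size=n)                         # E[S] = 10.0

Z = R * X + S                                             # reported responses
x_hat = (Z.mean() - 10.0) / 1.0
print(f"true mean {true_mean:.3f}, estimate {x_hat:.3f}")
```
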
  9. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability...... that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending...... on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....

  10. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.

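    The procedure described above translates directly into a few lines of SciPy, sketched below; the degrees of freedom and the noncentrality parameter must be chosen for the specific statistic and effect size, and the values used here are only illustrative.

```python
from scipy.stats import f, ncf

def manova_power_approx(df1, df2, noncentrality, alpha=0.05):
    """Approximate power from an F approximation to a MANOVA statistic:
    the noncentral-F tail probability beyond the central-F critical value.
    The appropriate df1, df2 and noncentrality depend on the chosen statistic
    (Pillai, Hotelling-Lawley, Wilks) and on the effect size."""
    f_crit = f.ppf(1.0 - alpha, df1, df2)        # critical value under H0
    return ncf.sf(f_crit, df1, df2, noncentrality)

# Illustrative numbers only: hypothesis df 6, error df 114, noncentrality 12.
print(f"power ~ {manova_power_approx(df1=6, df2=114, noncentrality=12.0):.3f}")
```
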
  11. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    Science.gov (United States)

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ^3 and 1/τ^2, the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition - or corner - where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.

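    A rough numerical sketch of the two estimators is given below. The Allan variance from phase data is standard; the PVAR-like quantity is approximated here as half the mean squared difference of least-squares (Ω-counter) frequency estimates over adjacent windows, which captures the idea but may differ in detail from the published estimator.

```python
import numpy as np

def avar(x, m, tau0):
    """Overlapping Allan variance from phase data x sampled every tau0 seconds."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

def pvar_like(x, m, tau0):
    """PVAR-style estimate: half the mean squared difference of least-squares
    (Omega-counter) frequency estimates over adjacent, non-overlapping windows.
    A sketch of the idea; the published estimator may differ in detail."""
    n_win = len(x) // m
    t = np.arange(m) * tau0
    freqs = np.array([np.polyfit(t, x[k * m:(k + 1) * m], 1)[0] for k in range(n_win)])
    return 0.5 * np.mean(np.diff(freqs) ** 2)

# White phase noise example: both variances should fall quickly with tau = m*tau0.
rng = np.random.default_rng(6)
x = rng.normal(0.0, 1e-9, 100_000)      # phase data [s], tau0 = 1 s
for m in (10, 100, 1000):
    print(f"tau={m:5d} s  AVAR={avar(x, m, 1.0):.3e}  PVAR~={pvar_like(x, m, 1.0):.3e}")
```
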
  12. Delivery of satellite based broadband services

    Science.gov (United States)

    Chandrasekhar, M. G.; Venugopal, D.

    2007-06-01

    Availability of speedy communication links to individuals and organizations is essential to keep pace with the business and social requirements of the modern age. While PCs have been continuously growing in processing speed and memory capability, the availability of broadband communication links is still not satisfactory in many parts of the world. Recognizing the need to spur the growth of broadband services and improve broadband penetration, the telecom policies of different countries have placed special emphasis on them. While the emphasis is on the use of fiber optics and copper in the local loop, satellite communication systems will play an important role in quickly establishing these services in areas where fiber and other communication systems are not available and are not likely to be available for a long time to come. To make satellite communication systems attractive for the widespread, cost-effective delivery of these services, special emphasis has to be placed on factors affecting the cost of the bandwidth and the equipment. As broadband services are bandwidth demanding, the use of bandwidth-efficient modulation techniques and a suitable system architecture are some of the important aspects that need to be examined. Further, there is a need to re-examine how information services are provided, keeping in view user requirements and the broadcast capability of satellite systems over wide areas. This paper addresses some aspects of delivering broadband services via satellite, taking the Indian requirements as an example.

  13. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... sound operations.\\4\\ Also, section 1362(e) provides for additional capital and reserve requirements to... provide additional authorities for FHFA regarding minimum capital requirements. Section 1362(a... section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to the eleven banks that...

  14. Minimum signals in classical physics

    Institute of Scientific and Technical Information of China (English)

    邓文基; 许基桓; 刘平

    2003-01-01

    The bandwidth theorem for Fourier analysis on any time-dependent classical signal is shown using the operator approach to quantum mechanics. Following discussions about squeezed states in quantum optics, the problem of minimum signals presented by a single quantity and its squeezing is proposed. It is generally proved that all such minimum signals, squeezed or not, must be real Gaussian functions of time.

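    The Gaussian minimum can be checked numerically. The sketch below, under the usual convention of intensity-weighted RMS widths, verifies that a Gaussian envelope attains sigma_t * sigma_omega = 1/2; the grid and pulse parameters are arbitrary.

```python
import numpy as np

# Numerical check that a Gaussian signal attains the minimum RMS
# time-bandwidth product sigma_t * sigma_omega = 1/2.
N, dt, sigma = 2 ** 14, 0.01, 1.0
t = (np.arange(N) - N // 2) * dt
g = np.exp(-t ** 2 / (2.0 * sigma ** 2))

# Intensity-weighted RMS width in time.
w_t = np.abs(g) ** 2
sigma_t = np.sqrt(np.sum(t ** 2 * w_t) / np.sum(w_t))

# Spectrum and intensity-weighted RMS width in angular frequency.
G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g)))
omega = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dt))
w_o = np.abs(G) ** 2
sigma_o = np.sqrt(np.sum(omega ** 2 * w_o) / np.sum(w_o))

print(f"sigma_t * sigma_omega = {sigma_t * sigma_o:.4f}  (Gaussian minimum: 0.5)")
```
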
  15. Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River

    Science.gov (United States)

    Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.

    2002-05-01

    We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.

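    A minimal version of the test statistic is easy to reproduce. The sketch below applies the normalized cumulative sum of squares statistic to level-1 Haar wavelet coefficients of a simulated series whose variance doubles halfway through; the actual analysis used other wavelet filters and a nondecimated transform, so this is only an illustration of the statistic.

```python
import numpy as np

def cusum_of_squares(w):
    """Normalized cumulative sum of squares statistic D for coefficients w;
    a large D suggests the variance is not constant, and the argmax locates
    the candidate change point."""
    P = np.cumsum(w ** 2) / np.sum(w ** 2)
    k = np.arange(1, len(w) + 1)
    D = np.abs(P - k / len(w))
    return D.max(), int(D.argmax())

# Simulated series whose variance doubles halfway through.
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1.0, 512), rng.normal(0, 2.0, 512)])

# Level-1 Haar wavelet coefficients (scaled differences of adjacent pairs).
w1 = (x[1::2] - x[0::2]) / np.sqrt(2.0)

D, k = cusum_of_squares(w1)
print(f"D = {D:.3f}, change point near sample {2 * k} of the original series")
```
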
  16. Energy efficient evolution of mobile broadband networks

    Energy Technology Data Exchange (ETDEWEB)

    Micallef, G.

    2013-04-15

    existing macro base station sites together with the deployment of outdoor or indoor small cells (heterogeneous network) provide the best compromise between performance and power consumption. Focusing on one of the case studies, it is noted that the upgrade of both HSPA and LTE network layers results in the power consumption of the network increasing by a factor of 4. When coupled with the growth in capacity introduced by the various upgrades (x50), the efficiency of the network is still greatly improved. Over the evolution period, the stated increase in power consumption does not consider improvement in base station equipment. By considering a number of different equipment versions, the evolution study is further extended to also include the impact of replacing old equipment. Results show that an aggressive replacement strategy and the upgrade of sites to remote radio head can restrain the increase in power consumption of the network to just 17%. In addition to upgrading equipment, mobile network operators can further reduce power consumption by enabling a number of power saving features. These features often exploit redundancies within the network and/or the variation in traffic over a daily period. An example of such feature is sleep mode, which allows for base station sites to be systematically powered down during hours with low network traffic. While dependent on the traffic profile, within an urban area sleep mode can reduce the daily energy consumption of the network by around 20%. In addition to the different variances of sleep mode, the potential savings of other features are also described. Selecting a power efficient network capacity evolution path, replacing old and less efficient equipment, and enabling power saving features, can all considerably reduce the power consumption of future mobile broadband networks. Studies and recommendations presented within this thesis demonstrate that it is realistic for mobile network operators to boost network capacity by a

  17. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    Science.gov (United States)

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact-naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.

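    The recommended power model of variance leads to a simple weighted-regression recipe, sketched below on synthetic calibration data: estimate s^2 = a*x^b from replicate variances, weight each point by 1/s^2, and fit by weighted least squares.

```python
import numpy as np

# Synthetic calibration with signal-dependent (heteroskedastic) noise:
# replicate measurements whose standard deviation grows with concentration.
rng = np.random.default_rng(8)
conc = np.repeat([1.0, 2.0, 5.0, 10.0, 20.0, 50.0], 5)
signal = 0.8 * conc + 0.5 + rng.normal(0.0, 0.02 * conc ** 0.9)

# Fit the power model of variance, s^2 = a * x^b, from replicate variances.
levels = np.unique(conc)
rep_var = np.array([signal[conc == c].var(ddof=1) for c in levels])
b, log_a = np.polyfit(np.log(levels), np.log(rep_var), 1)
weights = 1.0 / (np.exp(log_a) * conc ** b)           # w_i = 1 / sigma_i^2

# Weighted least squares for the straight-line calibration.
W = np.diag(weights)
X = np.column_stack([np.ones_like(conc), conc])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ signal)
print(f"intercept = {beta[0]:.3f}, slope = {beta[1]:.3f}")
```
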
  18. Standardization of broadband radio access networks

    Science.gov (United States)

    Kruys, Jan; Haine, John

    1998-09-01

    This paper introduces the ETSI Project on Broadband Radio Access Networks (EP-BRAN). BRAN systems will be used for local area applications with limited mobility (HIPERLAN/2); fixed access with area coverage in urban and rural areas (HIPERACCESS); and short range high-speed point-to-point links (HIPERLINK). They will support transport of either IP or ATM protocols, supporting managed quality of service. Such systems are needed to provide access to the future broadband core networks supporting multi-media applications. The paper addresses the motivation and market demand for broadband radio access networks, the objectives and scope of the Project, the operational and technical requirements, the types of networks to be standardized, the scope of the standards, the issue of spectrum and the Project schedule.

  19. Magnetically levitated autoparametric broadband vibration energy harvesting

    Science.gov (United States)

    Kurmann, L.; Jia, Y.; Manoli, Y.; Woias, P.

    2016-11-01

    Some of the lingering challenges within the current paradigm of vibration energy harvesting (VEH) involve narrow operational frequency range and the inevitable non-resonant response from broadband noise excitations. Such VEHs are only suitable for limited applications with fixed sinusoidal vibration, and fail to capture a large spectrum of the real world vibration. Various arraying designs, frequency tuning schemes and nonlinear vibratory approaches have only yielded modest enhancements. To fundamentally address this, the paper proposes and explores the potentials in using highly nonlinear magnetic spring force to activate an autoparametric oscillator, in order to realize an inherently broadband resonant system. Analytical and numerical modelling illustrate that high spring nonlinearity derived from magnetic levitation helps to promote the 2:1 internal frequency matching required to activate parametric resonance. At the right internal parameters, the resulting system can intrinsically exhibit semi-resonant response regardless of the bandwidth of the input vibration, including broadband white noise excitation.

  20. Service Differentiation in Residential Broadband Networks

    DEFF Research Database (Denmark)

    Sigurdsson, Halldór Matthias

    2004-01-01

    As broadband gains widespread adoption with residential users, revenue-generating voice and video services have not yet taken off. This slow uptake is often attributed to a lack of Quality of Service management in residential broadband networks. To resolve this and induce service variety, network access providers are implementing service differentiation in their networks, where voice and video get prioritised before data. This paper discusses the role of network access providers in multipurpose packet-based networks and the available migration strategies for supporting multimedia services in digital subscriber line (DSL) based residential broadband networks. Four possible implementation scenarios and their technical characteristics and effects are described. To conclude, the paper discusses how network access providers can be induced to open their networks for third-party service providers.

  1. Design and performance of a broadband circularly polarized modified semi-elliptical microstrip patch antenna

    Science.gov (United States)

    Sharma, Brajraj; Sharma, Vijay; Tiwari, Ajay; Sharma, K. B.; Bhatnagar, Deepak

    2013-01-01

    In this communication, the design and performance of a modified semi-elliptical microstrip patch antenna are presented to achieve circularly polarized broadband performance. The proposed structure consists of a semi-elliptical patch with a D-shaped slot designed on a three-layered substrate: two FR-4 substrates separated by a 1 mm thick foam layer. The simulation analysis is carried out using IE3D simulation software. The proposed antenna covers the entire median band (3.4 to 3.69 GHz) allocated for Wi-Max communication systems. Two modes with resonance frequencies very close to each other (3.36 GHz and 3.66 GHz) are excited to achieve broadband performance. The impedance bandwidth of the proposed antenna is close to 21%. The minimum axial ratio is close to 1.8 dB, while the axial ratio bandwidth is close to 4.63%. The radiation patterns within the bandwidth are almost identical in shape.

  2. Broadband transceiver design of distributed amplify-and-forward MIMO relays in correlated channels

    Science.gov (United States)

    Hu, Chia-Chang; Tang, Kang-Tsao

    2013-12-01

    Combined optimization of the source precoder, relay weighting matrices, and destination decoder is proposed for dual-hop amplify-and-forward (AF) multiple-input multiple-output (MIMO) multiple-relay networks with the source-to-destination link in correlated channels. This broadband cooperative transceiver design is studied based on the minimum mean-squared error (MMSE) criterion under correlated fading channels. The optimization problem is neither concave nor convex, so an iterative nonlinear matrix conjugate gradient (MCG) search algorithm is applied to explore locally optimal solutions. Simulation results show that the joint broadband cooperative transceiver architecture performs better than the non-cooperative transceiver design in terms of bit-error rate (BER).

  3. Participation in the broadband society in Denmark

    DEFF Research Database (Denmark)

    Falch, Morten; Henten, Anders; Skouby, Knud Erik

    2009-01-01

    The purpose of the paper is to provide an empirical overview of broadband developments in Denmark. The overview includes sections on coverage and penetration, connection speeds, retail prices, competition, interconnection prices, and residential access to Internet. The documentation shows that De...... explanation is not that they cannot afford it but that they don't need it. Still, there is an issue with respect to the participation in the broadband society, when an increasing part of communications in society is based on the Internet....

  4. Broadband Polarizers Based on Graphene Metasurfaces

    CERN Document Server

    Guo, Tianjing

    2016-01-01

    We present terahertz (THz) metasurfaces based on aligned rectangular graphene patches placed on top of a dielectric layer to convert the transmitted linearly polarized waves to circularly or elliptically polarized radiation. Our results lead to the design of an ultrathin broadband THz quarter-wave plate. In addition, ultrathin metasurfaces based on arrays of L-shaped graphene periodic patches are demonstrated to achieve broadband cross-polarization transformation in reflection and transmission. The proposed metasurface designs have tunable responses and are envisioned to become the building blocks of several integrated THz systems.

  5. A polarization-independent broadband terahertz absorber

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Cheng; Zang, XiaoFei, E-mail: xfzang@usst.edu.cn, E-mail: ymzhu@usst.edu.cn; Wang, YiQiao; Chen, Lin; Cai, Bin; Zhu, YiMing, E-mail: xfzang@usst.edu.cn, E-mail: ymzhu@usst.edu.cn [Shanghai Key Laboratory of Modern Optical System and Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093 (China)

    2014-07-21

    A highly efficient broadband terahertz absorber is designed, fabricated, and experimentally as well as theoretically evaluated. The absorber comprises a heavily doped silicon substrate and a well-designed two-dimensional grating. Due to the destructive interference of waves and diffraction, the absorber can achieve over 95% absorption in a broad frequency range from 1 to 2 THz and for angles of incidence from 0° to 60°. Such a terahertz absorber is also polarization-independent due to its symmetrical structure. This omnidirectional and broadband absorber has potential applications in anti-reflection coatings, imaging systems, and so on.

  6. Noise radar with broadband microwave ring correlator

    Science.gov (United States)

    Susek, Waldemar; Stec, Bronislaw

    2011-06-01

    A principle of quadrature correlation detection of noise signals using an analog broadband microwave correlator is presented in the paper. Measurement results for the correlation function of noise signals are shown, and the application of such a solution in noise radar for precise determination of distance changes and their velocity is also presented. Results for short-range noise radar operation are presented for both static and moving objects. Experimental results using a 2.6-3.6 GHz noise-like waveform for the signal from a breathing human are presented. Conclusions and future plans for applications of the presented detection technique in broadband noise radars bring the paper to an end.

  7. VT Total Broadband Availability by Census Block - 06-2013

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201306 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2013. This...

  8. VT Public Locations of Broadband Data - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  9. VT Public Locations of Broadband Data - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  10. VT Total Broadband Availability by Census Block - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  11. VT Cable Broadband Availability by Census Block - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  12. VT Cable Broadband Availability by Census Block - 12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  13. VT Wireless Broadband Availability by Census Block - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  14. VT Total Broadband Availability by Census Block - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  15. VT Wireline Broadband Availability by Census Block - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  16. VT Detailed Broadband Availability by Census Block -12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  17. VT DSL Broadband Availability by Census Block - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  18. VT Detailed Broadband Availability by Census Block -12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  19. VT Wireline Broadband Availability by Census Block - 12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  20. VT Cable Broadband Availability by Census Block - 12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  1. VT Detailed Broadband Availability by Census Block - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  2. VT Detailed Broadband Availability by Census Block - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  3. VT DSL Broadband Availability by Census Block - 12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  4. VT Total Broadband Availability by Census Block - 12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  5. VT Wireline Broadband Availability by Census Block - 12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  6. VT Public Locations of Broadband Data - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  7. VT Wireless Broadband Availability by Census Block - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  8. VT DSL Broadband Availability by Census Block - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  9. VT Detailed Broadband Availability by Census Block - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  10. VT Wireline Broadband Availability by Census Block - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  11. VT Cable Broadband Availability by Census Block - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  12. VT Wireless Broadband Availability by Census Block - 12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  13. VT Wireless Broadband Availability by Census Block - 12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  14. VT Public Locations of Broadband Data - 12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  15. VT Wireline Broadband Availability by Census Block - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  16. VT Cable Broadband Availability by Census Block - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  17. VT DSL Broadband Availability by Census Block - 12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  18. VT Total Broadband Availability by Census Block - 12-2012

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201212 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2012. This...

  19. VT DSL Broadband Availability by Census Block - 06-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201006 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2010. This...

  20. VT Total Broadband Availability by Census Block - 12-2010

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201012 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2010. This...

  1. VT Public Locations of Broadband Data - 12-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201112 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 12/31/2011. This...

  2. VT Wireless Broadband Availability by Census Block - 06-2011

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VTBB201106 VT Broadband Availability Dataset represents wireline and wireless 'broadband service' availability in VT as of 6/30/2011. This...

  3. Minimum length-maximum velocity

    Science.gov (United States)

    Panes, Boris

    2012-03-01

    We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.

  4. Simulation of Broadband Time Histories Combining Deterministic and Stochastic Methodologies

    Science.gov (United States)

    Graves, R. W.; Pitarka, A.

    2003-12-01

    We present a methodology for generating broadband (0 - 10 Hz) ground motion time histories using a hybrid technique that combines a stochastic approach at high frequencies with a deterministic approach at low frequencies. Currently, the methodology is being developed for moderate and larger crustal earthquakes, although the technique can theoretically be applied to other classes of events as well. The broadband response is obtained by summing the separate responses in the time domain using matched Butterworth filters centered at 1 Hz. We use a kinematic description of fault rupture, incorporating spatial heterogeneity in slip, rupture velocity and rise time by discretizing an extended finite-fault into a number of smaller subfaults. The stochastic approach sums the response for each subfault assuming a random phase, an omega-squared source spectrum and simplified Green's functions (Boore, 1983). Gross impedance effects are incorporated using quarter wavelength theory (Boore and Joyner, 1997) to bring the response to a generic baserock level (e.g., Vs = 1000 m/s). The deterministic approach sums the response for many point sources distributed across each subfault. Wave propagation is modeled using a 3D viscoelastic finite difference algorithm with the minimum shear wave velocity set at 620 m/s. Short- and mid-period amplification factors provided by Borcherdt (1994) are used to develop frequency dependent site amplification functions. The amplification functions are applied to the stochastic and deterministic responses separately since these may have different (computational) reference site velocities. The site velocity is taken as the measured or estimated value of Vs30. The use of these amplification factors is attractive because they account for non-linear response by considering the input acceleration level. We note that although these design factors are strictly defined for response spectra, we have applied them to the Fourier amplitude spectra of our
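
    As an illustration of the matched-filter merging step described above, the following minimal Python sketch (not the authors' code; the filter order and the use of zero-phase filtering are assumptions) low-passes a deterministic synthetic and high-passes a stochastic synthetic at the 1 Hz matching frequency before summing them in the time domain.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def hybrid_broadband(det, sto, dt, f_match=1.0, order=4):
          # det: low-frequency deterministic synthetic, sto: high-frequency
          # stochastic synthetic; both sampled at dt seconds.
          nyq = 0.5 / dt
          b_lo, a_lo = butter(order, f_match / nyq, btype="low")
          b_hi, a_hi = butter(order, f_match / nyq, btype="high")
          # zero-phase filtering keeps the two components time-aligned
          return filtfilt(b_lo, a_lo, det) + filtfilt(b_hi, a_hi, sto)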

  5. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
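
    For orientation only, here is a minimal sketch of the kind of per-subcarrier LMMSE channel estimate such MMSE-based OFDM receivers build on; the pilot model, unit-power pilots and a known channel correlation matrix are assumptions made here, not details taken from the book.

      import numpy as np

      def lmmse_channel_estimate(y, x, R_hh, noise_var):
          # y: received pilot subcarriers, x: known pilot symbols (unit power),
          # R_hh: channel frequency-correlation matrix, noise_var: noise variance.
          h_ls = y / x                          # least-squares estimate
          n = len(h_ls)
          W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(n))
          return W @ h_ls                       # LMMSE-smoothed estimate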

  6. Functional analysis of variance for association studies.

    Directory of Open Access Journals (Sweden)

    Olga A Vsevolozhskaya

    Full Text Available While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants only account for a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popularly used methods - SKAT and a previously proposed method based on functional linear models (FLM) - especially if the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  7. Change in triple play requirements and evolving broadband networks

    NARCIS (Netherlands)

    Wolfswinkel, R.N. van

    2005-01-01

    In the current broadband market there are many new developments regarding both services and the broadband infrastructures carrying those services. This paper will address what the requirements of services are towards broadband networks and how this will evolve over time. The focus will be on the req

  8. Requirements of triple play services towards broadband access networks

    NARCIS (Netherlands)

    Wolfswinkel, R.N. van

    2005-01-01

    In the current broadband market there are many new developments regarding both services and the broadband infrastructures carrying those services. This paper will address what the requirements of services are towards broadband networks. The focus will be on the requirements of triple play related se

  9. Against a Minimum Voting Age

    OpenAIRE

    Cook, Philip

    2013-01-01

    A minimum voting age is defended as the most effective and least disrespectful means of ensuring all members of an electorate are sufficiently competent to vote. Whilst it may be reasonable to require competency from voters, a minimum voting age should be rejected because its view of competence is unreasonably controversial, it is incapable of defining a clear threshold of sufficiency and an alternative test is available which treats children more respectfully. This alternative is a procedura...

  10. EMERGING BROADBAND WIRELESS TECHNOLOGIES: WIFI AND WIMAX

    Directory of Open Access Journals (Sweden)

    Rama K. Raju

    2012-07-01

    Full Text Available Nowadays there is high demand for broadband mobile services. Traditional high-speed broadband solutions depend on wired technologies, namely the digital subscriber line (DSL). WiFi and WiMAX are useful in providing any type of connectivity, such as fixed, portable or nomadic connectivity, without requiring line of sight (LoS) to the base station. A Mobile Broadband Wireless Network (MBWN) is a flexible and economical solution for remote areas where wired technology and terminal mobility cannot be provided. The IEEE Wi-Fi and Wi-Max/802.16 standards are the most promising technologies for broadband wireless metropolitan area networks (WMANs) and are capable of providing high throughput even over long distances with varied QoS. These technologies enable a wireless network that provides high-speed Internet access to residential, small and medium business customers, as well as Internet access for WiFi hot spots and cellular base stations. They support both point-to-multipoint (P2MP) and multipoint-to-multipoint (mesh) nodes and offer high-speed data, voice and video services to customers. In this paper, we study the issues related to, benefits of, and deployment of these technologies.

  11. A THEORY FOR BROADBAND VARACTOR PARAMETRIC AMPLIFIERS

    Science.gov (United States)

    ...design and synthesis of broadband varactor parametric amplifiers. The circuit considered in this thesis is that of linear variable capacitors embedded... The second and more important inherent property is that, due to the frequency-coupling action of the variable capacitor, the scattering coefficient at the...

  12. Broadband Satellite Technologies and Markets Assessed

    Science.gov (United States)

    Wallett, Thomas M.

    1999-01-01

    The current usage of broadband (data rate greater than 64 kilobits per second (kbps)) for multimedia network computer applications is increasing, and the need for network communications technologies and systems to support this use is also growing. Satellite technology will likely be an important part of the National Information Infrastructure (NII) and the Global Information Infrastructure (GII) in the next decade. Several candidate communications technologies that may be used to carry a portion of the increased data traffic have been reviewed, and estimates of the future demand for satellite capacity have been made. A study was conducted by the NASA Lewis Research Center to assess the satellite addressable markets for broadband applications. This study effort included four specific milestones: (1) assess the changing nature of broadband applications and their usage, (2) assess broadband satellite and terrestrial technologies, (3) estimate the size of the global satellite addressable market from 2000 to 2010, and (4) identify how the impact of future technology developments could increase the utility of satellite-based transport to serve this market.

  13. The GREGOR Broad-Band Imager

    Science.gov (United States)

    von der Lühe, O.; Volkmer, R.; Kentischer, T. J.; Geißler, R.

    2012-11-01

    The design and characteristics of the Broad-Band Imager (BBI) of GREGOR are described. BBI covers the visible spectral range with two cameras simultaneously for a large field and with critical sampling at 390 nm, and it includes a mode for observing the pupil in a Foucault configuration. Samples of first-light observations are shown.

  14. FMCW Radar with Broadband Communication Capability

    NARCIS (Netherlands)

    Barrenechea, P.; Elferink, F.H.; Janssen, J.A.A.J.

    2007-01-01

    The use of amplitude modulation to encode information onto an FMCW radar signal is proposed in this paper. This new technique, that has been named AM-FMCW communicating radar, provides a new channel for broadband communication by reusing the radar frequencies and without introducing any distortion i

  15. BROADBAND TRAVELLING WAVE SEMICONDUCTOR OPTICAL AMPLIFIER

    DEFF Research Database (Denmark)

    2010-01-01

    Broadband travelling wave semiconductor optical amplifier (100, 200, 300, 400, 800) for amplification of light, wherein the amplifier (100, 200, 300, 400, 800) comprises a waveguide region (101, 201, 301, 401, 801) for providing confinement of the light in transverse directions and adapted...

  16. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, which is called the RR interval for short. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find a good performance of atrial fibrillation detection.
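
    A minimal sketch of such a variance-based detector follows; the window length and threshold below are illustrative assumptions, not values taken from the paper.

      import numpy as np

      def af_flags(rr_intervals, window=30, threshold=0.01):
          # rr_intervals: RR intervals in seconds; flag windows whose RR
          # variance exceeds the threshold (in s^2) as possible AF episodes.
          rr = np.asarray(rr_intervals, dtype=float)
          flags = []
          for start in range(0, len(rr) - window + 1, window):
              seg = rr[start:start + window]
              flags.append(np.var(seg) > threshold)
          return np.array(flags)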

  17. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.

  18. 40 CFR 190.11 - Variances for unusual operations.

    Science.gov (United States)

    2010-07-01

    ... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...

  19. Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.

    Science.gov (United States)

    Ashby, Neil; Patla, Bijunath

    2016-04-01

    Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
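
    For reference, a small sketch of the standard (non-overlapping) Hadamard variance computed from fractional-frequency data, which is the quantity whose probability distribution the paper models; the estimator below is the textbook second-difference form, not the matrix-based method of the paper.

      import numpy as np

      def hadamard_variance(y, m=1):
          # y: fractional-frequency samples at base interval tau0;
          # m: averaging factor, so tau = m * tau0.
          y = np.asarray(y, dtype=float)
          n_avg = len(y) // m
          y_avg = y[:n_avg * m].reshape(n_avg, m).mean(axis=1)
          d2 = y_avg[2:] - 2.0 * y_avg[1:-1] + y_avg[:-2]   # second differences
          return np.mean(d2 ** 2) / 6.0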

  20. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

    Full Text Available This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS. Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  1. Management of broadband technology and innovation policy, deployment, and use

    CERN Document Server

    Choudrie, Jyoti

    2013-01-01

    When one considers broadband, the Internet immediately springs to mind. However, broadband is impacting society in many ways. For instance, broadband networks can be used to deliver healthcare or community related services to individuals who don't have computers, have distance as an issue to contend with, or don't use the internet. Broadband can support better management of scarce energy resources with the advent of smart grids, enables improved teleworking capacity and opens up a world of new entertainment possibilities. Yet scholarly examinations of broadband technology have so far examin

  2. Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing.

    Directory of Open Access Journals (Sweden)

    Jordy Thielen

    Full Text Available Brain-Computer Interfaces (BCIs) allow users to control devices and communicate by using brain activity only. BCIs based on broad-band visual stimulation can outperform BCIs using other stimulation paradigms. Visual stimulation with pseudo-random bit-sequences evokes specific Broad-Band Visually Evoked Potentials (BBVEPs) that can be reliably used in BCI for high-speed communication in speller applications. In this study, we report a novel paradigm for a BBVEP-based BCI that utilizes a generative framework to predict responses to broad-band stimulation sequences. In this study we designed a BBVEP-based BCI using modulated Gold codes to mark cells in a visual speller BCI. We defined a linear generative model that decomposes full responses into overlapping single-flash responses. These single-flash responses are used to predict responses to novel stimulation sequences, which in turn serve as templates for classification. The linear generative model explains on average 50% and up to 66% of the variance of responses to both seen and unseen sequences. In an online experiment, 12 participants tested a 6 × 6 matrix speller BCI. On average, an online accuracy of 86% was reached with trial lengths of 3.21 seconds. This corresponds to an Information Transfer Rate of 48 bits per minute (approximately 9 symbols per minute). This study indicates the potential to model and predict responses to broad-band stimulation. These predicted responses are proven to be well-suited as templates for a BBVEP-based BCI, thereby enabling communication and control by brain activity only.
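
    A minimal sketch of the linear generative step described above: the response is modelled as a convolution of the binary stimulus sequence with a short single-flash response, estimated by least squares and then reused to predict the template for an unseen code. The sampling convention, response length and variable names are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def flash_design_matrix(bits, resp_len):
          # bits: 0/1 stimulus sequence (one entry per EEG sample);
          # column k of M shifts the flash onsets by k samples.
          n = len(bits)
          M = np.zeros((n, resp_len))
          for k in range(resp_len):
              M[k:, k] = bits[:n - k]
          return M

      def fit_single_flash(eeg, bits, resp_len=32):
          # Least-squares estimate of the overlapping single-flash response.
          M = flash_design_matrix(bits, resp_len)
          s, *_ = np.linalg.lstsq(M, eeg, rcond=None)
          return s

      def predict_template(bits_new, s):
          # Predicted response to a novel stimulation sequence.
          return flash_design_matrix(bits_new, len(s)) @ s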

  3. Local government broadband policies for areas with limited Internet access

    Directory of Open Access Journals (Sweden)

    Yoshio Arai

    2014-03-01

    Full Text Available Despite their wide diffusion in developed countries, broadband services are still limited in areas where providing them is not profitable for private telecom carriers. To address this, many local governments in Japan have implemented broadband deployment projects subsidized by the national government. In this paper, we discuss local government broadband policies based on survey data collected from municipalities throughout the country. With the support of national promotion policies, broadband services were rapidly introduced to most local municipalities in Japan during the 2000s. Local government deployment policies helped to reduce the number of areas with no broadband access. A business model based on the Indefeasible Right of Use (IRU contract between a private telecom carrier and a local government has been developed in recent years. Even local governments without the technical capacity to operate a broadband business can introduce broadband services into their territory using the IRU business model.

  4. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    Science.gov (United States)

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
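
    To make the decomposition step concrete, here is a small sketch of the ASCA-style baseline the abstract compares against (not the ANOVA-TP method itself): the data matrix is split into per-factor effect matrices by level means, and one effect matrix is then analysed by PCA via the SVD. The data, labels and dimensions are invented for illustration.

      import numpy as np

      def effect_matrix(X, labels):
          # Effect of one design factor: per-level means minus the grand mean.
          X = np.asarray(X, dtype=float)
          grand = X.mean(axis=0)
          E = np.zeros_like(X)
          for lev in np.unique(labels):
              idx = np.asarray(labels) == lev
              E[idx] = X[idx].mean(axis=0) - grand
          return E

      # ASCA-style step: PCA (via SVD) of a single-factor effect matrix
      rng = np.random.default_rng(1)
      X = rng.standard_normal((12, 50))          # 12 samples x 50 variables
      labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
      E = effect_matrix(X, labels)
      U, s, Vt = np.linalg.svd(E, full_matrices=False)
      scores = U * s
      print(scores[:, :2])                       # first two component scores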

  5. Minimum Q Electrically Small Antennas

    DEFF Research Database (Denmark)

    Kim, O. S.

    2012-01-01

    Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions for the stored energies obtained through the vector spherical wave theory, it is shown that a magnetic-coated metal core reduces the internal stored energy of both TM1m and TE1m modes simultaneously, so that a self-resonant antenna with the Q approaching the fundamental minimum is created. Numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits a Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
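
    For context, the Chu lower bound quoted above is easy to evaluate directly. The short sketch below uses the common single-mode form of the bound, Q_Chu = 1/(ka)^3 + 1/(ka), and simply applies the factors reported in the record for illustration; it is not taken from the paper.

      # Chu lower bound for a single TM (or TE) mode and the factors cited above.
      ka = 0.254
      q_chu = 1.0 / ka**3 + 1.0 / ka
      q_antenna = 0.66 * q_chu            # reported antenna Q relative to Chu
      q_min_dual = q_antenna / 1.25       # implied dual-mode minimum Q
      print(f"Q_Chu = {q_chu:.1f}, antenna Q = {q_antenna:.1f}, "
            f"dual-mode minimum = {q_min_dual:.1f}")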

  6. Assessing the Minimum Number of Synchronization Triggers Necessary for Temporal Variance Compensation in Commercial Electroencephalography (EEG) Systems

    Science.gov (United States)

    2012-09-01

    ...electroencephalography (EEG) recording systems. The four systems examined, Emotiv's EPOC, Biosemi's ActiveTwo, Advanced Brain Monitoring's B-Alert X10, and Quasar's prototype, represent different approaches to the problem of recording brain activity in human subjects. We found that the EPOC introduces... drift with the EPOC system is very large: the error between the trigger being logged by the DAQ and when it was sent is on the order of hundreds of...

  7. Generalized Minimum-Variance-Portfolio Weights

    Institute of Scientific and Technical Information of China (English)

    N.L. Kennedy; 朱允民

    2004-01-01

    Portfolio weights optimization has been extensively studied in the literature of portfolio management. The commonly used method is the Lagrange multiplier; however, this approach has some limitations: its fundamental assumption is that the covariance matrix of returns is positive definite, which renders the method not applicable in general. In this paper, the authors aim to use quadratic optimization theory to obtain generalized optimal weights, whereby the positive-definiteness restriction on the covariance matrix becomes merely a special case.
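
    A minimal sketch of the idea (not the authors' derivation): using the Moore-Penrose pseudo-inverse in the usual minimum-variance weight formula keeps it well defined even when the covariance matrix of returns is singular, which is exactly the restriction the paper removes. The example covariance matrix is invented for illustration.

      import numpy as np

      def min_variance_weights(cov):
          # w = S^+ 1 / (1' S^+ 1), with S^+ the pseudo-inverse, so a merely
          # positive semi-definite covariance matrix is handled as well.
          cov = np.asarray(cov, dtype=float)
          ones = np.ones(cov.shape[0])
          s_pinv = np.linalg.pinv(cov)
          w = s_pinv @ ones
          return w / (ones @ w)

      # example: assets 1 and 3 are perfectly correlated, so cov is singular
      cov = np.array([[0.04, 0.02, 0.04],
                      [0.02, 0.09, 0.02],
                      [0.04, 0.02, 0.04]])
      print(min_variance_weights(cov))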

  8. Delivery Time Variance Reduction in the Military Supply Chain

    Science.gov (United States)

    2010-03-01

    Delivery Time Variance Reduction in the Military Supply Chain: thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering, March 2010. AFIT-OR-MS-ENS-10-02. Approved for public release; distribution unlimited.

  9. An Analysis of Variance Framework for Matrix Sampling.

    Science.gov (United States)

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  10. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  11. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  12. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.

  13. Performance of medical students admitted via regular and admission-variance routes.

    Science.gov (United States)

    Simon, H J; Covell, J W

    1975-03-01

    Twenty-three medical students from socioeconomically disadvantaged backgrounds and drawn chiefly from Chicano and black racial minority groups were granted admission variances to the University of California, San Diego, School of Medicine in 1970 and 1971. This group was compared with 21 regularly admitted junior and senior medical students with respect to: specific admissions criteria (Medical College Admission Test scores, grade-point average, and college rating score); scores on Part I of the examinations of the National Board of Medical Examiners (NBME); and performance in at least two of the medicine, surgery, and pediatrics clerkships. The two populations differed markedly on admission. The usual screen would have precluded admission of all but one of the students granted variances. At the end of the second year, average NBME Part I scores again identified two distinct populations, but the average scores of both groups were clearly above the minimum passing level. The groups still differ on analysis of their aggregate performances on the clinical services, but the difference following completion of two of three major clinical clerkships has become the distinction between a "slightly above average" level of performance for the regularly admitted students and an "average" level for students admitted on variances.

  14. Research on variance of subnets in network sampling

    Institute of Scientific and Technical Information of China (English)

    Qi Gao; Xiaoting Li; Feng Pan

    2014-01-01

    In recent research on network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As the results show, snowball sampling yields the largest variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.

  15. Confidence Intervals of Variance Functions in Generalized Linear Model

    Institute of Scientific and Technical Information of China (English)

    Yong Zhou; Dao-ji Li

    2006-01-01

    In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when designs are fixed points and random variables respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed in deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and we show that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed when the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.

  16. Study of bidirectional broadband passive optical network (BPON) using EDFA

    Science.gov (United States)

    Almalaq, Yasser

    Optical line terminals (OLTs) and a number of optical network units (ONUs) are the two main parts of a passive optical network (PON). The OLT is placed at the central office of the service provider, while the ONUs are located near the end subscribers. When compared with a point-to-point design, a PON decreases the amount of fiber used and the central office components required. Broadband PON (BPON), which is one type of PON, can support high-speed voice, data and video services to subscribers' residential homes and small businesses. In this research, by using an erbium doped fiber amplifier (EDFA), the performance of a bidirectional BPON is simulated and tested for both downstream and upstream traffic directions. Ethernet PON (E-PON) and gigabit PON (G-PON) are the two other kinds of passive optical network besides BPON. The most beneficial factor of using BPON is its reduced cost. The cost of maintenance between the central office and the users' side is reasonable because of the use of passive components, such as a splitter, in the BPON architecture. In this work, a bidirectional BPON has been analyzed for both downstream and upstream cases by using a bit error rate (BER) analyzer. BER analyzers report three figures of merit: the maximum Q factor, the minimum bit error rate, and the eye height. In other words, parameters such as the maximum Q factor, minimum bit error rate, and eye height can be analyzed using a BER tester. Passive optical components such as a splitter, an optical circulator, and filters have been used in the modeling and simulations. The 12th edition of the Optiwave simulator has been used to analyze the bidirectional BPON system. The system has been tested under several conditions, such as changing the fiber length, extinction ratio, dispersion, and coding technique. When a long optical fiber above 40 km was used, an EDFA was employed in order to improve the quality of the signal.
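
    The Q-factor-to-BER relation used by such BER analyzers is simple to state; the short sketch below assumes the usual Gaussian-noise eye-diagram model, and the eye-level statistics are illustrative inputs rather than values from this work.

      import math

      def q_factor(mu1, mu0, sigma1, sigma0):
          # Q from the mean and standard deviation of the '1' and '0' eye levels.
          return (mu1 - mu0) / (sigma1 + sigma0)

      def ber_from_q(q):
          # BER = 0.5 * erfc(Q / sqrt(2)) under the Gaussian-noise assumption.
          return 0.5 * math.erfc(q / math.sqrt(2.0))

      q = q_factor(mu1=1.0, mu0=0.1, sigma1=0.08, sigma0=0.05)
      print(q, ber_from_q(q))    # a Q near 7 gives a BER around 1e-12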

  17. An Adaptive Antenna Utilizing Minimum Norm and LCMV Algorithms

    Institute of Scientific and Technical Information of China (English)

    M.E.Ahmed; TANZhanzhong

    2005-01-01

    This paper introduces a new structure based on the minimum norm and Linearly constrained minimum variance (LCMV) algorithms to strongly suppress jammers at the Global Positioning System (GPS) receiver. Minimum norm can blindly place a deep null in the jammer direction, and it does not introduce false nulls as some other algorithms do. Combining it with the LCMV algorithm therefore gives a structure capable of adjusting the weights of the antenna array in real time to respond to signals coming from the desired directions while strongly suppressing jammers coming from other directions. Simulations were performed for fixed and moving jammers. Two jammers are used, one with a power of -100 dBW and the other of -120 dBW. The null depths attained by minimum norm alone are 88.4 dB for the strong jammer and 45 dB for the weak one. The simulations indicate that the new structure can place deeper nulls in the jammer directions: more than 114 dB null depth for both jammers when they come from fixed directions and about 103 dB null depth when they come from moving directions. The new structure not only improves the null depths but can also control them. In addition, it can control the antenna gain in the directions of the useful GPS signals.
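
    A minimal numpy sketch of the LCMV stage described above, with the array geometry, jammer scenario and diagonal loading chosen purely for illustration: the weights w = R^-1 C (C^H R^-1 C)^-1 f minimize output power subject to the linear constraints, here a unit-gain constraint toward the GPS direction.

      import numpy as np

      def steering(n, theta_deg, d=0.5):
          # Uniform linear array response, element spacing d in wavelengths.
          k = np.arange(n)
          return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

      def lcmv_weights(R, C, f):
          # w = R^-1 C (C^H R^-1 C)^-1 f
          Ri_C = np.linalg.solve(R, C)
          return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

      n, snapshots = 8, 2000
      rng = np.random.default_rng(0)
      a_jam = steering(n, 40.0)                       # jammer direction
      x = (np.sqrt(1e4) * a_jam[:, None] * rng.standard_normal(snapshots)
           + (rng.standard_normal((n, snapshots))
              + 1j * rng.standard_normal((n, snapshots))) / np.sqrt(2))
      R = x @ x.conj().T / snapshots + 1e-3 * np.eye(n)   # diagonal loading
      C = steering(n, 0.0)[:, None]                   # unit gain toward GPS
      w = lcmv_weights(R, C, np.array([1.0]))
      print(20 * np.log10(abs(w.conj() @ a_jam)))     # deep null toward 40 deg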

  18. Broadband DOA Estimation Based on Nested Arrays

    Directory of Open Access Journals (Sweden)

    Zhi-bo Shen

    2015-01-01

    Full Text Available Direction of arrival (DOA) estimation is a crucial problem in electronic reconnaissance. A novel broadband DOA estimation method utilizing nested arrays is devised in this paper, which is capable of estimating the frequencies and DOAs of multiple narrowband signals within a broad band, even though they may have different carrier frequencies. The proposed method converts the DOA estimation of multiple signals with different frequencies into spatial frequency estimation. Then, the DOAs and frequencies are pair-matched by sparse recovery. It is possible to significantly increase the degrees of freedom (DOF) with the nested arrays, and the number of sources can exceed the number of sensors in the array. In addition, the method can achieve high estimation precision without a two-dimensional search over the frequency and angle domains. The validity of the proposed method is verified by theoretical analysis and simulation results.
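
    To illustrate the degrees-of-freedom claim, here is a small sketch of a two-level nested array and its difference coarray; the specific element counts are illustrative assumptions, not the configuration used in the paper.

      import numpy as np

      def nested_array(n1, n2):
          # Two-level nested array: a dense ULA of n1 elements at unit spacing
          # plus a sparse ULA of n2 elements at spacing (n1 + 1).
          inner = np.arange(1, n1 + 1)
          outer = (n1 + 1) * np.arange(1, n2 + 1)
          return np.concatenate([inner, outer])

      pos = nested_array(3, 3)                     # 6 physical sensors
      diffs = np.unique([p - q for p in pos for q in pos])
      print(pos)          # sensor positions 1, 2, 3, 4, 8, 12
      print(len(diffs))   # 23 distinct virtual lags from only 6 sensors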

  19. Broadband Visible Light Induced NO Formation

    Science.gov (United States)

    Lubart, Rachel; Eichler, Maor; Friedmann, Harry; Savion, N.; Breitbart, Haim; Ankri, Rinat

    2009-06-01

    Nitric oxide formation is a potential mechanism for photobiomodulation because it is synthesized in cells by nitric oxide synthase (NOS), which contains both flavin and heme, and thus absorbs visible light. The purpose of this work was to study broadband visible light induced NO formation in various cells. Cardiac, endothelial, sperm cells and RAW 264.7 macrophages were illuminated with broadband visible light, 40-130 mW/cm2, 2.4-39 J/cm2, and nitric oxide production was quantified by using the Griess reagent. The results showed that visible light illumination increased NO concentration both in sperm and endothelial cells, but not in cardiac cells. Activation of RAW 264.7 macrophages was very small. It thus appears that NO is involved in photobiomodulation, though different light parameters and illumination protocols are needed to induce NO in various cells.

  20. Inverse Doppler Effects in Broadband Acoustic Metamaterials

    Science.gov (United States)

    Zhai, S. L.; Zhao, X. P.; Liu, S.; Shen, F. L.; Li, L. L.; Luo, C. R.

    2016-08-01

    The Doppler effect refers to the change in frequency of a wave source as a consequence of the relative motion between the source and an observer. Veselago theoretically predicted that materials with negative refractions can induce inverse Doppler effects. With the development of metamaterials, inverse Doppler effects have been extensively investigated. However, the ideal material parameters prescribed by these metamaterial design approaches are complex and also challenging to obtain experimentally. Here, we demonstrated a method of designing and experimentally characterising arbitrary broadband acoustic metamaterials. These omni-directional, double-negative, acoustic metamaterials are constructed with ‘flute-like’ acoustic meta-cluster sets with seven double meta-molecules; these metamaterials also overcome the limitations of broadband negative bulk modulus and mass density to provide a region of negative refraction and inverse Doppler effects. It was also shown that inverse Doppler effects can be detected in a flute, which has been popular for thousands of years in Asia and Europe.

  1. Broad-band acoustic hyperbolic metamaterial

    CERN Document Server

    Shen, Chen; Sui, Ni; Wang, Wenqi; Cummer, Steven A; Jing, Yun

    2015-01-01

    Acoustic metamaterials (AMMs) are engineered materials, made from subwavelength structures, that exhibit useful or unusual constitutive properties. There has been intense research interest in AMMs since their first realization in 2000 by Liu et al. A number of functionalities and applications have been proposed and achieved using AMMs. Hyperbolic metamaterials are one of the most important types of metamaterials due to their extreme anisotropy and numerous possible applications, including negative refraction, backward waves, spatial filtering, and subwavelength imaging. Although the importance of acoustic hyperbolic metamaterials (AHMMs) as a tool for achieving full control of acoustic waves is substantial, the realization of a broad-band and truly hyperbolic AMM has not been reported so far. Here, we demonstrate the design and experimental characterization of a broadband AHMM that operates between 1.0 kHz and 2.5 kHz.

  2. Inverse Doppler Effects in Broadband Acoustic Metamaterials.

    Science.gov (United States)

    Zhai, S L; Zhao, X P; Liu, S; Shen, F L; Li, L L; Luo, C R

    2016-08-31

    The Doppler effect refers to the change in frequency of a wave source as a consequence of the relative motion between the source and an observer. Veselago theoretically predicted that materials with negative refractions can induce inverse Doppler effects. With the development of metamaterials, inverse Doppler effects have been extensively investigated. However, the ideal material parameters prescribed by these metamaterial design approaches are complex and also challenging to obtain experimentally. Here, we demonstrated a method of designing and experimentally characterising arbitrary broadband acoustic metamaterials. These omni-directional, double-negative, acoustic metamaterials are constructed with 'flute-like' acoustic meta-cluster sets with seven double meta-molecules; these metamaterials also overcome the limitations of broadband negative bulk modulus and mass density to provide a region of negative refraction and inverse Doppler effects. It was also shown that inverse Doppler effects can be detected in a flute, which has been popular for thousands of years in Asia and Europe.

  3. Random Lasers for Broadband Directional Emission

    CERN Document Server

    Schönhuber, Sebastian; Hisch, Thomas; Deutsch, Christoph; Krall, Michael; Detz, Hermann; Strasser, Gottfried; Rotter, Stefan; Unterrainer, Karl

    2016-01-01

    Broadband coherent light sources are becoming increasingly important for sensing and spectroscopic applications, especially in the mid-infrared and terahertz (THz) spectral regions, where the unique absorption characteristics of a whole host of molecules are located. The desire to miniaturize such light emitters has recently led to spectacular advances with compact on-chip lasers that cover both of these spectral regions. The long wavelength and the small size of the sources result in a strongly diverging laser beam that is difficult to focus on the target that one aims to perform spectroscopy with. Here, we introduce an unconventional solution to this vexing problem relying on a random laser to produce coherent broadband THz radiation as well as an almost diffraction limited far-field emission profile. Our random lasers do not require any fine-tuning and thus constitute a promising example of practical device applications for random lasing.

  4. Broadband Phase Spectroscopy over Turbulent Air Paths.

    Science.gov (United States)

    Giorgetta, Fabrizio R; Rieker, Gregory B; Baumann, Esther; Swann, William C; Sinclair, Laura C; Kofler, Jon; Coddington, Ian; Newbury, Nathan R

    2015-09-01

    Broadband atmospheric phase spectra are acquired with a phase-sensitive dual-frequency-comb spectrometer by implementing adaptive compensation for the strong decoherence from atmospheric turbulence. The compensation is possible due to the pistonlike behavior of turbulence across a single spatial-mode path combined with the intrinsic frequency stability and high sampling speed associated with dual-comb spectroscopy. The atmospheric phase spectrum is measured across 2 km of air at each of the 70,000 comb teeth spanning 233 cm⁻¹ across hundreds of near-infrared rovibrational resonances of CO₂, CH₄, and H₂O with submilliradian uncertainty, corresponding to a 10⁻¹³ refractive index sensitivity. Trace gas concentrations extracted directly from the phase spectrum reach 0.7 ppm uncertainty, demonstrated here for CO₂. While conventional broadband spectroscopy only measures intensity absorption, this approach enables measurement of the full complex susceptibility even in practical open path sensing.

  5. An ultra-broadband multilayered graphene absorber

    KAUST Repository

    Amin, Muhammad

    2013-01-01

    An ultra-broadband multilayered graphene absorber operating at terahertz (THz) frequencies is proposed. The absorber design makes use of three mechanisms: (i) The graphene layers are asymmetrically patterned to support higher order surface plasmon modes that destructively interfere with the dipolar mode and generate electromagnetically induced absorption. (ii) The patterned graphene layers, biased at different gate voltages and backed up with dielectric substrates, are stacked on top of each other. The resulting absorber is polarization dependent but has an ultra-broad band of operation. (iii) Graphene's damping factor is increased by lowering its electron mobility to 1000 cm²/Vs. Indeed, numerical experiments demonstrate that with only three layers, a bandwidth of 90% absorption can be extended up to 7 THz, which is drastically larger than the few THz of bandwidth that can be achieved with existing metallic/graphene absorbers. © 2013 Optical Society of America.

  6. A 12 GHz broadband latching circulator

    Science.gov (United States)

    Katoh, Y.; Konishi, H.; Sakamoto, K.

    The two kinds of latching circulators, external return path and internal return path, are defined, noting the advantages (faster switching speed, lower switching energy, less complicated fabrication) offered by the internal configuration. It is noted, however, that this kind of circulator is difficult to make broadband because the return paths do not seem to act as part of the ferrite junction. The development of a 12-GHz broadband, internal return path circulator with impedance matching transformer and in-phase adjustment screws designed using eigenvalue measurement is described. In describing the operating characteristics, it is noted that more than 25 dB isolation over 11 GHz to 13.5 GHz and 0.25 dB insertion loss is obtained.

  7. Utility functions predict variance and skewness risk preferences in monkeys.

    Science.gov (United States)

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
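
    A short sketch of the moment bookkeeping behind such comparisons: expected value, variance and skewness of discrete gambles, plus expected utility under an arbitrary utility function. The example gambles and the square-root utility are invented for illustration, not the stimuli or fitted utilities of the study.

      import numpy as np

      def moments(outcomes, probs):
          outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
          ev = np.sum(probs * outcomes)
          var = np.sum(probs * (outcomes - ev) ** 2)
          skew = np.sum(probs * (outcomes - ev) ** 3) / var ** 1.5
          return ev, var, skew

      def expected_utility(outcomes, probs, u):
          return float(np.sum(np.asarray(probs, float) * u(np.asarray(outcomes, float))))

      # two gambles with equal EV but different variance, and a concave utility
      low_var = ([0.4, 0.6], [0.5, 0.5])
      high_var = ([0.1, 0.9], [0.5, 0.5])
      u = np.sqrt
      print(moments(*low_var), expected_utility(*low_var, u))
      print(moments(*high_var), expected_utility(*high_var, u))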

  8. Broadband S-band class E HPA

    NARCIS (Netherlands)

    Wanum, van M.; Dijk, van R.; Hek, de A.P.; Vliet, van F.E.

    2009-01-01

    A broadband class E High Power Amplifier (HPA) is presented. This HPA is designed to operate at S-band (2.75 to 3.75 GHz). A power added efficiency of 50% is obtained for the two stage amplifier with an output power of 35.5 dBm on a chip area of 5.25 × 2.8 mm².

  9. Broadband S-band Class E HPA

    NARCIS (Netherlands)

    Wanum, M. van; Dijk, R. van; Hek, A.P. de; Vliet, F.E. van

    2009-01-01

    A broadband class E High Power Amplifier (HPA) is presented. This HPA is designed to operate at S-band (2.75 to 3.75 GHz). A power added efficiency of 50% is obtained for the two stage amplifier with an output power of 35.5 dBm on a chip area of 5.25 × 2.8 mm².

  10. Broadband luminescence in liquid-solid transition

    CERN Document Server

    Achilov, M F; Trunilina, O V

    2002-01-01

    The behavior of broadband luminescence (BBL) intensity during the liquid-solid transition in polyethyleneglycol-600 has been established. Oscillations of BBL intensity observed in the liquid-polycrystal transition are not observed in the liquid-amorphous solid transition. It is shown that applying the theory of electron state tails to the interpretation of BBL spectral properties in liquids requires restrictions. BBL spectroscopy may be applied to optimize the preparation of polymers with specified properties. (author)

  11. Enhanced broadband optical transmission in metallized woodpiles

    DEFF Research Database (Denmark)

    Malureanu, Radu; Alabastri, A.; Cheng, W.;

    2011-01-01

    We present an optimized isotropic metal deposition technique used for covering three-dimensional polymer structures with a 50 nm smooth silver layer. The technology allows fast and isotropic coating of complex 3D dielectric structures with thin silver layers. Transmission measurements of 3D...... metallized woodpiles reveal a new phenomenon of enhanced optical transmission in broadband range (up to 300 nm) in the near IR....

  12. Metamaterial Coatings for Broadband Asymmetric Mirrors

    CERN Document Server

    Chen, A; Hasegawa, K; Podolskiy, V A; Chen, Aiqing; Deutsch, Miriam; Hasegawa, Keisuke; Podolskiy, Viktor A.

    2006-01-01

    We report on design and fabrication of nano-composite metal-dielectric thin film coatings with high reflectance asymmetries. Applying basic dispersion engineering principles to model a broadband and large reflectance asymmetry, we obtain a model dielectric function for the metamaterial film, closely resembling the effective permittivity of disordered metal-dielectric nano-composites. Coatings realized using disordered nanocrystalline silver films deposited on glass substrates confirm the theoretical predictions, exhibiting symmetric transmittance, large reflectance asymmetries and a unique flat reflectance asymmetry.

  13. Broadband Spectroscopy of Nanoporous-Gold Promoter

    Directory of Open Access Journals (Sweden)

    S. K. Nakatani

    2014-02-01

    Full Text Available The efficiency of UV photocatalysis on TiO2 particles was increased by mixing TiO2 particles with nanoporous gold (NPG) with pore diameters of 10–40 nm. This means that NPG acts as a promoter in the photocatalytic reaction of TiO2. Broadband spectroscopic results for the NPG membrane, from millimeter waves to the ultraviolet, are discussed to estimate the plasmonic effect on the catalysis.

  14. Diagonalizing sensing matrix of broadband RSE

    Science.gov (United States)

    Sato, Shuichi; Kokeyama, Keiko; Kawazoe, Fumiko; Somiya, Kentaro; Kawamura, Seiji

    2006-03-01

    For a broadband-operated RSE interferometer, a simple and smart length sensing and control scheme was newly proposed. The sensing matrix could be diagonal, owing to a simple allocation of two RF modulations and to a macroscopic displacement of cavity mirrors, which cause a detuning of the RF modulation sidebands. In this article, the idea of the sensing scheme and an optimization of the relevant parameters will be described.

  15. Energy efficient evolution of mobile broadband networks

    Energy Technology Data Exchange (ETDEWEB)

    Micallef, G.

    2013-04-15

    ...existing macro base station sites together with the deployment of outdoor or indoor small cells (heterogeneous network) provide the best compromise between performance and power consumption. Focusing on one of the case studies, it is noted that the upgrade of both HSPA and LTE network layers results in the power consumption of the network increasing by a factor of 4. When coupled with the growth in capacity introduced by the various upgrades (x50), the efficiency of the network is still greatly improved. Over the evolution period, the stated increase in power consumption does not consider improvement in base station equipment. By considering a number of different equipment versions, the evolution study is further extended to also include the impact of replacing old equipment. Results show that an aggressive replacement strategy and the upgrade of sites to remote radio head can restrain the increase in power consumption of the network to just 17%. In addition to upgrading equipment, mobile network operators can further reduce power consumption by enabling a number of power saving features. These features often exploit redundancies within the network and/or the variation in traffic over a daily period. An example of such a feature is sleep mode, which allows for base station sites to be systematically powered down during hours with low network traffic. While dependent on the traffic profile, within an urban area sleep mode can reduce the daily energy consumption of the network by around 20%. In addition to the different variants of sleep mode, the potential savings of other features are also described. Selecting a power efficient network capacity evolution path, replacing old and less efficient equipment, and enabling power saving features, can all considerably reduce the power consumption of future mobile broadband networks. Studies and recommendations presented within this thesis demonstrate that it is realistic for mobile network operators to boost network capacity by a

  16. Minimum landing size for bream (Abramis brama)

    NARCIS (Netherlands)

    Hal, van R.; Miller, D.C.M.

    2016-01-01

    To support a decision on a minimum landing size for bream, primarily for the IJsselmeer and Markermeer, the Ministry of Economic Affairs asked IMARES to provide an overview of landing sizes for bream in other countries and, where possible, the rationale behind the...

  17. Minimum Thermal Conductivity of Superlattices

    Energy Technology Data Exchange (ETDEWEB)

    Simkin, M. V.; Mahan, G. D.

    2000-01-31

    The phonon thermal conductivity of a multilayer is calculated for transport perpendicular to the layers. There is a crossover from particle transport for thick layers to wave transport for thin layers. The calculations show that the conductivity has a minimum value for a layer thickness somewhat smaller than the mean free path of the phonons. (c) 2000 The American Physical Society.

  18. Coupling between minimum scattering antennas

    DEFF Research Database (Denmark)

    Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans

    1974-01-01

    Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...

  19. Design and fabrication of broadband rugate filter

    Institute of Scientific and Technical Information of China (English)

    Zhang Jun-Chao; Fang Ming; Shao Yu-Chuan; Jin Yun-Xia; He Hong-Bo

    2012-01-01

    The design and the deposition of a rugate filter for broadband applications are discussed. The bandwidth is extended by increasing the rugate period continuously with depth. The width and the smoothness of the reflection band with the distribution of the periods are investigated. The improvement of the steepness of the stopband edges and the suppression of the side lobes in the transmission zone are realized by adding two apodized rugate structures with fixed periods at the external broadband rugate filter interfaces. The rapidly alternating deposition technology is used to fabricate a rugate filter sample. The measured transmission spectrum, with a reflection bandwidth of approximately 505 nm, is close to that of the designed broadband rugate filter except for a transmittance peak in the stopband. Based on the analysis of the cross-sectional scanning electron microscopic image of the sample, it is found that the transmission peak is most likely to be caused by the instability of the deposition rate.

  20. Broadband electromagnetic analysis of compacted kaolin

    Science.gov (United States)

    Bore, Thierry; Wagner, Norman; Cai, Caifang; Scheuermann, Alexander

    2017-01-01

    The mechanical compaction of soil influences not only the mechanical strength and compressibility but also the hydraulic behavior in terms of hydraulic conductivity and soil suction. At the same time, electric and dielectric parameters are increasingly used to characterize soil and to relate it to mechanical and hydraulic parameters. In the present study, electromagnetic soil properties and suction were measured under the defined conditions of standardized compaction tests. The impact of external mechanical stress on the soil suction and broadband electromagnetic properties of nearly pure kaolinite was analyzed. An experimental procedure was developed and validated to simultaneously determine mechanical, hydraulic and broadband (1 MHz-3 GHz) electromagnetic properties of the porous material. The frequency-dependent electromagnetic properties were modeled with a classical mixture equation (advanced Lichtenecker and Rother model, ALRM), and a hydraulic-mechanical-electromagnetic coupling approach was introduced considering water saturation, soil structure (bulk density, porosity), soil suction (pore size distribution, water sorption) as well as the electrical conductivity of the aqueous pore solution. Moreover, the relaxation behavior was analyzed with a generalized fractional relaxation model comprising a high-frequency water process and two interface processes, extended with an apparent direct-current conductivity contribution. The different modeling approaches provide a satisfactory agreement with the experimental data for the real part. These results show the potential of broadband electromagnetic approaches for quantitative estimation of the hydraulic state of the soil during densification.
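
    The calibrated ALRM parameters are not reproduced in the abstract, so the following sketch only illustrates the generic Lichtenecker-Rother mixing rule, eps_eff = (sum_i theta_i * eps_i^alpha)^(1/alpha); the three-phase split, the permittivity values and the exponent alpha = 0.5 are placeholder assumptions, not values from the study.

```python
import numpy as np

def lichtenecker_rother(fractions, permittivities, alpha=0.5):
    """Effective permittivity of a mixture via the Lichtenecker-Rother rule.

    fractions      : volume fractions of the phases (should sum to ~1)
    permittivities : complex relative permittivities of the phases
    alpha          : structure exponent in [-1, 1] (0.5 is a common choice)
    """
    fractions = np.asarray(fractions, dtype=float)
    eps = np.asarray(permittivities, dtype=complex)
    return np.sum(fractions * eps ** alpha) ** (1.0 / alpha)

# Three-phase soil example (solid grains, pore water, air) -- illustrative values only
eps_eff = lichtenecker_rother([0.55, 0.30, 0.15], [5.0, 80.0 - 10j, 1.0])
print(eps_eff)
```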

  1. Pacific Array (Transportable Broadband Ocean Floor Array)

    Science.gov (United States)

    Kawakatsu, Hitoshi; Ekstrom, Goran; Evans, Rob; Forsyth, Don; Gaherty, Jim; Kennett, Brian; Montagner, Jean-Paul; Utada, Hisashi

    2016-04-01

    Based on recent developments in broadband ocean-bottom seismometry, we propose a next-generation large-scale array experiment in the ocean. Recent advances in ocean-bottom broadband seismometry, together with advances in seismic analysis methodology, have enabled us to resolve the regional 1-D structure of the entire lithosphere/asthenosphere system, including seismic anisotropy (azimuthal, and hopefully radial), with deployments of ~15 broadband ocean bottom seismometers (BBOBSs). Having ~15 BBOBSs as an array unit for a 2-year deployment, and repeating such deployments in a leap-frog way or concurrently (an array of arrays) for a decade or so, would enable us to cover a large portion of the Pacific basin. Such efforts, not only by giving regional constraints on the 1-D structure beneath the Pacific Ocean, but also by sharing waveform data for global-scale waveform tomography, would drastically increase our knowledge of how plate tectonics works on this planet, as well as how it has worked for the past 150 million years. International collaboration is essential: if three countries/institutions participate in this endeavor together, Pacific Array may be accomplished within five or so years.

  2. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance

    NARCIS (Netherlands)

    Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.

    2009-01-01

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sam

  3. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  4. Filtered kriging for spatial data with heterogeneous measurement error variances.

    Science.gov (United States)

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
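
    The exact estimator is not given in the abstract; the sketch below shows one plausible reading of the bias adjustment, assuming Z_i = Y_i + e_i with known error variances so that E[(Z_i - Z_j)^2] = 2*gamma_Y(h) + sigma_i^2 + sigma_j^2. The function name and binning scheme are hypothetical.

```python
import numpy as np

def debiased_semivariogram(coords, z, err_var, bins):
    """Bias-adjusted empirical semivariogram for data with known,
    site-specific measurement error variances.

    Assumes Z_i = Y_i + e_i with Var(e_i) = err_var[i], so that
    E[(Z_i - Z_j)^2] = 2*gamma_Y(h) + err_var[i] + err_var[j].
    """
    coords, z, err_var = map(np.asarray, (coords, z, err_var))
    n = len(z)
    gamma = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])
            k = np.searchsorted(bins, h) - 1
            if 0 <= k < len(gamma):
                # debiased squared increment for this pair
                gamma[k] += (z[i] - z[j]) ** 2 - err_var[i] - err_var[j]
                counts[k] += 1
    counts = np.maximum(counts, 1)  # avoid division by zero in empty bins
    return gamma / (2 * counts)
```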

  5. Meta-analysis of ratios of sample variances.

    Science.gov (United States)

    Prendergast, Luke A; Staudte, Robert G

    2016-05-20

    When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
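
    As a generic illustration (not the authors' estimator), a fixed-effect meta-analysis of log variance ratios can be sketched as follows, using the large-sample approximation Var(log s^2) ~ 2/(n - 1) under normality; all numbers in the example call are made up.

```python
import numpy as np

def meta_log_variance_ratio(s1_sq, n1, s2_sq, n2):
    """Inverse-variance weighted meta-estimate of the log ratio of
    sample variances across k studies (large-sample sketch).

    Each study's log ratio has approximate variance
    2/(n1 - 1) + 2/(n2 - 1) under normality.
    """
    s1_sq, n1, s2_sq, n2 = map(np.asarray, (s1_sq, n1, s2_sq, n2))
    log_ratio = np.log(s1_sq / s2_sq)
    var_lr = 2.0 / (n1 - 1) + 2.0 / (n2 - 1)
    w = 1.0 / var_lr
    est = np.sum(w * log_ratio) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se   # pooled log variance ratio and its standard error

est, se = meta_log_variance_ratio([1.2, 0.9, 1.5], [30, 45, 25],
                                  [1.0, 1.1, 1.3], [28, 40, 30])
print("pooled variance ratio:", np.exp(est))
print("95% CI (log scale):", est - 1.96 * se, est + 1.96 * se)
```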

  6. Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model

    OpenAIRE

    Leunglung Chan; Eckhard Platen

    2015-01-01

    This paper studies volatility derivatives such as variance swaps, volatility swaps and options on variance in the modified constant elasticity of variance model using the benchmark approach. Analytical expressions of the pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.

  7. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  8. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  9. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A

    2007-01-01

    OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent, twin studies have been used in bivariate or multivariate analysis to elucidate genetic factors common to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect

  10. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  11. Quantum mechanics the theoretical minimum

    CERN Document Server

    Susskind, Leonard

    2014-01-01

    From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanicsQuantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.

  12. Incoherent broadband optical pulse generation using an optical gate

    Institute of Scientific and Technical Information of China (English)

    Biao Chen; Qiong Jiang

    2008-01-01

    In two-dimensional (2D) time-spreading/wavelength-hopping optical code division multiple access (OCDMA) systems, employing less coherent broadband optical pulse sources allows lower electrical operating rate and better system performance. An optical gate based scheme for generating weakly coherent (approximately incoherent) broadband optical pulses was proposed and experimentally demonstrated. In this scheme, the terahertz optical asymmetric demultiplexer, together with a coherent narrowband control pulse source, turns an incoherent broadband continuous-wave (CW) light source into the required pulse source.

  13. Broadband tonpilz underwater acoustic transducers based on multimode optimization

    OpenAIRE

    Yao, Qingshan; Jensen, Leif Bjørnø

    1997-01-01

    Head flapping has often been considered to be deleterious for obtaining a tonpilz transducer with broadband, high power performance. In the present work, broadband, high power tonpilz transducers have been designed using the finite element (FE) method. Optimized vibrational modes including the flapping mode of the head are effectively used to achieve the broadband performance. The behavior of the transducer in its longitudinal piston mode and in its flapping mode is analysed for in-air and in...

  14. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    Science.gov (United States)

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.
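
    The competing formulations are not reproduced in the abstract; under the standard BLUP identity Cov(u, u_hat) = Var(u_hat), so that PEV = Var(u) - Var(u_hat) = Var(u - u_hat), three sample-based plug-ins can be sketched as below. The function and its inputs are hypothetical, not the paper's algorithms; the study's point is that such plug-ins converge at different rates.

```python
import numpy as np

def mc_pev_estimates(true_bv, est_bv, sigma2_u):
    """Three Monte Carlo plug-in approximations of the prediction error
    variance from paired samples of true (u) and estimated (u_hat)
    breeding values, exploiting Cov(u, u_hat) = Var(u_hat) under BLUP."""
    u = np.asarray(true_bv, dtype=float)
    u_hat = np.asarray(est_bv, dtype=float)
    pev_diff = np.var(u - u_hat, ddof=1)          # Var(u - u_hat)
    pev_var = sigma2_u - np.var(u_hat, ddof=1)     # Var(u) - Var(u_hat)
    pev_cov = sigma2_u - np.cov(u, u_hat)[0, 1]    # Var(u) - Cov(u, u_hat)
    return pev_diff, pev_var, pev_cov
```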

  15. Minimum thickness anterior porcelain restorations.

    Science.gov (United States)

    Radz, Gary M

    2011-04-01

    Porcelain laminate veneers (PLVs) provide the dentist and the patient with an opportunity to enhance the patient's smile in a minimally to virtually noninvasive manner. Today's PLV demonstrates excellent clinical performance and as materials and techniques have evolved, the PLV has become one of the most predictable, most esthetic, and least invasive modalities of treatment. This article explores the latest porcelain materials and their use in minimum thickness restoration.

  16. Fingerprinting with Minimum Distance Decoding

    CERN Document Server

    Lin, Shih-Chun; Gamal, Hesham El

    2007-01-01

    This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate 1 - H(0.25) ≈ 0.188 is achievable using an ...
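
    As a quick numerical check of the quoted rate for t = 2, a minimal Python snippet (the helper name is ours, not from the paper):

```python
from math import log2

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Achievable fingerprinting rate quoted above for t = 2 under the marking assumption
print(1 - binary_entropy(0.25))   # ~0.18872
```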

  17. Minimum feature size preserving decompositions

    CERN Document Server

    Aloupis, Greg; Demaine, Martin L; Dujmovic, Vida; Iacono, John

    2009-01-01

    The minimum feature size of a crossing-free straight line drawing is the minimum distance between a vertex and a non-incident edge. This quantity measures the resolution needed to display a figure or the tool size needed to mill the figure. The spread is the ratio of the diameter to the minimum feature size. While many algorithms (particularly in meshing) depend on the spread of the input, none explicitly consider finding a mesh whose spread is similar to the input. When a polygon is partitioned into smaller regions, such as triangles or quadrangles, the degradation is the ratio of original to final spread (the final spread is always greater). Here we present an algorithm to quadrangulate a simple n-gon, while achieving constant degradation. Note that although all faces have a quadrangular shape, the number of edges bounding each face may be larger. This method uses Theta(n) Steiner points and produces Theta(n) quadrangles. In fact to obtain constant degradation, Omega(n) Steiner points are required by any al...

  18. Broadband Mid-Infrared Comb-Resolved Fourier Transform Spectroscopy

    Science.gov (United States)

    Lee, Kevin; Mills, Andrew; Mohr, Christian; Jiang, Jie; Fermann, Martin; Maslowski, Piotr

    2014-06-01

    We report on a comb-resolved, broadband, direct-comb spectroscopy system in the mid-IR and its application to the detection of trace gases and molecular line shape analysis. By coupling an optical parametric oscillator (OPO), a 100 m multipass cell, and a high-resolution Fourier transform spectrometer (FTS), sensitive, comb-resolved broadband spectroscopy of dilute gases is possible. The OPO has radiation output at 3.1-3.7 and 4.5-5.5 μm. The laser repetition rate is scanned to arbitrary values with 1 Hz accuracy around 417 MHz. The comb-resolved spectrum is produced with an absolute frequency axis depending only on the RF reference (in this case a GPS disciplined oscillator), stable to 1 part in 10^9. The minimum detectable absorption is 1.6×10^-6 wn Hz^-1/2 (wn: wavenumbers, cm^-1). The operating range of the experimental setup enables access to strong fundamental transitions of numerous molecular species for applications based on trace gas detection such as environmental monitoring, industrial gas calibration or medical application of human breath analysis. In addition to these capabilities, we show the application for careful line shape analysis of argon-broadened CO band spectra around 4.7 μm. Fits of the obtained spectra clearly illustrate the discrepancy between the measured spectra and the Voigt profile (VP), indicating the need to include effects such as Dicke narrowing and the speed-dependence of the collisional width and shift in the line shape model, as was shown in previous cw-laser studies. In contrast to cw-laser based experiments, in this case the entire spectrum (~250 wn) covering the whole P and R branches can be measured in 16 s with 417 MHz resolution, decreasing the acquisition time by orders of magnitude. The parallel acquisition allows collection of multiple lines simultaneously, removing the correlation of possible temperature and pressure drifts. While cw-systems are capable of measuring spectra with higher precision, this demonstration opens the door for fast

  19. Detecting Pulsars with Interstellar Scintillation in Variance Images

    CERN Document Server

    Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E

    2016-01-01

    Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...

  20. Variance estimation in neutron coincidence counting using the bootstrap method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)

    2015-09-11

    In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is constructed. The aim of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
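
    The sketch below is a generic bootstrap variance estimate, not the NMC-specific implementation described in the paper; the statistic, sample size and Poisson toy data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_variance(samples, statistic=np.mean, n_boot=2000):
    """Bootstrap estimate of the variance of a statistic.

    Resamples the data with replacement, recomputes the statistic for
    each pseudo-sample, and returns the variance of the resulting
    bootstrap distribution.
    """
    samples = np.asarray(samples)
    stats = np.array([
        statistic(rng.choice(samples, size=len(samples), replace=True))
        for _ in range(n_boot)
    ])
    return stats.var(ddof=1)

# Illustrative use on synthetic count data (not NMC data)
counts = rng.poisson(lam=12.0, size=500)
print(bootstrap_variance(counts, statistic=np.mean))
```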

  1. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
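
    The corrector-problem-specific techniques surveyed in the paper are not detailed here; as a minimal illustration of the general idea, the sketch below compares plain Monte Carlo with antithetic variates on a toy functional (the integrand and all names are assumptions, not the paper's estimators).

```python
import numpy as np

rng = np.random.default_rng(1)

def plain_mc(f, n):
    """Plain Monte Carlo estimate of E[f(G)], G ~ N(0, 1), with its variance."""
    x = f(rng.standard_normal(n))
    return x.mean(), x.var(ddof=1) / n

def antithetic_mc(f, n):
    """Antithetic-variates estimate: pair each draw G with -G."""
    g = rng.standard_normal(n // 2)
    x = 0.5 * (f(g) + f(-g))          # each pair yields one lower-variance sample
    return x.mean(), x.var(ddof=1) / len(x)

f = lambda g: np.exp(0.5 * g)          # toy "effective coefficient" functional
print(plain_mc(f, 100_000))
print(antithetic_mc(f, 100_000))       # same mean, visibly smaller estimator variance
```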

  2. Wavelet Variance Analysis of EEG Based on Window Function

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-zhuang; YOU Rong-yi

    2014-01-01

    A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of the wavelet variance with increasing time-window width differs between epileptic and normal EEGs. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.
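
    The paper's exact wavelet, decomposition depth and window length are not stated in the abstract; a minimal sketch, assuming a Daubechies-4 decomposition and defining WSE as the Shannon entropy of the relative subband energies, might look like this.

```python
import numpy as np
import pywt

def wavelet_features(segment, wavelet="db4", level=5):
    """Per-subband wavelet variances and wavelet subband entropy (WSE)
    for one windowed EEG segment."""
    coeffs = pywt.wavedec(np.asarray(segment, dtype=float), wavelet, level=level)
    variances = np.array([c.var() for c in coeffs])
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    p = energy / energy.sum()
    p = p[p > 0]
    wse = -np.sum(p * np.log(p))        # Shannon entropy of relative subband energies
    return variances, wse

# Sliding-window use on a (placeholder) EEG trace
eeg = np.random.randn(4096)
features = [wavelet_features(eeg[s:s + 512]) for s in range(0, len(eeg) - 512 + 1, 256)]
```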

  3. Multiperiod mean-variance efficient portfolios with endogenous liabilities

    OpenAIRE

    Markus LEIPPOLD; Trojani, Fabio; Vanini, Paolo

    2011-01-01

    We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...

  4. Testing for Causality in Variance Using Multivariate GARCH Models

    OpenAIRE

    Christian M. Hafner; Herwartz, Helmut

    2008-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...

  5. Testing for causality in variance using multivariate GARCH models

    OpenAIRE

    Hafner, Christian; Herwartz, H.

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...

  6. Dimension free and infinite variance tail estimates on Poisson space

    OpenAIRE

    Breton, J. C.; Houdré, C.; Privault, N.

    2004-01-01

    Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including Lévy's stochastic area and the square norm of Brownian paths. In the infinite vari...

  7. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  8. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
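
    As a baseline only (the paper's point is that such naive fits are biased by errors in the estimated means), the two working models sigma^2 = c*mu^2 and sigma^2 = a + b*mu + c*mu^2 can be fitted to per-gene sample means and variances as follows; names and data are illustrative, not the authors' estimators.

```python
import numpy as np

def fit_variance_mean(xbar, s2, model="quadratic"):
    """Naive least-squares fit of a variance-mean relationship to
    per-gene sample means and sample variances."""
    xbar, s2 = np.asarray(xbar), np.asarray(s2)
    if model == "constant_cv":           # s2 ~ c * mean^2
        X = xbar[:, None] ** 2
    else:                                # s2 ~ a + b*mean + c*mean^2
        X = np.column_stack([np.ones_like(xbar), xbar, xbar ** 2])
    coef, *_ = np.linalg.lstsq(X, s2, rcond=None)
    return coef

# Example: per-gene means and variances from a small (placeholder) expression matrix
expr = np.random.gamma(shape=2.0, scale=1.0, size=(1000, 3))
print(fit_variance_mean(expr.mean(axis=1), expr.var(axis=1, ddof=1), model="constant_cv"))
```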

  9. Investigation of broadband digital predistortion for broadband radio over fiber transmission systems

    Science.gov (United States)

    Zhang, Xiupu; Liu, Taijun; Shen, Dongya

    2016-12-01

    In future broadband cloud radio access networks (C-RAN), front-haul transmission systems play a significant role in the performance and cost of C-RAN. Broadband and high linearity radio over fiber (RoF) transmission systems are considered a promising solution for the front-haul. Digital linearization is one possible solution for RoF front-haul. In this paper, we investigate RF domain digital predistortion (DPD) linearization for broadband RoF front-haul. The implemented DPD is first investigated in 2.4 GHz WiFi over fiber transmission systems at 36 Mb/s, and more than 8-dB and 5.6-dB improvements of error vector magnitude (EVM) are achieved in back to back (BTB) and after 10 km single mode fiber (SMF) transmission. Further, both WiFi and ultra-wideband (UWB) wireless signals are transmitted together, in which case the DPD has a linearization bandwidth of 2.4 GHz. It is shown that the implemented DPD leads to EVM improvements of 4.5-dB (BTB) and 3.1-dB (10 km SMF) for the WiFi signal, and 4.6-dB (BTB) and 4-dB (10 km SMF) for the broadband UWB signal.

  10. A novel broadband waterborne acoustic absorber

    Science.gov (United States)

    Wang, Changxian; Wen, Weibin; Huang, Yixing; Chen, Mingji; Lei, Hongshuai; Fang, Daining

    2016-07-01

    In this paper, we extend the ray tracing theory to the polar coordinate system and, for the first time, propose the Snell-Descartes law in polar coordinates. Based on these theories, a novel broadband waterborne acoustic absorber device is proposed. The device is designed with material properties graded along the radius, which warps the incident acoustic rays. The echo reduction of this device was investigated by finite element analysis, and the numerical results show that the reflectivity of the acoustic wave for the new device is lower than that of homogeneous and Alberich layers over almost the whole 0-30 kHz frequency range at the same loss factor.

  11. Source of broadband Jovian Kilometric radiation

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.; Leblanc, Y.

    1987-02-01

    Broadband Jovian Kilometric radiation was observed by Voyagers 1 and 2 to be beamed away from the zenomagnetic equatorial plane. Two theories were proposed for the equatorial shadow zone. One suggested that Io plasma torus forms an obstacle to radiation produced on auroral field lines. The other theory proposed that the source is located on the outer flanks of the torus, the beaming being inherent to the emission mechanism. Results are presented which indicate that the latter is consistent with the observations and it would appear that the emission is produced by linear mode conversion of electrostatic upper hybrid to electromagnetic waves in plasma density gradients.

  12. Optical broadband monitoring of thin film growth

    Institute of Scientific and Technical Information of China (English)

    H. Ehlers; T. Groß; M. Lappschies; D. Ristau

    2005-01-01

    This contribution is focused on applications of spectroscopic methods for the precise control of deposition processes. In this context, the present study gives a review on selected combinations of conventional and ion deposition techniques with different broadband online spectrophotometric systems. Besides two systems operating in the VIS- and NIR-spectral range in combination with ion processes, also a monochromator system developed for conventional deposition processes in the DUV/VUV-spectral range will be discussed. The considerations will be concluded by a comparison of the major advantages of the specific combinations of processes with online monitoring concepts and by a brief outlook concerning future challenges.

  13. Fibre laser based broadband THz imaging systems

    DEFF Research Database (Denmark)

    Eichhorn, Finn

    State-of-the-art optical fiber technology can contribute towards complex multi-element broadband terahertz imaging systems. Classical table-top terahertz imaging systems are generally limited to a single emitter/receiver pair, which constrains their imaging capability to tedious raster scanning...... imaging techniques. This thesis exhibits that fiber technology can improve the robustness and the flexibility of terahertz imaging systems both by the use of fiber-optic light sources and the employment of optical fibers as light distribution medium. The main focus is placed on multi-element terahertz...

  14. Parallel local search for solving Constraint Problems on the Cell Broadband Engine (Preliminary Results)

    CERN Document Server

    Abreu, Salvator; Codognet, Philippe

    2009-01-01

    We explore the use of the Cell Broadband Engine (Cell/BE for short) for combinatorial optimization applications: we present a parallel version of a constraint-based local search algorithm that has been implemented on a multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16 SPUs per blade). This algorithm was chosen because it fits very well the Cell/BE architecture and requires neither shared memory nor communication between processors, while retaining a compact memory footprint. We study the performance on several large optimization benchmarks and show that this achieves mostly linear time speedups, even sometimes super-linear. This is possible because the parallel implementation might explore simultaneously different parts of the search space and therefore converge faster towards the best sub-space and thus towards a solution. Besides getting speedups, the resulting times exhibit a much smaller variance, which benefits applications where a timely reply is critical.

  15. Parallel local search for solving Constraint Problems on the Cell Broadband Engine (Preliminary Results)

    Directory of Open Access Journals (Sweden)

    Salvator Abreu

    2009-10-01

    Full Text Available We explore the use of the Cell Broadband Engine (Cell/BE for short) for combinatorial optimization applications: we present a parallel version of a constraint-based local search algorithm that has been implemented on a multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16 SPUs per blade). This algorithm was chosen because it fits very well the Cell/BE architecture and requires neither shared memory nor communication between processors, while retaining a compact memory footprint. We study the performance on several large optimization benchmarks and show that this achieves mostly linear time speedups, even sometimes super-linear. This is possible because the parallel implementation might explore simultaneously different parts of the search space and therefore converge faster towards the best sub-space and thus towards a solution. Besides getting speedups, the resulting times exhibit a much smaller variance, which benefits applications where a timely reply is critical.

  16. Ceramic veneers with minimum preparation.

    Science.gov (United States)

    da Cunha, Leonardo Fernandes; Reis, Rachelle; Santana, Lino; Romanini, Jose Carlos; Carvalho, Ricardo Marins; Furuse, Adilson Yoshio

    2013-10-01

    The aim of this article is to describe the possibility of improving dental esthetics with low-thickness glass ceramics without major tooth preparation for patients with small to moderate anterior dental wear and little discoloration. For this purpose, carefully defined treatment planning and good communication between the clinician and the dental technician helped to maximize enamel preservation and offered a good treatment option. Moreover, besides restoring esthetics, the restorative treatment also improved the function of the anterior guidance. It can be concluded that the conservative use of minimum thickness ceramic laminate veneers may provide satisfactory esthetic outcomes while preserving the dental structure.

  17. Broadband acoustic properties of a murine skull.

    Science.gov (United States)

    Estrada, Héctor; Rebling, Johannes; Turner, Jake; Razansky, Daniel

    2016-03-07

    It has been well recognized that the presence of a skull imposes harsh restrictions on the use of ultrasound and optoacoustic techniques in the study, treatment and modulation of the brain function. We propose a rigorous modeling and experimental methodology for estimating the insertion loss and the elastic constants of the skull over a wide range of frequencies and incidence angles. A point-source-like excitation of ultrawideband acoustic radiation was induced via the absorption of nanosecond duration laser pulses by a 20 μm diameter microsphere. The acoustic waves transmitted through the skull are recorded by a broadband, spherically focused ultrasound transducer. A coregistered pulse-echo ultrasound scan is subsequently performed to provide accurate skull geometry to be fed into an acoustic transmission model represented in an angular spectrum domain. The modeling predictions were validated by measurements taken from a glass cover-slip and ex vivo adult mouse skulls. The flexible semi-analytical formulation of the model allows for seamless extension to other transducer geometries and diverse experimental scenarios involving broadband acoustic transmission through locally flat solid structures. It is anticipated that accurate quantification and modeling of the skull transmission effects would ultimately allow for skull aberration correction in a broad variety of applications employing transcranial detection or transmission of high frequency ultrasound.

  18. Broadband surface-wave transformation cloak

    Science.gov (United States)

    Xu, Su; Xu, Hongyi; Gao, Hanhong; Jiang, Yuyu; Yu, Faxin; Joannopoulos, John D.; Soljačić, Marin; Chen, Hongsheng; Sun, Handong; Zhang, Baile

    2015-01-01

    Guiding surface electromagnetic waves around disorder without disturbing the wave amplitude or phase is in great demand for modern photonic and plasmonic devices, but is fundamentally difficult to realize because light momentum must be conserved in a scattering event. A partial realization has been achieved by exploiting topological electromagnetic surface states, but this approach is limited to narrow-band light transmission and subject to phase disturbances in the presence of disorder. Recent advances in transformation optics apply principles of general relativity to curve the space for light, allowing one to match the momentum and phase of light around any disorder as if that disorder were not there. This feature has been exploited in the development of invisibility cloaks. An ideal invisibility cloak, however, would require the phase velocity of light being guided around the cloaked object to exceed the vacuum speed of light—a feat potentially achievable only over an extremely narrow band. In this work, we theoretically and experimentally show that the bottlenecks encountered in previous studies can be overcome. We introduce a class of cloaks capable of remarkable broadband surface electromagnetic waves guidance around ultrasharp corners and bumps with no perceptible changes in amplitude and phase. These cloaks consist of specifically designed nonmagnetic metamaterials and achieve nearly ideal transmission efficiency over a broadband frequency range from 0+ to 6 GHz. This work provides strong support for the application of transformation optics to plasmonic circuits and could pave the way toward high-performance, large-scale integrated photonic circuits. PMID:26056299

  19. A Design of Double Broadband MIMO Antenna

    Directory of Open Access Journals (Sweden)

    Yanfeng Geng

    2015-01-01

    Full Text Available The MIMO antenna applied to LTE mobile systems should be miniaturized and able to work in the current communication frequency bands; the isolation between antenna units should also be good, so as to reduce the loss of radio-wave energy and improve the antenna performance of the MIMO system. This paper puts forward the design of a dual broadband MIMO antenna. The design of the antenna unit, its debugging, and related technical measures, such as bending the antenna bracket, are presented; a high-isolation, ultra-broadband MIMO antenna is realized on a board with a volume of 100 × 52 × 0.8 mm3; the antenna working bands are 698 MHz~960 MHz and 1710 MHz~2700 MHz; over the whole spectrum, a port isolation of 10 dB is basically achieved, and in the low frequency band the isolation between antenna ports can reach 12 dB.

  20. Evaluation of arctic broadband surface radiation measurements

    Directory of Open Access Journals (Sweden)

    N. Matsui

    2011-08-01

    Full Text Available The Arctic is a challenging environment for making in-situ radiation measurements. A standard suite of radiation sensors is typically designed to measure the total, direct and diffuse components of incoming and outgoing broadband shortwave (SW) and broadband thermal infrared, or longwave (LW), radiation. Enhancements can include various sensors for measuring irradiance in various narrower bandwidths. Many solar radiation/thermal infrared flux sensors utilize protective glass domes and some are mounted on complex mechanical platforms (solar trackers) that rotate sensors and shading devices that track the sun. High quality measurements require striking a balance between locating sensors in a pristine undisturbed location free of artificial blockage (such as buildings and towers) and providing accessibility to allow operators to clean and maintain the instruments. Three significant sources of erroneous data include solar tracker malfunctions, rime/frost/snow deposition on the instruments and operational problems due to limited operator access in extreme weather conditions. In this study, a comparison is made between the global and component sum (direct [vertical component] + diffuse) shortwave measurements. The difference between these two quantities (that theoretically should be zero) is used to illustrate the magnitude and seasonality of radiation flux measurement problems. The problem of rime/frost/snow deposition is investigated in more detail for one case study utilizing both shortwave and longwave measurements. Solutions to these operational problems are proposed that utilize measurement redundancy, more sophisticated heating and ventilation strategies and a more systematic program of operational support and subsequent data quality protocols.

  1. Broadband matched-field inversion for shallow water environment parameters

    Institute of Scientific and Technical Information of China (English)

    YANG Kunde; MA Yuanliang

    2003-01-01

    In this paper, a broadband multi-frequency matched-field inversion method is used to determine the environmental parameters in shallow water. Several broadband objective functions are presented for different conditions. Using the ASIAEX2001 experiment data and genetic algorithms, the environmental parameters, especially those of the sediment, are obtained.

  2. Broadband Liner Optimization for the Source Diagnostic Test Fan

    Science.gov (United States)

    Nark, Douglas M.; Jones, Michael G.

    2012-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more appealing. This paper describes a broadband acoustic liner optimization study for the scale model Source Diagnostic Test fan. Specifically, in-duct attenuation predictions with a statistical fan source model are used to obtain optimum impedance spectra over a number of flow conditions for three liner locations in the bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Typical tonal liner designs targeting single frequencies at one operating condition are first produced to provide baseline performance information. These are followed by multiple broadband design approaches culminating in a broadband liner targeting the full range of frequencies and operating conditions. The broadband liner is found to satisfy the optimum impedance objectives much better than the tonal liner designs. In addition, the broadband liner is found to provide better attenuation than the tonal designs over the full range of frequencies and operating conditions considered. Thus, the current study successfully establishes a process for the initial design and evaluation of novel broadband liner concepts for complex engine configurations.

  3. The role of public initiatives facilitating investments in broadband infrastructures

    DEFF Research Database (Denmark)

    Falch, Morten; Tadayoni, Reza; Henten, Anders

    2015-01-01

    This paper discusses the role of a developmental approach to broadband policy. The policy approaches taken in Denmark and Sweden are compared, and the scope for public intervention in the broadband market is discussed. The paper includes a case study on public intervention in the rural areas

  4. Silicon graphene waveguide tunable broadband microwave photonics phase shifter

    CERN Document Server

    Capmany, Jose; Muñoz, Pascual

    2013-01-01

    We propose the use of silicon graphene waveguides to implement a tunable broadband microwave photonics phase shifter based on integrated ring cavities. Numerical computation results show the feasibility of broadband operation over a 40 GHz bandwidth and a full 360-degree radiofrequency phase shift with a modest voltage excursion of 0.12 volt.

  5. Techno-economic evaluation of broadband access technologies

    DEFF Research Database (Denmark)

    Sigurdsson, Halldór Matthias; Skouby, Knud Erik

    2005-01-01

    Broadband for all is an essential element in the EU policy concerning the future of ICT-based society. The overall purpose of this paper is to present a model for evaluation of different broadband access technologies and to present some preliminary results based on the model that has been carried...

  6. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    Science.gov (United States)

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  7. Municipal Broadband in Wilson, North Carolina: A Study

    Science.gov (United States)

    O'Boyle, Timothy

    2012-01-01

    Relatively little empirical attention has been paid to the political economy of publicly-retailed fiber-optic broadband internet service. To address this gap in the literature, this dissertation examines the history, dynamics and trends in the municipal broadband movement. In specific, Wilson, North Carolina's Greenlight service is examined in…

  8. OFDM Towards Fixed and Mobile Broadband Wireless Access

    DEFF Research Database (Denmark)

    Shanker Jha, Uma; Prasad, Ramjee

    of mobile broadband wireless access and the standards developed by the IEEE 802.16 standards organization. The book gives practitioners a solid understanding of: Basic requirements of fixed and mobile broadband access technologies. Fundamentals of orthogonal frequency division multiplexing (OFDM

  9. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    Science.gov (United States)

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.

  10. Variance-based fingerprint distance adjustment algorithm for indoor localization

    Institute of Scientific and Technical Information of China (English)

    Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang

    2015-01-01

    The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that the variance decreases as the RSSI mean increases, VFDA calculates the RSSI variance from the mean value of the received RSSIs and derives a correction weight. VFDA then adjusts the fingerprint distances with this correction weight. Besides, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value are applied in two kinds of typical real indoor environments deployed with several Wi-Fi access points. One is a quadrate lab room, and the other is a long and narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
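
    The published weighting function and threshold rule are not given in the abstract, so the sketch below is only one plausible variance-based weighting; the weight form 1/(1 + alpha*var), the parameter alpha and the variance cap are assumptions standing in for the paper's correction weight.

```python
import numpy as np

def vfda_distance(observed_rssi, fingerprint_mean, alpha=1.0, var_cap=None):
    """Variance-weighted fingerprint distance (illustrative sketch only).

    observed_rssi    : repeated online RSSI readings, shape (n_samples, n_aps)
    fingerprint_mean : stored mean RSSI of one reference point, shape (n_aps,)
    """
    observed_rssi = np.asarray(observed_rssi, dtype=float)
    mean_rssi = observed_rssi.mean(axis=0)
    var_rssi = observed_rssi.var(axis=0, ddof=1)
    if var_cap is not None:                       # threshold variant
        var_rssi = np.minimum(var_rssi, var_cap)
    weights = 1.0 / (1.0 + alpha * var_rssi)      # high variance -> low weight
    diff = mean_rssi - np.asarray(fingerprint_mean, dtype=float)
    return np.sqrt(np.sum(weights * diff ** 2))
```

    The adjusted distances would then feed the usual kNN/WkNN position estimate mentioned above, with reference points ranked by this weighted distance.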

  11. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    Science.gov (United States)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
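
    The distinction the paper draws can be reproduced numerically from paired speed/direction samples. The sketch below computes the scalar and vector mean wind speeds and the conventional longitudinal/lateral variance decomposition in the mean-wind frame; it uses the standard formulas, not the improved relations derived in the paper, and the synthetic weak-wind sample is purely illustrative.

    ```python
    import numpy as np

    def wind_statistics(speed, direction_deg):
        """Scalar/vector mean speeds and longitudinal/lateral velocity variances
        from simultaneous speed and direction samples. The choice of sign
        convention for the components does not affect these statistics."""
        theta = np.deg2rad(direction_deg)
        u, v = speed * np.sin(theta), speed * np.cos(theta)   # horizontal components
        u_bar, v_bar = u.mean(), v.mean()
        vector_speed = np.hypot(u_bar, v_bar)                 # magnitude of mean vector
        scalar_speed = speed.mean()                           # mean of magnitudes
        # rotate into the mean-wind frame: along-wind (longitudinal), cross-wind (lateral)
        phi = np.arctan2(u_bar, v_bar)
        along = u * np.sin(phi) + v * np.cos(phi)
        cross = u * np.cos(phi) - v * np.sin(phi)
        return scalar_speed, vector_speed, along.var(), cross.var()

    rng = np.random.default_rng(0)
    s = np.abs(rng.normal(1.0, 0.5, 1000))        # weak-wind speeds (m/s)
    d = rng.normal(180.0, 40.0, 1000)             # meandering directions (deg)
    print(wind_statistics(s, d))                  # scalar mean > vector mean here
    ```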

  12. Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data

    Science.gov (United States)

    Das, S. B.; Mitra, S.

    2015-12-01

    We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyze the factors governing the accuracy of our results. The need for devising such a technique arises in regions with a sparse seismic network. In state-of-the-art location algorithms, a minimum of three station recordings is required for obtaining well resolved locations. However, a problem arises when an event is recorded by fewer than three stations. This may be because of the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support to set up a large network; (c) regions with insufficient economy for financing a multi-station network; and (d) poor signal-to-noise ratio for smaller events at most stations, except the one in its closest vicinity. Our technique provides a workable solution to the above problematic scenarios. However, our methodology is strongly dependent on the velocity model of the region. Our method uses a three-step processing scheme: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once this is obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We test our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. The locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from the source mechanism study of the mainshock and larger aftershocks.
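
    A schematic of steps (a) and (b) only, with step (c) and the 1-D ray tracing omitted: the back-azimuth is taken from the principal axis of the horizontal P-wave covariance (the 180-degree ambiguity resolved by the usual vertical/radial phase argument), and the hypocentral distance from the S-P time assuming constant velocities. The velocities, the synthetic pulse and the sign-resolution rule are illustrative assumptions, not the authors' processing chain.

    ```python
    import numpy as np

    def back_azimuth(north, east, vertical):
        """Back-azimuth from P-wave particle motion: principal axis of the
        horizontal covariance; for an up-going P wave the horizontal motion
        in phase with the upward vertical points away from the source."""
        cov = np.cov(np.vstack([north, east]))
        w, vec = np.linalg.eigh(cov)
        n, e = vec[:, np.argmax(w)]                  # dominant horizontal direction
        r = north * n + east * e                     # motion along that axis
        if np.dot(r, vertical) > 0:                  # axis points away from source
            n, e = -n, -e                            # back-azimuth is the opposite way
        return np.degrees(np.arctan2(e, n)) % 360.0

    def sp_distance(t_s_minus_p, vp=6.0, vs=3.5):
        """Hypocentral distance (km) from the S-P time for constant velocities."""
        return t_s_minus_p * vp * vs / (vp - vs)

    # synthetic P pulse arriving from back-azimuth 210 deg with upward first motion
    t = np.linspace(0, 1, 200)
    pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
    baz_true = 210.0
    north = -np.cos(np.radians(baz_true)) * pulse    # motion away from the source
    east = -np.sin(np.radians(baz_true)) * pulse
    print(back_azimuth(north, east, pulse))          # ~210
    print(sp_distance(4.2))                          # ~35 km for these velocities
    ```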

  13. Robust Sequential Covariance Intersection Fusion Kalman Filtering over Multi-agent Sensor Networks with Measurement Delays and Uncertain Noise Variances

    Institute of Scientific and Technical Information of China (English)

    QI Wen-Juan; ZHANG Peng; DENG Zi-Li

    2014-01-01

    This paper deals with the problem of designing a robust sequential covariance intersection (SCI) fusion Kalman filter for a clustering multi-agent sensor network system with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of the noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present the two-layer SCI fusion robust steady-state Kalman filter, which can reduce communication and computation burdens and save energy, and guarantees that the actual filtering error variances have a less conservative upper bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers, and the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and the robust accuracy relations.
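
    The building block underneath the SCI filter is the covariance intersection rule for fusing two estimates whose cross-correlation is unknown. Below is a minimal two-sensor sketch with the weight chosen by a simple grid search over the trace criterion; the paper's sequential, two-layer construction and the Kalman filtering itself are not reproduced, and the numerical values are illustrative.

    ```python
    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, n_grid=101):
        """Fuse two estimates with unknown cross-correlation:
        P^-1 = w*P1^-1 + (1-w)*P2^-1, picking w that minimizes trace(P)."""
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        best = None
        for w in np.linspace(0.0, 1.0, n_grid):
            P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
            if best is None or np.trace(P) < best[0]:
                x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
                best = (np.trace(P), x, P)
        return best[1], best[2]

    x1, P1 = np.array([1.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
    x2, P2 = np.array([1.2, -0.1]), np.array([[1.0, 0.0], [0.0, 3.0]])
    x_f, P_f = covariance_intersection(x1, P1, x2, P2)
    print(x_f, np.trace(P_f))   # fused covariance never optimistic for any correlation
    ```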

  14. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set and when the model is applied to test data we may find that the hidden variables...... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...

  15. CMB-S4 and the Hemispherical Variance Anomaly

    CERN Document Server

    O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D

    2016-01-01

    Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, $\\Lambda$CDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...
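
    In real space the statistic itself is straightforward: the pixel variance computed separately over the two ecliptic hemispheres. A toy numpy sketch is shown below; the uniform-in-latitude pixels (rather than an equal-area HEALPix map), the fluctuation level and the injected asymmetry are all illustrative assumptions.

    ```python
    import numpy as np

    def hemispherical_variances(values, ecliptic_lat_deg):
        """Pixel variance of a sky map in the northern and southern ecliptic
        hemispheres; the reported anomaly is a low north-to-south ratio."""
        lat = np.asarray(ecliptic_lat_deg)
        return values[lat > 0].var(), values[lat < 0].var()

    rng = np.random.default_rng(8)
    lat = rng.uniform(-90, 90, 100000)       # toy (non-equal-area) pixelization
    t = rng.normal(0, 100e-6, 100000)        # ~100 uK fluctuations
    t[lat > 0] *= 0.9                        # inject a mild hemispherical asymmetry
    vn, vs = hemispherical_variances(t, lat)
    print(vn / vs)                           # < 1 indicates the asymmetry
    ```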

  16. Saturation of number variance in embedded random-matrix ensembles

    Science.gov (United States)

    Prakash, Ravi; Pandey, Akhilesh

    2016-05-01

    We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
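
    The number variance Σ²(L) can be estimated directly from an unfolded spectrum as the variance of the level count in randomly placed intervals of length L. A sketch for the uncorrelated (Poisson) case, where Σ²(L) grows roughly linearly with L, is given below; the embedded-ensemble spectra studied in the paper would simply replace the synthetic levels, and the window counts here are illustrative.

    ```python
    import numpy as np

    def number_variance(levels, L, n_windows=2000, rng=None):
        """Number variance Sigma^2(L) of an unfolded spectrum (mean spacing 1):
        variance of the number of levels in random intervals of length L."""
        if rng is None:
            rng = np.random.default_rng()
        lo, hi = levels.min(), levels.max() - L
        starts = rng.uniform(lo, hi, n_windows)
        counts = np.searchsorted(levels, starts + L) - np.searchsorted(levels, starts)
        return counts.var()

    # Poisson (uncorrelated) spectrum: Sigma^2(L) should grow roughly like L
    rng = np.random.default_rng(9)
    levels = np.cumsum(rng.exponential(1.0, 100000))   # unit mean spacing
    for L in (1, 5, 20):
        print(L, round(number_variance(levels, L, rng=rng), 2))
    ```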

  17. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in the means and variances. Moreover, we point out the extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
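
    The sensitivity question can be probed numerically with the closed-form global minimum-variance weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1): perturb the covariance estimate and compare the recomputed weights. The covariance matrix and perturbation size below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Global minimum-variance portfolio: w = Sigma^-1 1 / (1' Sigma^-1 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    rng = np.random.default_rng(1)
    true_cov = np.array([[0.04, 0.01, 0.00],
                         [0.01, 0.09, 0.02],
                         [0.00, 0.02, 0.16]])
    w0 = min_variance_weights(true_cov)

    # perturb the covariance estimate and measure the change in weights
    noise = 0.005 * rng.standard_normal((3, 3))
    est_cov = true_cov + (noise + noise.T) / 2    # keep the estimate symmetric
    w1 = min_variance_weights(est_cov)
    print(w0, w1, np.abs(w1 - w0).max())
    ```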

  18. The positioning algorithm based on feature variance of billet character

    Science.gov (United States)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we are able to determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and can then accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
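
    As a generic stand-in for the paper's recursive, multilevel variant, the single-level Otsu-style rule below picks the gray level that maximizes the between-class variance of a foreground/background split; the synthetic bimodal image is purely illustrative.

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Single-level Otsu threshold: pick the gray level that maximizes the
        between-class variance of the resulting foreground/background split."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        levels = np.arange(256)
        w0 = np.cumsum(p)                       # background class probability
        w1 = 1.0 - w0                           # foreground class probability
        mu0 = np.cumsum(p * levels)             # unnormalized background mean
        mu_t = mu0[-1]                          # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
        sigma_b[~np.isfinite(sigma_b)] = 0.0    # ignore empty-class splits
        return int(np.argmax(sigma_b))

    # synthetic bimodal "character vs background" intensities
    img = np.concatenate([np.random.default_rng(2).normal(60, 10, 5000),
                          np.random.default_rng(3).normal(180, 15, 5000)]).clip(0, 255)
    print(otsu_threshold(img))   # should fall between the two modes (~120)
    ```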

  20. Expectation Values and Variance Based on Lp-Norms

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2012-11-01

    Full Text Available This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme of means characterization. Having the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible enough to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and Statistical Physics. Several illuminating examples are examined.
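
    The defining property — the Lp mean is the location minimizing the sum of p-th power deviations, with the minimized deviation playing the role of an Lp variance — can be evaluated numerically. A sketch using a 1-D bounded minimization is given below; the exponential sample and the choice to report the mean p-th deviation are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def lp_mean_and_variance(x, p):
        """Lp expectation value: the location m minimizing mean |x_i - m|^p;
        the Lp 'variance' is taken here as the minimized mean p-th deviation."""
        objective = lambda m: np.mean(np.abs(x - m) ** p)
        res = minimize_scalar(objective, bounds=(x.min(), x.max()), method="bounded")
        return res.x, res.fun

    x = np.random.default_rng(4).exponential(1.0, 2000)
    for p in (1, 2, 4):
        m, v = lp_mean_and_variance(x, p)
        print(p, round(m, 3), round(v, 3))   # p=1 -> median, p=2 -> arithmetic mean
    ```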

  1. Asymmetric k-Center with Minimum Coverage

    DEFF Research Database (Denmark)

    Gørtz, Inge Li

    2008-01-01

    In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1–3) (2005) 1–17] in the symmetric setting....

  2. Mammoth Mountain, California broadband seismic experiment

    Science.gov (United States)

    Dawson, P. B.; Pitt, A. M.; Wilkinson, S. K.; Chouet, B. A.; Hill, D. P.; Mangan, M.; Prejean, S. G.; Read, C.; Shelly, D. R.

    2013-12-01

    Mammoth Mountain is a young cumulo-volcano located on the southwest rim of Long Valley caldera, California. Current volcanic processes beneath Mammoth Mountain are manifested in a wide range of seismic signals, including swarms of shallow volcano-tectonic earthquakes, upper and mid-crustal long-period earthquakes, swarms of brittle-failure earthquakes in the lower crust, and shallow (3-km depth) very-long-period earthquakes. Diffuse emissions of C02 began after a magmatic dike injection beneath the volcano in 1989, and continue to present time. These indications of volcanic unrest drive an extensive monitoring effort of the volcano by the USGS Volcano Hazards Program. As part of this effort, eleven broadband seismometers were deployed on Mammoth Mountain in November 2011. This temporary deployment is expected to run through the fall of 2013. These stations supplement the local short-period and broadband seismic stations of the Northern California Seismic Network (NCSN) and provide a combined network of eighteen broadband stations operating within 4 km of the summit of Mammoth Mountain. Data from the temporary stations are not available in real-time, requiring the merging of the data from the temporary and permanent networks, timing of phases, and relocation of seismic events to be accomplished outside of the standard NCSN processing scheme. The timing of phases is accomplished through an interactive Java-based phase-picking routine, and the relocation of seismicity is achieved using the probabilistic non-linear software package NonLinLoc, distributed under the GNU General Public License by Alomax Scientific. Several swarms of shallow volcano-tectonic earthquakes, spasmodic bursts of high-frequency earthquakes, a few long-period events located within or below the edifice of Mammoth Mountain and numerous mid-crustal long-period events have been recorded by the network. To date, about 900 of the ~2400 events occurring beneath Mammoth Mountain since November 2011 have

  3. Minimum Competency Testing and the Handicapped.

    Science.gov (United States)

    Wildemuth, Barbara M.

    This brief overview of minimum competency testing and disabled high school students discusses: the inclusion or exclusion of handicapped students in minimum competency testing programs; approaches to accommodating the individual needs of handicapped students; and legal issues. Surveys of states that have mandated minimum competency tests indicate…

  4. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its effect

  5. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  6. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  7. Recursive identification for multidimensional ARMA processes with increasing variances

    Institute of Scientific and Technical Information of China (English)

    CHEN Hanfu

    2005-01-01

    In time series analysis, almost all existing results are derived for the case where the driving noise {w_n} in the MA part has bounded variance (or conditional variance). In contrast, this paper discusses how to identify the coefficients of a multidimensional ARMA process with fixed orders when, in its MA part, the conditional moment E(‖w_n‖^β | F_{n-1}), β > 2, is allowed to grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimating the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimates are strongly consistent.

  8. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ^2. Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on the precise rate in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  9. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  10. On Variance and Covariance for Bounded Linear Operators

    Institute of Scientific and Technical Information of China (English)

    Chia Shiang LIN

    2001-01-01

    In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove uniformly the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.

  11. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    Science.gov (United States)

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
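
    A simplified stand-in for the DAVAR: the overlapping Allan variance computed from phase (time-error) data, re-evaluated on a sliding window so that an anomaly such as a frequency jump shows up as a change of the stability estimate with time. The noise level, jump size and window settings below are illustrative assumptions.

    ```python
    import numpy as np

    def allan_variance(phase, tau0, m):
        """Overlapping Allan variance at averaging time m*tau0 from phase data."""
        x = np.asarray(phase)
        d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]      # second differences
        return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

    def dynamic_allan_variance(phase, tau0, m, window, step):
        """Sliding-window ('dynamic') Allan variance: one AVAR value per epoch."""
        return np.array([allan_variance(phase[k:k + window], tau0, m)
                         for k in range(0, len(phase) - window + 1, step)])

    rng = np.random.default_rng(5)
    tau0 = 1.0
    phase = np.cumsum(rng.normal(0, 1e-9, 20000))        # white-FM-like clock
    phase[10000:] += 5e-8 * np.arange(10000) * tau0      # frequency-jump anomaly
    davar = dynamic_allan_variance(phase, tau0, m=10, window=4000, step=1000)
    print(np.sqrt(davar))    # the Allan deviation spikes in windows spanning the jump
    ```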

  12. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH and at 7 (BW07, 14 (BW14, 21 (BW21 and 28 days of age (BW28 of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model. Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.

  13. Broadband Optical Access Technologies to Converge towards a Broadband Society in Europe

    Science.gov (United States)

    Coudreuse, Jean-Pierre; Pautonnier, Sophie; Lavillonnière, Eric; Didierjean, Sylvain; Hilt, Benoît; Kida, Toshimichi; Oshima, Kazuyoshi

    This paper provides insights on the status of the broadband optical access market and technologies in Europe and on the expected trends for the next generation of optical access networks. The final target for most operators, cities or any other player is of course FTTH (Fibre To The Home) deployment, although we can expect intermediate steps with copper or wireless technologies. Of the two candidate architectures for FTTH, PON (Passive Optical Network) is by far the most attractive and cost effective solution. We also demonstrate that an Ethernet-based optical access network is well suited to all-IP networks without any impact on the level of quality of service. Finally, we provide feedback from a FTTH pilot network in Colmar (France) based on Gigabit Ethernet PON technology. The interest of this pilot lies in the level of functionality required for broadband optical access networks, but also in the development of new home network configurations.

  14. Modeling the broadband persistent emission of magnetars

    CERN Document Server

    Zane, Silvia; Nobili, Luciano; Rea, Nanda

    2010-01-01

    In this paper, we discuss our first attempts to model the broadband persistent emission of magnetars within a self-consistent, physical scenario. We present the predictions of a synthetic model that we calculated with a new Monte Carlo 3-D radiative code. The basic idea is that soft thermal photons (e.g. emitted by the star surface) can experience resonant cyclotron upscattering by a population of relativistic electrons in the twisted magnetosphere. Our code is specifically tailored to work in the ultra-magnetized regime; polarization and QED effects are consistently accounted for, as well as different configurations for the magnetosphere. We discuss the predicted spectral properties in the 0.1-1000 keV range and the polarization properties, and we present the model application to a sample of magnetar soft X-ray spectra.

  15. Resource Management in Broadband Communication Networks

    DEFF Research Database (Denmark)

    Hansen, Mads Stenhuus

    2003-01-01

    This thesis - Resource Management in Broadband Communication Networks - deals with different ways of optimizing the available resources of data- or telecommunication networks. Especially topics like optimal routing, load balancing and fast recovery of routes in case of link failures are covered...... in communication networks. For instance, the results show that a network controlled by simulated ants can balance the load quickly and efficiently, thereby postponing local hot-spots or in some cases even avoid hot-spots. Furthermore, the results clearly demonstrate, that systems using simulated ants obtain...... a virtually unprecedented robustness. A network with failing components typically return to a fully operational state within seconds or even faster. Hopefully these results of the investigation of simulated ants - along with other results from the literature - can contribute to make the big manufacturers...

  16. Tunable Broadband Printed Carbon Transparent Conductor

    Science.gov (United States)

    Xu, Yue; Wan, Jiayu

    Transparent conductors have been widely applied in solar cells, transparent smart skins, and sensing/imaging antennas, etc. Carbon-based transparent conductor has attracted great attention for its low cost and broad range transparency. Ion intercalation has been known to highly dope graphitic materials, thereby tuning materials' optoelectronic properties. For the first time, we successfully tune the optical transmittance of a reduced graphene oxide (RGO)/CNT network from mid-IR range to visible range by means of Li-ion intercalation/deintercalation. We also observed a simultaneous increase of the electrical conductivity with the Li-ion intercalation. This printed carbon hybrid thin film was prepared through all solution processes and was easily scalable. This study demonstrates the possibility of using ion intercalation for low cost, tunable broadband transparent conductors.

  17. Broadband interferometer observations of a triggered lightning

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The development of the positive leader of an artificially triggered lightning flash has been analyzed based on the data of electric field change, location of radiation sources and frequency spectrum obtained by using the broadband interferometer system. The results indicate that radiation from the positive leader could be detected at close distance in spite of its relatively weak radiation, while the radiation from negative breakdown processes was relatively stronger. The positive leader developed with few branches, and the initial progression velocity was of the order of 10s m/s. The distribution of the power spectrum after a 25 MHz high-pass filter indicated that the radiation frequency from the positive leader maximized at 25-30 MHz, while that from negative breakdown processes maximized at 60-70 MHz.

  18. Hot Carrier extraction with plasmonic broadband absorbers

    CERN Document Server

    Ng, Charlene; Dligatch, Svetlana; Roberts, Ann; Davis, Timothy J; Mulvaney, Paul; Gomez, Daniel E

    2016-01-01

    Hot charge carrier extraction from metallic nanostructures is a very promising approach for applications in photo-catalysis, photovoltaics and photodetection. One limitation is that many metallic nanostructures support a single plasmon resonance thus restricting the light-to-charge-carrier activity to a spectral band. Here we demonstrate that a monolayer of plasmonic nanoparticles can be assembled on a multi-stack layered configuration to achieve broad-band, near-unit light absorption, which is spatially localised on the nanoparticle layer. We show that this enhanced light absorbance leads to $\\sim$ 40-fold increases in the photon-to-electron conversion efficiency by the plasmonic nanostructures. We developed a model that successfully captures the essential physics of the plasmonic hot-electron charge generation and separation in these structures. This model also allowed us to establish that efficient hot carrier extraction is limited to spectral regions where the photons possessing energies higher than the S...

  19. Broadband Spectral Study of Magnetar Bursts

    Science.gov (United States)

    Kirmizibayrak, Demet; Gogus, Ersin; Sasmaz Mus, Sinem; Kaneko, Yuki

    2016-07-01

    Magnetar bursts occur sporadically on random occasions, and every burst-active episode carries unique information about the bursting magnetar. Therefore, in-depth spectral and temporal analyses of each of the magnetar bursts provide new insights into the bursting and radiation mechanisms. There have been a number of studies over the last decade, investigating the spectral and temporal properties of magnetar bursts. The spectra of typical magnetar bursts were generally described with the Comptonized model or the sum of two blackbody functions. However, it was recently shown that the actual spectral nature of these bursts can be conclusively determined if the spectral analysis is performed on a wide energy coverage. We present the results of in-depth systematic broadband (2 - 250 keV) spectral analysis of a large number of bursts originated from three magnetars: SGR 1806-20, SGR 1900+14, and SGR J1550-5418, observed with the Rossi X-ray Timing Explorer.

  20. Superconducting Quantum Arrays for Broadband RF Systems

    Science.gov (United States)

    Kornev, V.; Sharafiev, A.; Soloviev, I.; Kolotinskiy, N.; Mukhanov, O.

    2014-05-01

    Superconducting Quantum Arrays (SQAs), homogeneous arrays of Superconducting Quantum Cells, are developed for the implementation of broadband radio frequency (RF) systems capable of providing highly linear magnetic-signal-to-voltage transfer with high dynamic range, including active electrically small antennas (ESAs). Of the proposed quantum cells, the bi-SQUID and the Differential Quantum Cell (DQC), the latter delivered better performance for SQAs. A prototype of the transformer-less active ESA based on a 2D SQA with nonsuperconducting electric connection of the DQCs was fabricated using the HYPRES niobium process with critical current density 4.5 kA/cm2. The measured voltage response is characterized by a peak-to-peak swing of ~100 mV and a steepness of ~6500 μV/μT.

  1. Broadband phase-preserved optical elevator

    CERN Document Server

    Luo, Yuan; Zhang, Baile; Qiu, Cheng-Wei; Barbastathis, George

    2011-01-01

    A phase-preserved optical elevator is an optical device that lifts up an entire plane virtually, without distortion in light path or phase. Using transformation optics, we have predicted and observed the realization of such a broadband phase-preserved optical elevator, made of a natural homogeneous birefringent crystal without resorting to absorptive and narrowband metamaterials involving time-consuming nano-fabrication. In our demonstration, the optical elevator is designed to lift a sheet upwards, and the phase is verified to be preserved throughout. The camouflage capability is also demonstrated in the presence of adjacent objects of the same scale, at will. The elevating device functions in different surrounding media over the wavelength range of 400-700 nm. Our work opens up prospects for studies of light trapping, solar energy, illusion optics, communication, and imaging.

  2. Broadband optical cooling of molecular rotors

    CERN Document Server

    Lien, Chien-Yu; Odom, Brian C

    2014-01-01

    Contrary to intuition, resonant laser excitation of bound electrons can decrease the temperature of a system, with electronic relaxation times as fast as nanoseconds allowing for rapid cooling to far below ambient temperature. Although laser cooling of atoms is routine owing to their relatively simple internal structure, laser cooling of molecular translational speeds, vibrations, or rotations is challenging because a different laser frequency is required to electronically excite each populated vibrational and rotational state. Here, we show that molecules with decoupled vibrational and electronic modes can be rotationally cooled using a single spectrally filtered broadband laser to simultaneously address many rotational states. We optically cool AlH$^+$ ions held in a room-temperature radiofrequency Paul trap to collect 96% of the population in the ground quantum state, corresponding to a rotational temperature of 4 K. In our current implementation, parity-preserving electronic cycling cools to the two lowes...

  3. Broadband acoustic cloak for ultrasound waves.

    Science.gov (United States)

    Zhang, Shu; Xia, Chunguang; Fang, Nicholas

    2011-01-14

    Invisibility devices based on coordinate transformation have opened up a new field of considerable interest. We present here the first practical realization of a low-loss and broadband acoustic cloak for underwater ultrasound. This metamaterial cloak is constructed with a network of acoustic circuit elements, namely, serial inductors and shunt capacitors. Our experiment clearly shows that the acoustic cloak can effectively bend the ultrasound waves around the hidden object, with reduced scattering and shadow. Because of the nonresonant nature of the building elements, this low-loss (∼6  dB/m) cylindrical cloak exhibits invisibility over a broad frequency range from 52 to 64 kHz. Furthermore, our experimental study indicates that this design approach should be scalable to different acoustic frequencies and offers the possibility for a variety of devices based on coordinate transformation.

  4. Future large broadband switched satellite communications networks

    Science.gov (United States)

    Staelin, D. H.; Harvey, R. R.

    1979-01-01

    Critical technical, market, and policy issues relevant to future large broadband switched satellite networks are summarized. Our market projections for the period 1980 to 2000 are compared. Clusters of switched satellites, in lieu of large platforms, etc., are shown to have significant advantages. Analysis of an optimum terrestrial network architecture suggests the proper densities of ground stations and that link reliabilities 99.99% may entail less than a 10% cost premium for diversity protection at 20/30 GHz. These analyses suggest that system costs increase as the 0.6 power of traffic. Cost estimates for nominal 20/30 GHz satellite and ground facilities suggest optimum system configurations might employ satellites with 285 beams, multiple TDMA bands each carrying 256 Mbps, and 16 ft ground station antennas. A nominal development program is outlined.

  5. Frequency Doubling Broadband Light in Multiple Crystals

    Energy Technology Data Exchange (ETDEWEB)

    ALFORD,WILLIAM J.; SMITH,ARLEE V.

    2000-07-26

    The authors compare frequency doubling of broadband light in a single nonlinear crystal with doubling in five crystals with intercrystal temporal walk off compensation, and with doubling in five crystals adjusted for offset phase matching frequencies. Using a plane-wave, dispersive numerical model of frequency doubling they study the bandwidth of the second harmonic and the conversion efficiency as functions of crystal length and fundamental irradiance. For low irradiance the offset phase matching arrangement has lower efficiency than a single crystal of the same total length but gives a broader second harmonic bandwidth. The walk off compensated arrangement gives both higher conversion efficiency and broader bandwidth than a single crystal. At high irradiance, both multicrystal arrangements improve on the single crystal efficiency while maintaining broad bandwidth.

  6. Modeling Broadband motions from the Tohoku earthquake

    Science.gov (United States)

    Li, D.; Chu, R.; Graves, R. W.; Helmberger, D. V.; Clayton, R. W.

    2011-12-01

    The 2011 M9 Tohoku earthquake produced an extraordinary dataset of over 2000 broadband regional and teleseismic records. While considerable progress has been made in modeling the longer period (>3 s) waveforms, the shorter periods (1-3 s) prove more difficult. Since modeling high frequency waveforms in 3D is computationally expensive, we follow the approach proposed by Helmberger and Vidale (1988), which interfaces the Cagniard-de Hoop analytical source description with a 2D numerical code to account for earthquake radiation patterns. We extend this method to a staggered grid finite difference code, which is stable in the presence of water. The code adapts the Convolutional PML boundary condition, and uses the "following the wavefront" technique and multiple GPUs, which significantly reduces computing time. We test our method against existing 1D and 3D codes, and examine the effects of slab structure, ocean bathymetry and local basins in an attempt to better explain the observed shorter period response.

  7. Broadband plasmon induced transparency in terahertz metamaterials

    KAUST Repository

    Zhu, Zhihua

    2013-04-25

    Plasmon induced transparency (PIT) could be realized in metamaterials via interference between different resonance modes. Within the sharp transparency window, the high dispersion of the medium may lead to remarkable slow light phenomena and an enhanced nonlinear effect. However, the transparency mode is normally localized in a narrow frequency band, which thus restricts many of its applications. Here we present the simulation, implementation, and measurement of a broadband PIT metamaterial functioning in the terahertz regime. By integrating four U-shape resonators around a central bar resonator, a broad transparency window across a frequency range greater than 0.40 THz is obtained, with a central resonance frequency located at 1.01 THz. Such PIT metamaterials are promising candidates for designing slow light devices, highly sensitive sensors, and nonlinear elements operating over a broad frequency range. © 2013 IOP Publishing Ltd.

  8. Policy factors affecting broadband development in Poland

    DEFF Research Database (Denmark)

    Henten, Anders; Windekilde, Iwona Maria

    2014-01-01

    is to reduce the gap between Poland and other EU Member Countries in the area of the development and implementation of information and communication technologies. However, Poland’s accession to the European Union and the implementation of EU regulation mechanisms accelerate the integration of Poland...... – with the ‘lightest’ forms of intervention first and the ‘strongest’ at the end. Furthermore, empirical evidence on the developments in access technologies and the policy initiatives taken by the Polish government are presented. Finally, there is a conclusion regarding the importance of the different types of public...... Poland joined the EU in 2004 and still has one of Europe’s least developed information societies. Broadband penetration in Poland is still amongst the lowest in the EU and significantly below the EU average. Considering the present state of information technology, the key challenge for Poland

  9. Hollow glass waveguides for broadband infrared transmission.

    Science.gov (United States)

    Abel, T; Hirsch, J; Harrington, J A

    1994-07-15

    Broadband hollow glass waveguides have been fabricated with losses as low as 0.15 dB/m at 10.6 microm. We make these hollow glass waveguides by coating the inside of polyimide-coated silica-glass tubing with a metallic layer followed by a thin dielectric coating of a metal halide. The bore sizes of the guides range from 320 to 700 microm, and we have made lengths as long as 3 m. The bending radii of the waveguides are less than 5 cm for bore sizes less than 500 microm. We have used these waveguides to deliver greater than 80 W of CO(2) laser power and 5 W of Er:YAG laser power. The hollow glass guides are inexpensive, robust, and quite flexible and therefore a good infrared fiber for power and sensor applications.

  10. Broadband Dielectric Spectroscopy on Human Blood

    CERN Document Server

    Wolf, M; Lunkenheimer, P; Loidl, A

    2011-01-01

    Dielectric spectra of human blood reveal a rich variety of dynamic processes. Achieving a better characterization and understanding of these processes not only is of academic interest but also of high relevance for medical applications as, e.g., the determination of absorption rates of electromagnetic radiation by the human body. The dielectric properties of human blood are studied using broadband dielectric spectroscopy, systematically investigating the dependence on temperature and hematocrit value. By covering a frequency range from 1 Hz to 40 GHz, information on all the typical dispersion regions of biological matter is obtained. We find no evidence for a low-frequency relaxation (alpha-relaxation) caused, e.g., by counterion diffusion effects as reported for some types of biological matter. The analysis of a strong Maxwell-Wagner relaxation arising from the polarization of the cell membranes in the 1-100 MHz region (beta-relaxation) allows for the test of model predictions and the determination of variou...

  11. QR Factorization for the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Jakub Kurzak

    2009-01-01

    Full Text Available The QR factorization is one of the most important operations in dense linear algebra, offering a numerically stable method for solving linear systems of equations including overdetermined and underdetermined systems. Modern implementations of the QR factorization, such as the one in the LAPACK library, suffer from performance limitations due to the use of matrix–vector type operations in the phase of panel factorization. These limitations can be remedied by using the idea of updating of QR factorization, rendering an algorithm, which is much more scalable and much more suitable for implementation on a multi-core processor. It is demonstrated how the potential of the cell broadband engine can be utilized to the fullest by employing the new algorithmic approach and successfully exploiting the capabilities of the chip in terms of single instruction multiple data parallelism, instruction level parallelism and thread-level parallelism.
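
    The updating idea can be shown in a few lines without any Cell-specific SIMD or threading: keep only the current R factor and fold in each new block of rows by re-factorizing the stacked matrix [R; B]. A serial numpy sketch of that scheme follows; the block sizes and test matrix are arbitrary choices, not the paper's tiled kernels.

    ```python
    import numpy as np

    def incremental_qr_r(blocks):
        """Accumulate the R factor of the stacked matrix by repeatedly
        QR-factorizing the current R stacked on top of the next block of rows;
        this 'updating' scheme is the serial analogue of tile QR on multi-core."""
        n = blocks[0].shape[1]
        r = np.zeros((n, n))
        for b in blocks:
            _, r = np.linalg.qr(np.vstack([r, b]))
        return r

    rng = np.random.default_rng(6)
    a = rng.standard_normal((1000, 8))
    blocks = np.array_split(a, 10)                     # feed the matrix in row blocks
    r_inc = incremental_qr_r(blocks)
    r_ref = np.linalg.qr(a)[1]
    print(np.allclose(np.abs(r_inc), np.abs(r_ref)))   # R is unique up to row signs
    ```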

  12. Techno-Economics of Residential Broadband Deployment

    DEFF Research Database (Denmark)

    Sigurdsson, Halldor Matthias

    2007-01-01

    on account of their existing telecom network (”the raw copper”), and typically they will prefer an xDSL-based strategy (various types of Digital Subscriber Line technology: ADSL, VDSL, etc.), where the speed of data connections is increased gradually to 10-50 Mbit/s or even more, in order to gain...... as much profit as possible from their previous investments. However, the operators are not restricted by these considerations, and they will often choose a fiber-to-the-home solution (FTTH, Fiber-to-the-Home), which will offer a far more substantial data capacity in the long run. The choice of the proper...... broadband deployment strategy depends on a complex set of parameters, and there is a demand for precise techno-economic cost models estimating financial feasibility. The existing cost models do not consider the dynamic developments in the market caused by competition. The PhD thesis has a profound

  13. Energy Efficient Evolution of Mobile Broadband Networks

    DEFF Research Database (Denmark)

    Micallef, Gilbert

    costs, which, based on increasing energy prices and necessary network upgrades, are likely to increase. Since base station sites make up about 75% of the power consumption in mobile networks, studies are focused on this specific network element. A number of factors believed to play a role...... the impact of replacing old equipment. Results show that an aggressive replacement strategy and the upgrade of sites to remote radio head can restrain the increase in power consumption of the network to just 17%. In addition to upgrading equipment, mobile network operators can further reduce power...... network capacity evolution path, replacing old and less efficient equipment, and enabling power saving features, can all considerably reduce the power consumption of future mobile broadband networks. Studies and recommendations presented within this thesis demonstrate that it is realistic for mobile...

  14. Broadband Nonlinear Signal Processing in Silicon Nanowires

    DEFF Research Database (Denmark)

    Yvind, Kresten; Pu, Minhao; Hvam, Jørn Märcher;

    The fast non-linearity of silicon allows Tbit/s optical signal processing. By choosing suitable dimensions of silicon nanowires their dispersion can be tailored to ensure a high nonlinearity at power levels low enough to avoid significant two-photon absorption. We have fabricated low insertion and propagation loss silicon nanowires and use them to demonstrate the broadband capabilities of silicon.

  15. Broadband Multifocal Conic-Shaped Metalens

    CERN Document Server

    Bao, Yanjun; Fang, Zheyu

    2016-01-01

    Compared with a lens with a single focal point, a multifocal lens has lower focusing quality and higher background noise. This arises from the construction of the multifocal lens, which is usually divided into several zones, each corresponding to one focal point. Light passing through different zones cannot constructively interfere at the foci, resulting in decreased optical performance. Here, we propose two multifocal metalenses with nanoslits arranged in an ellipse and a hyperbola, both of which are able to focus incident light at their multiple foci constructively, giving better focusing properties than those designed by conventional methods. We theoretically and experimentally demonstrate that, within a broadband wavelength range (600-900 nm), the ellipse-shaped metalens (ESM) can focus light with opposite circular polarizations (CP) at its two focal points, respectively, while a hyperbola-shaped metalens (HSM) can only focus one particular CP light at both of its foci simultaneously. This type of conic-shaped metale...

  16. The broadband spectrum of Centaurus X-3

    Science.gov (United States)

    Gottlieb, Amy; Pottschmidt, Katja; Marcu, Diana; Wolff, Michael Thomas; Kühnel, Matthias; Falkner, Sebastian; Britton Hemphill, Paul; Suchy, Slawomir; Becker, Peter A.; Wood, Kent S.; Wilms, Joern

    2016-04-01

    We present an analysis of a Suzaku observation of the accreting pulsar and high mass X-ray binary Centaurus X-3. The observation was performed in 2008 and covers one 2.1 day binary orbit. Strong flux and hardness variability is present in the energy range from 0.8 to 60 keV. We selected a part of the observation covering ~40% of the first half of the orbit during which the spectral shape was stable and less absorbed than during other parts of the observation. We confirm earlier results that the broadband spectrum can be modeled with a cutoff power law modified by a partial absorber, three iron lines -- from near-neutral, helium-like, and hydrogen-like iron --, and a cyclotron resonant scattering line at 30 keV. The pulse profile shows a shift above the cyclotron line energy which is qualitatively consistent with recent theoretical predictions. In addition we find that the presence of the so-called ``13 keV'' bump is model dependent and that there are indications for further line-like spectral components at 1 keV and 6 keV and a broader residual around 2 keV. We also apply the newly implemented radiation dominated radiative shock model for luminous accretion pulsars by Becker and Wolff (2007, ApJ 654, 435) to model the broadband spectrum. Replacing the cutoff power law with the physical continuum while retaining all other components we obtain a similar goodness of fit as before. From the physical continuum model we determine a mass accretion rate of ~2.17 x 10^17 g/s, an accretion column radius of 65 (+12, -4) m, and a temperature of the accreted plasma of 3.1 (+0.4, -0.1) keV.

  17. Broadband midinfrared frequency comb with tooth scanning

    Science.gov (United States)

    Lee, Kevin F.; Masłowski, P.; Mills, A.; Mohr, C.; Jiang, Jie; Schunemann, Peter G.; Fermann, M. E.

    2015-03-01

    Frequency combs are a massively parallel source of extremely accurate optical frequencies. Frequency combs generally operate at the visible or near-infrared wavelengths, but fundamental molecular vibrations occur at midinfrared wavelengths. We demonstrate an optically-referenced, broadband midinfrared frequency comb based on a doublyresonant optical parametric oscillator (OPO). By tuning the wavelength of the reference laser, the comb line frequencies are tuned as well. By scanning the reference wavelength, any frequency can be accessed, not just the frequencies of the base comb. Combined with our comb-resolving Fourier transform spectrometer, we can measure 200 wavenumber wide broadband absorption spectra with 200 kHz linewidth comb teeth. Our OPO is pumped by an amplified Tm fiber frequency comb, with phase-locked carrier envelope offset frequency, and repetition rate fixed by phase-locking a frequency comb line to a narrow linewidth diode laser at a telecom channel. The frequency comb is referenced to GPS by long-term stabilization of the repetition rate to a selected value using the temperature of the reference laser as the control. The resulting pump comb is about 3W of 100 fs pulses at 418 MHz repetition rate at 1950 nm. Part of the comb is used for supercontinuum generation for frequency stabilization, and the rest pumps an orientation-patterned gallium arsenide (OP-GaAs) crystal in a doubly-resonant optical parametric oscillator cavity, yielding collinear signal and idler beams from about 3 to 5.5 μm. We verify comb scanning by resolving the 200 MHz wide absorption lines of the entire fundamental CO vibrational manifold at 11 Torr pressure.

  18. Broadband fitting approach for the application of supercontinuum broadband laser absorption spectroscopy to combustion environments

    Science.gov (United States)

    Göran Blume, Niels; Ebert, Volker; Dreizler, Andreas; Wagner, Steven

    2016-01-01

    In this work, a novel broadband fitting approach for quantitative in-flame measurements using supercontinuum broadband laser absorption spectroscopy (SCLAS) is presented. The application and verification of this approach in an atmospheric, laminar, non-premixed CH4/air flame (Wolfhard-Parker burner, WHP) is discussed. The developed fitting scheme allows for an automatic recognition and fitting of a B-spline curve reference intensity for SCLAS broadband measurements while automatically removing the influence of absorption peaks. This approach improves the fitting residual locally (in between absorption lines) and globally by 23% and 13% respectively, while improving the in-flame SNR by a factor of 2. Additionally, the approach inherently improves the time-wavelength-correlation based on recorded in-flame measurements itself in combination with a theoretical spectrum of the analyte. These improvements have allowed for the recording of complete spatially resolved methane concentration profiles in the WHP burner. Comparison of the measured absolute mole fraction profile for methane with previously measured reference data shows excellent agreement in position, shape and absolute values. These improvements are a prerequisite for the application of SCLAS in high-pressure combustion systems.

  19. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    Science.gov (United States)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
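
    The unconstrained core of such a guidance law is the steady-state discrete-time LQ feedback gain obtained from the algebraic Riccati equation; the paper's Hilbert-space algorithm for enforcing explicit control-variance constraints is not reproduced here. A sketch with an illustrative double-integrator "miss distance" model and hypothetical weights:

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    def lq_gain(A, B, Q, R):
        """Steady-state discrete LQ feedback gain for u_k = -K x_k."""
        P = solve_discrete_are(A, B, Q, R)
        return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    # toy double-integrator model (illustrative only, not the Encke mission model)
    dt = 1.0
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt ** 2], [dt]])
    Q = np.diag([1.0, 0.0])      # penalize the miss-like position state only
    R = np.array([[10.0]])       # penalize control (thrust) activity
    print(lq_gain(A, B, Q, R))
    ```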

  20. CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset]

    NARCIS (Netherlands)

    Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.

    2012-01-01

    We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray var

  1. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...

  2. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  3. Infinite variance in fermion quantum Monte Carlo calculations

    Science.gov (United States)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
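    The generic symptom described here can be illustrated outside of QMC (a sketch only, not the paper's bridge-link construction): when samples come from a heavy-tailed distribution with infinite variance, the naive error bar s/sqrt(N) is unreliable even though it can look deceptively small. The distributions and sample sizes below are arbitrary.

```python
# Illustration of the infinite-variance symptom (not the paper's QMC algorithm):
# the usual error bar s/sqrt(N) does not shrink reliably for a Pareto tail index < 2.
import numpy as np

rng = np.random.default_rng(0)

def naive_error_bar(samples):
    return samples.std(ddof=1) / np.sqrt(samples.size)

for n in (10**3, 10**4, 10**5, 10**6):
    finite = rng.normal(loc=1.0, scale=1.0, size=n)        # finite variance
    heavy = rng.pareto(a=1.5, size=n) + 1.0                 # Pareto, tail index 1.5: infinite variance, mean = 3
    print(f"N={n:>7}  normal: {finite.mean():.3f} +/- {naive_error_bar(finite):.4f}   "
          f"pareto(a=1.5): {heavy.mean():.3f} +/- {naive_error_bar(heavy):.4f}")
```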

  4. Testing for causality in variance using multivariate GARCH models

    NARCIS (Netherlands)

    C.M. Hafner (Christian); H. Herwartz

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual

  5. Variance Ranklets : Orientation-selective rank features for contrast modulations

    NARCIS (Netherlands)

    Azzopardi, George; Smeraldi, Fabrizio

    2009-01-01

    We introduce a novel type of orientation–selective rank features that are sensitive to contrast modulations (second–order stimuli). Variance Ranklets are designed in close analogy with the standard Ranklets, but use the Siegel–Tukey statistics for dispersion instead of the Wilcoxon statistics. Their

  6. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...

  7. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative s

  8. Average local values and local variances in quantum mechanics

    CERN Document Server

    Muga, J G; Sala, P R

    1998-01-01

    Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.
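    One commonly used construction (an assumption here, not necessarily the definition favoured in the paper) takes the local value of a Hermitian observable to be a ratio of the real part of psi*(A psi) to the probability density, with a local variance built the same way from the squared deviation operator:

```latex
% One possible (non-unique) definition of local value and local variance;
% other operator orderings give different, equally admissible definitions.
A_{\mathrm{loc}}(x) = \frac{\operatorname{Re}\!\left[\psi^{*}(x)\,(\hat{A}\psi)(x)\right]}{|\psi(x)|^{2}},
\qquad
\sigma^{2}_{A,\mathrm{loc}}(x) = \frac{\operatorname{Re}\!\left[\psi^{*}(x)\,\big((\hat{A}-\langle\hat{A}\rangle)^{2}\psi\big)(x)\right]}{|\psi(x)|^{2}}
```

    With this choice, integrating against the probability density recovers the global quantities: the integral of |psi(x)|^2 A_loc(x) gives the expectation value of A, and the integral of |psi(x)|^2 sigma^2_loc(x) gives the global variance.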

  9. Common Persistence and Error-Correction Model in Conditional Variance

    Institute of Scientific and Technical Information of China (English)

    LI Han-dong; ZHANG Shi-ying

    2001-01-01

    We first define the persistence and common persistence of vector GARCH processes from the point of view of integration, and then discuss the necessary and sufficient condition for co-persistence in variance. Finally, we give the properties and the error-correction model of the vector GARCH process under the condition of co-persistence.

  10. Vertical velocity variances and Reynold stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component.

  11. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity si

  13. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure was int...

  14. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

    ...the drum. This variance adopts the definition of, and specifications for, fleet angle from... the definition of "static drop test" specified by section 3 ("Definitions") and the static drop test...

  15. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  16. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution for this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face-feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes and then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and thus enabling stable measurement in variance pose for each individual.

  17. Gender variance in Asia: discursive contestations and legal implications

    NARCIS (Netherlands)

    S.E. Wieringa

    2010-01-01

    A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the impli

  18. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    Science.gov (United States)

    Crews, D H; Franke, D E

    1998-07-01

    Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a significantly greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic

  19. Broadband tonpilz underwater acoustic transducers based on multimode optimization

    DEFF Research Database (Denmark)

    Yao, Qingshan; Jensen, Leif Bjørnø

    1997-01-01

    Head flapping has often been considered to be deleterious for obtaining a tonpilz transducer with broadband, high-power performance. In the present work, broadband, high-power tonpilz transducers have been designed using the finite element (FE) method. Optimized vibrational modes, including the flapping mode of the head, are effectively used to achieve the broadband performance. The behavior of the transducer in its longitudinal piston mode and in its flapping mode is analysed for in-air and in-water situations. For the 37.8% bandwidth of the center frequency from 28.5 to 41.8 kHz, the amplitude...

  20. Dose variation during solar minimum

    Energy Technology Data Exchange (ETDEWEB)

    Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H. (Phillips Lab., Geophysics Directorate, Hanscom Air Force Base, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics)

    1991-12-01

    In this paper, the authors use direct measurement of dose to show the variation in inner and outer radiation belt populations at low altitude from 1984 to 1987. This period includes the recent solar minimum that occurred in September 1986. The dose is measured behind four thicknesses of aluminum shielding and for two thresholds of energy deposition, designated HILET and LOLET. The authors calculate an average dose per day for each month of satellite operation. The authors find that the average proton (HILET) dose per day (obtained primarily in the inner belt) increased systematically from 1984 to 1987, and has a high anticorrelation with sunspot number when offset by 13 months. The average LOLET dose per day behind the thinnest shielding is produced almost entirely by outer zone electrons and varies greatly over the period of interest. If any trend can be discerned over the 4 year period it is a decreasing one. For shielding of 1.55 g/cm² (227 mil) Al or more, the LOLET dose is complicated by contributions from >100 MeV protons and bremsstrahlung.

  1. Exploring the limits of broadband excitation and inversion: II. Rf-power optimized pulses

    Science.gov (United States)

    Kobzar, Kyryl; Skinner, Thomas E.; Khaneja, Navin; Glaser, Steffen J.; Luy, Burkhard

    2008-09-01

    In [K. Kobzar, T.E. Skinner, N. Khaneja, S.J. Glaser, B. Luy, Exploring the limits of broadband excitation and inversion, J. Magn. Reson. 170 (2004) 236-243], optimal control theory was employed in a systematic study to establish physical limits for the minimum rf-amplitudes required in broadband excitation and inversion pulses. In a number of cases, however, experimental schemes are not limited by rf-amplitudes, but by the overall rf-power applied to a sample. We therefore conducted a second systematic study of excitation and inversion pulses of varying pulse durations with respect to bandwidth and rf-tolerances, but this time using a modified algorithm involving restricted rf-power. The resulting pulses display a variety of pulse shapes with highly modulated rf-amplitudes and generally show better performance than corresponding pulses with identical pulse length and rf-power, but limited rf-amplitude. A detailed description of pulse shapes and their performance is given for the so-called power-BEBOP and power-BIBOP pulses.

  2. Graphene based tunable fractal Hilbert curve array broadband radar absorbing screen for radar cross section reduction

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xianjun, E-mail: xianjun.huang@manchester.ac.uk [School of Electrical and Electronic Engineering, University of Manchester, Manchester M13 9PL (United Kingdom); College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073 (China); Hu, Zhirun [School of Electrical and Electronic Engineering, University of Manchester, Manchester M13 9PL (United Kingdom); Liu, Peiguo [College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073 (China)

    2014-11-15

    This paper proposes a new type of graphene-based tunable radar absorbing screen. The absorbing screen consists of a Hilbert curve metal strip array and a chemical vapour deposition (CVD) graphene sheet. The graphene-based screen is not only tunable when the chemical potential of the graphene changes, but also has broadband effective absorption. The absorption bandwidth is from 8.9 GHz to 18.1 GHz, i.e., a relative bandwidth of more than 68%, at a chemical potential of 0 eV, which is significantly wider than if the graphene sheet had not been employed. As the chemical potential varies from 0 to 0.4 eV, the central frequency of the screen can be tuned from 13.5 GHz to 19.0 GHz. In the proposed structure, the Hilbert curve metal strip array was designed to provide multiple narrow-band resonances, whereas the graphene sheet directly underneath the metal strip array provides tunability and the required average surface resistance so as to significantly extend the screen operation bandwidth by providing broadband impedance matching and absorption. In addition, the thickness of the screen has been optimized to approach the minimum thickness limit for a nonmagnetic absorber. The working principle of this absorbing screen is studied in detail, and performance under various incident angles is presented. This work extends applications of graphene into tunable microwave radar cross section (RCS) reduction applications.

  3. A Broadband Polyvinylidene Difluoride-Based Hydrophone with Integrated Readout Circuit for Intravascular Photoacoustic Imaging.

    Science.gov (United States)

    Daeichin, Verya; Chen, Chao; Ding, Qing; Wu, Min; Beurskens, Robert; Springeling, Geert; Noothout, Emile; Verweij, Martin D; van Dongen, Koen W A; Bosch, Johan G; van der Steen, Antonius F W; de Jong, Nico; Pertijs, Michiel; van Soest, Gijs

    2016-05-01

    Intravascular photoacoustic (IVPA) imaging can visualize the coronary atherosclerotic plaque composition on the basis of the optical absorption contrast. Most of the photoacoustic (PA) energy of human coronary plaque lipids was found to lie in the frequency band between 2 and 15 MHz, requiring a very broadband transducer, especially if a combination with intravascular ultrasound is desired. We have developed a broadband polyvinylidene difluoride (PVDF) transducer (0.6 × 0.6 mm, 52 μm thick) with integrated electronics to match the low capacitance of such a small polyvinylidene difluoride element (<5 pF/mm²) with the high capacitive load of the long cable (~100 pF/m). The new readout circuit provides an output voltage with a sensitivity of about 3.8 μV/Pa at 2.25 MHz. Its response is flat within 10 dB in the range 2 to 15 MHz. The root mean square (rms) output noise level is 259 μV over the entire bandwidth (1-20 MHz), resulting in a minimum detectable pressure of 30 Pa at 2.25 MHz.

  4. Spectroscopic benzene detection using a broadband monolithic DFB-QCL array

    Science.gov (United States)

    Lewicki, Rafał; Witinski, Mark; Li, Biao; Wysocki, Gerard

    2016-03-01

    Quantitative laser spectroscopic measurements of complex molecules that have broad absorption spectra require broadly tunable laser sources operating preferably in the mid-infrared molecular fingerprint region. In this paper a novel broadband mid-infrared laser source comprising an array of single-mode distributed feedback quantum cascade lasers was used to target a broadband absorption feature of benzene (C6H6), a toxic and carcinogenic atmospheric pollutant. The DFB-QCL array is a monolithic semiconductor device with no opto-mechanical components, which eliminates issues with mechanical vibrations. The DFB-QCL array used in this work provides spectral coverage from 1022.5 cm-1 to 1053.3 cm-1, which is sufficient to access the absorption feature of benzene at 1038 cm-1 (9.64 μm). A sensor prototype based on a 76 m multipass cell (AMAC-76LW, Aerodyne Research) and a dispersive DFB-QCL array beam combiner was developed and tested. The Allan deviation analysis of the retrieved benzene concentration data yields a short-term precision of 100 ppbv/Hz1/2 and a minimum detectable concentration of 12 ppbv for 200 s averaging time. The system was also tested by sampling atmospheric air as well as vapors of different chemical products that contained traces of benzene.

  5. Fast Joint DOA and Pitch Estimation Using a Broadband MVDR Beamformer

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2013-01-01

    The harmonic model, i.e., a sum of sinusoids having frequencies that are integer multiples of the pitch, has been widely used for modeling of voiced speech. In microphone arrays, the direction-of-arrival (DOA) adds an additional parameter that can help in obtaining a robust procedure for tracking non-stationary speech signals in noisy conditions. In this paper, a joint DOA and pitch estimation (JDPE) method is proposed. The method is based on the minimum variance distortionless response (MVDR) beamformer in the frequency-domain and is much faster than previous joint methods, as it only... methods combining existing DOA and pitch estimators.
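    The MVDR weights referred to here follow the standard minimum-variance, distortionless-response formula w = R^-1 d / (d^H R^-1 d) per frequency bin; a minimal numerical sketch (not the paper's full JDPE algorithm; the array geometry, frequency and DOA grid below are assumed) is:

```python
# Minimal frequency-domain MVDR sketch (not the paper's full JDPE algorithm).
import numpy as np

def steering_vector(freq_hz, mic_positions_m, doa_rad, c=343.0):
    """Far-field steering vector for a linear array along the x-axis."""
    delays = mic_positions_m * np.cos(doa_rad) / c
    return np.exp(-2j * np.pi * freq_hz * delays)

def mvdr_weights(R, d, diag_load=1e-3):
    """w = R^{-1} d / (d^H R^{-1} d), with diagonal loading for robustness."""
    n = R.shape[0]
    Rl = R + diag_load * np.real(np.trace(R)) / n * np.eye(n)
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# toy usage: 4-microphone uniform linear array, one frequency bin, noise-only snapshots
rng = np.random.default_rng(1)
mics = np.arange(4) * 0.04                          # 4 cm spacing (hypothetical)
f = 1000.0                                          # Hz
snaps = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
R = snaps @ snaps.conj().T / snaps.shape[1]         # spatial covariance estimate
for doa_deg in (0.0, 45.0, 90.0):
    d = steering_vector(f, mics, np.deg2rad(doa_deg))
    w = mvdr_weights(R, d)
    print(f"DOA {doa_deg:5.1f} deg: MVDR output power {np.real(w.conj() @ R @ w):.4f}")
```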

  6. Near-infrared broad-band cavity enhanced absorption spectroscopy using a superluminescent light emitting diode.

    Science.gov (United States)

    Denzer, W; Hamilton, M L; Hancock, G; Islam, M; Langley, C E; Peverall, R; Ritchie, G A D

    2009-11-01

    A fibre coupled near-infrared superluminescent light emitting diode that emits approximately 10 mW of radiation between 1.62 and 1.7 μm is employed in combination with a broad-band cavity enhanced spectrometer consisting of a linear optical cavity with mirrors of reflectivity approximately 99.98% and either a dispersive near-infrared spectrometer or a Fourier transform interferometer. Results are presented on the absorption of 1,3-butadiene, and sensitivities are achieved of 6.1 × 10⁻⁸ cm⁻¹ using the dispersive spectrometer in combination with phase-sensitive detection, and 1.5 × 10⁻⁸ cm⁻¹ using the Fourier transform interferometer (expressed as a minimum detectable absorption coefficient) over several minutes of acquisition time.

  7. Broadband back grating design for thin film solar cells

    KAUST Repository

    Janjua, Bilal

    2013-01-01

    In this paper, a design based on a tapered circular grating structure was studied to provide broadband enhancement in thin-film amorphous silicon solar cells. In comparison to a planar structure, an absorption enhancement of ~7% was realized.

  8. Multicarrier Block-Spread CDMA for Broadband Cellular Downlink

    NARCIS (Netherlands)

    Petré, F.; Leus, G.; Moonen, M.; De Man, H.

    2004-01-01

    Effective suppression of multiuser interference (MUI) and mitigation of frequency-selective fading effects within the complexity constraints of the mobile constitute major challenges for broadband cellular downlink transceiver design. Existing wideband direct-sequence (DS) code division multiple acc

  9. Exploiting Narrowband Efficiency for Broadband Convolutive Blind Source Separation

    Directory of Open Access Journals (Sweden)

    Aichner Robert

    2007-01-01

    Based on a recently presented generic broadband blind source separation (BSS) algorithm for convolutive mixtures, we propose in this paper a novel algorithm combining advantages of broadband algorithms with the computational efficiency of narrowband techniques. By selective application of the Szegö theorem which relates properties of Toeplitz and circulant matrices, a new normalization is derived as a special case of the generic broadband algorithm. This results in a computationally efficient and fast converging algorithm without introducing typical narrowband problems such as the internal permutation problem or circularity effects. Moreover, a novel regularization method for the generic broadband algorithm is proposed and subsequently also derived for the proposed algorithm. Experimental results in realistic acoustic environments show improved performance of the novel algorithm compared to previous approximations.

  10. Twisted optical metamaterials for planarized ultrathin broadband circular polarizers.

    Science.gov (United States)

    Zhao, Y; Belkin, M A; Alù, A

    2012-05-29

    Optical metamaterials are usually based on planarized, complex-shaped, resonant nano-inclusions. Three-dimensional geometries may provide a wider set of functionalities, including broadband chirality to manipulate circular polarization at the nanoscale, but their fabrication becomes challenging as their dimensions get smaller. Here we introduce a new paradigm for the realization of optical metamaterials, showing that three-dimensional effects may be obtained without complicated inclusions, but instead by tailoring the relative orientation within the lattice. We apply this concept to realize planarized, broadband bianisotropic metamaterials as stacked nanorod arrays with a tailored rotational twist. Because of the coupling among closely spaced twisted plasmonic metasurfaces, metamaterials realized with conventional lithography may effectively operate as three-dimensional helical structures with broadband bianisotropic optical response. The proposed concept is also shown to relax alignment requirements common in three-dimensional metamaterial designs. The realized sample constitutes an ultrathin, broadband circular polarizer that may be directly integrated within nanophotonic systems.

  11. Broadband Reflective Coating Process for Large FUVOIR Mirrors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ZeCoat Corporation will develop and demonstrate a set of revolutionary coating processes for making broadband reflective coatings suitable for very large mirrors (4+...

  12. Broadband Wireless Data Acquisition and Control Device Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Mobitrum is proposing to develop a broadband wireless device for real-time data acquisition and monitoring applicable to the field instrumentation and control...

  13. Minimum wages, globalization and poverty in Honduras

    OpenAIRE

    Gindling, T. H.; Terrell, Katherine

    2008-01-01

    To be competitive in the global economy, some argue that Latin American countries need to reduce or eliminate labour market regulations such as minimum wage legislation because they constrain job creation and hence increase poverty. On the other hand, minimum wage increases can have a direct positive impact on family income and may therefore help to reduce poverty. We take advantage of a complex minimum wage system in a poor country that has been exposed to the forces of globalization to test...

  14. Economic evaluation of broadband distribution networks to the home

    Science.gov (United States)

    Merk, Charles A.

    1992-02-01

    Economic wideband, linear fiber optic transmitters and receivers pave the way for broadband to the home. The diamond network architecture (DNA) delivers 1 GHz bandwidth. This provides standard video, HDTV, and switched two-way broadband digital services to the home. An economic model is presented using the DNA that considers the impact of digital TV, HDTV, and the evolution of switched voice and data services on a CATV system.

  15. NREL Spectral Standards Development and Broadband Radiometric Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Myers, D. R.; Andreas, A.; Stoffel, T.; Reda, I.; Wilcox, S.; Gotseff, P.; Kay, B.; Gueymard, C.

    2003-05-01

    We describe a final version of revisions to current ASTM reference standard spectral distributions used to evaluate photovoltaic device performance. An NREL-developed graphical user interface for working with the SMARTS2 spectral model has been developed and is being tested. A proposed ASTM reference Ultraviolet (UV) spectra for materials durability is presented. Improvements in broadband outdoor radiometer calibration, characterization, and reporting software reduce uncertainties in broadband radiometer calibrations.

  16. Peer to peer networking in Ethernet broadband access networks

    OpenAIRE

    Damola, Ayodele

    2005-01-01

    The use of peer-to-peer (P2P) applications is growing dramatically, particularly for sharing content such as video, audio, and software. The traffic generated by these applications represents a large proportion of Internet traffic. For the broadband access network providers P2P traffic presents several problems. This thesis identifies the performance and business issues that P2P traffic has on broadband access networks employing the McCircuit separation technique. A mechanism for managing P2P...

  17. The return of the variance: intraspecific variability in community ecology.

    Science.gov (United States)

    Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie

    2012-04-01

    Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.

  18. Optimization of radio astronomical observations using Allan variance measurements

    CERN Document Server

    Schieder, R

    2001-01-01

    Stability tests based on the Allan variance method have become a standard procedure for the evaluation of the quality of radio-astronomical instrumentation. They are very simple and simulate the situation when detecting weak signals buried in large noise fluctuations. For the special conditions during observations an outline of the basic properties of the Allan variance is given, and some guidelines on how to interpret the results of the measurements are presented. Based on a rather simple mathematical treatment, clear rules for observations in "Position-Switch", "Beam-" or "Frequency-Switch", "On-The-Fly-" and "Raster-Mapping" mode are derived. Also, a simple "rule of thumb" for an estimate of the optimum timing for the observations is found. The analysis leads to a conclusive strategy for planning radio-astronomical observations. Particularly for air- and space-borne observatories it is very important to determine how the extremely precious observing time can be used with maximum efficiency. The...
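    For reference, the non-overlapping Allan variance of a measured time series at averaging time T = m*dt is half the mean squared difference of consecutive block averages; a minimal sketch with synthetic data (the drift model and sampling rate are assumed, not taken from the paper) is:

```python
# Minimal sketch of the non-overlapping Allan variance of a time series,
# as used in spectrometer/radiometer stability tests; data are synthetic.
import numpy as np

def allan_variance(x, m):
    """Allan variance for an averaging length of m samples (non-overlapping)."""
    n_blocks = x.size // m
    means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(2)
dt = 0.1                                   # s per sample (assumed)
white = rng.standard_normal(100_000)       # radiometric (white) noise
drift = 1e-4 * np.arange(white.size)       # slow instrumental drift
signal = white + drift

for m in (1, 10, 100, 1000, 10_000):
    print(f"T = {m * dt:8.1f} s   Allan variance = {allan_variance(signal, m):.3e}")
# White noise alone keeps decreasing roughly as 1/T; the drift makes the curve turn
# up again, and the minimum marks the optimal integration time between references.
```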

  19. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique becomes more efficient and accurate than the cross-validation technique, because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.

  20. Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  1. Variance reduction methods applied to deep-penetration problems

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
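    As a generic illustration of the idea (not taken from the course material), importance sampling is one of the standard variance reduction methods for such problems: sample from a biased distribution that actually reaches the deep region and reweight by the likelihood ratio. A toy example with an assumed exponential path-length model:

```python
# Generic illustration of variance reduction by importance sampling (not the
# course's production codes): estimate P(X > L) for an exponential path length,
# a crude stand-in for a deep-penetration tally.
import numpy as np

rng = np.random.default_rng(3)
L, n = 20.0, 100_000                        # "deep" threshold, sample count
exact = np.exp(-L)                          # analytic answer for unit mean free path

# analog sampling: essentially no samples reach x > L, so the tally is useless
x = rng.exponential(scale=1.0, size=n)
analog = (x > L).astype(float)

# importance sampling from a stretched exponential (mean = L), reweighted by f/g
b = L
y = rng.exponential(scale=b, size=n)
weights = np.exp(-y) / (np.exp(-y / b) / b)
biased = np.where(y > L, weights, 0.0)

for name, est in (("analog", analog), ("importance", biased)):
    mean = est.mean()
    err = est.std(ddof=1) / np.sqrt(n)
    print(f"{name:>10}: estimate {mean:.3e} +/- {err:.3e}   (exact {exact:.3e})")
```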

  2. Convergence of Recursive Identification for ARMAX Process with Increasing Variances

    Institute of Scientific and Technical Information of China (English)

    JIN Ya; LUO Guiming

    2007-01-01

    The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture with the ARMA component and external inputs. In this paper we focus on a parameter estimate of the ARMAX model. Classical modeling methods are usually based on the assumption that the driven noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.

  3. Sample variance and Lyman-alpha forest transmission statistics

    CERN Document Server

    Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick

    2012-01-01

    We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z=[2,2.5,3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement, in particular, we do not require the temperature-density relation to be 'inverted'.

  4. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  5. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are indeed necessary.

  6. Fidelity between Gaussian mixed states with quantum state quadrature variances

    Science.gov (United States)

    Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao

    2016-04-01

    In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient for quantifying quantum teleportation (quantum clone) experiments since the variances of the input (output) state are measurable. Furthermore, we also conclude that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).

  7. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  8. Climate variance influence on the non-stationary plankton dynamics.

    Science.gov (United States)

    Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine

    2013-08-01

    We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.

  9. Analysis of variance in spectroscopic imaging data from human tissues.

    Science.gov (United States)

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or from other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.

  10. Automated Extraction of Archaeological Traces by a Modified Variance Analysis

    Directory of Open Access Journals (Sweden)

    Tiziana D'Orazio

    2015-03-01

    This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide if a point belongs to an archaeological trace or not, its surrounding regions are considered. The one-way ANalysis Of VAriance (ANOVA) is applied several times to detect the differences among these regions; in particular the expected shape of the mark to be detected is used in each region. Furthermore, an effect size parameter is defined by comparing the statistics of these regions with the statistics of the entire population in order to measure how strongly the trace is appreciable. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.
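    A minimal sketch of the underlying one-way ANOVA step (not the authors' detector; the region statistics and the effect-size proxy below are assumed for illustration):

```python
# Generic one-way ANOVA sketch: compare pixel statistics of several regions
# around a candidate point; a small p-value suggests the regions differ.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# hypothetical pixel samples from three regions around a candidate point
on_trace  = rng.normal(120.0, 8.0, size=200)   # region lying on the suspected mark
off_side1 = rng.normal(100.0, 8.0, size=200)   # background region on one side
off_side2 = rng.normal(101.0, 8.0, size=200)   # background region on the other side

f_stat, p_value = f_oneway(on_trace, off_side1, off_side2)

# crude effect-size proxy (eta squared): between-group over total sum of squares
groups = [on_trace, off_side1, off_side2]
all_px = np.concatenate(groups)
ss_between = sum(g.size * (g.mean() - all_px.mean()) ** 2 for g in groups)
ss_total = ((all_px - all_px.mean()) ** 2).sum()
print(f"F = {f_stat:.1f}, p = {p_value:.2e}, eta^2 = {ss_between / ss_total:.2f}")
```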

  11. A Variance Based Active Learning Approach for Named Entity Recognition

    Science.gov (United States)

    Hassanzadeh, Hamed; Keyvanpour, Mohammadreza

    The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy for annotating entities. Conditional Random Field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.

  12. 7 CFR 35.11 - Minimum requirements.

    Science.gov (United States)

    2010-01-01

    ..., Denmark, East Germany, England, Finland, France, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein..., Switzerland, Wales, West Germany, Yugoslavia), or Greenland shall meet each applicable minimum requirement...

  13. Effect of Pressure on Minimum Fluidization Velocity

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhiping; Na Yongjie; Lu Qinggang

    2007-01-01

    The minimum fluidization velocities of quartz sand and glass beads under pressures of 0.5, 1.0, 1.5 and 2.0 MPa were investigated. The minimum fluidization velocity decreases with increasing pressure. The influence of pressure on the minimum fluidization velocity is stronger for larger particles than for smaller ones. Based on the test results and the Ergun equation, an empirical equation for the minimum fluidization velocity is proposed, and the calculated results are comparable to other researchers' results.
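    The paper's empirical equation is not reproduced in this record; the textbook route it builds on equates the Ergun pressure drop with the buoyant bed weight at incipient fluidization and solves the resulting quadratic in the particle Reynolds number. A sketch with assumed particle, gas and voidage values:

```python
# Textbook route behind such correlations (a sketch, not the paper's empirical
# equation): Ergun pressure drop = buoyant bed weight at incipient fluidization,
# solved as a quadratic for Re_mf. All property values below are assumed.
import numpy as np

def u_mf_ergun(d_p, rho_p, rho_g, mu, eps_mf=0.45, phi=0.8, g=9.81):
    """Minimum fluidization velocity from the Ergun equation (SI units)."""
    ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu**2          # Archimedes number
    a = 1.75 / (eps_mf**3 * phi)
    b = 150.0 * (1.0 - eps_mf) / (eps_mf**3 * phi**2)
    re_mf = (-b + np.sqrt(b**2 + 4.0 * a * ar)) / (2.0 * a)    # positive root
    return re_mf * mu / (rho_g * d_p)

# quartz-sand-like particle in pressurized air (gas density scales with pressure)
d_p, rho_p, mu = 0.5e-3, 2650.0, 1.8e-5
for p_mpa in (0.5, 1.0, 1.5, 2.0):
    rho_g = 1.2 * p_mpa / 0.101325          # ideal-gas air density near room temperature
    print(f"P = {p_mpa:.1f} MPa -> u_mf ~ {u_mf_ergun(d_p, rho_p, rho_g, mu):.3f} m/s")
```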

  14. Analysis of Variance in the Modern Design of Experiments

    Science.gov (United States)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  15. Seasonal variance in P system models for metapopulations

    Institute of Scientific and Technical Information of China (English)

    Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri

    2007-01-01

    Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e.g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.

  16. VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM

    OpenAIRE

    RANJU KANWAR; SAMEKSHA BHASKAR

    2013-01-01

    In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for phase-shift-keying systems. Through th...

  17. Recombining binomial tree for constant elasticity of variance process

    OpenAIRE

    Hi Jun Choe; Jeong Ho Chu; So Jeong Shin

    2014-01-01

    The theme of this paper is a recombining binomial tree to price American put options when the underlying stock follows a constant elasticity of variance (CEV) process. The recombining nodes of the binomial tree are determined from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the tree boundary is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...
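    The CEV-specific node placement is beyond this abstract; the baseline idea of pricing an American put by backward induction on a recombining tree is sketched below for the constant-volatility (CRR) special case, with all market parameters assumed:

```python
# Baseline recombining-tree sketch for an American put under constant volatility
# (CRR lattice); the paper's CEV-specific node placement is not reproduced here.
import math

def american_put_crr(s0, strike, r, sigma, maturity, steps=500):
    dt = maturity / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)            # risk-neutral up probability

    # terminal payoffs at step 'steps'; node j has had j up moves
    values = [max(strike - s0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # backward induction with an early-exercise check at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            exercise = max(strike - s0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

# assumed market parameters for illustration
print(american_put_crr(s0=100.0, strike=100.0, r=0.05, sigma=0.2, maturity=1.0))
```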

  18. Estimation of broadband surface emissivity from narrowband emissivities.

    Science.gov (United States)

    Tang, Bo-Hui; Wu, Hua; Li, Chuanrong; Li, Zhao-Liang

    2011-01-01

    This work analyzed and addressed the estimation of the broadband emissivities for the spectral domains 3-14 μm (ε(3-14)) and 3-∞ μm (ε(3-∞)). Two linear narrow-to-broadband conversion models were proposed to estimate the broadband emissivities ε(3-14) and ε(3-∞) using the Moderate Resolution Imaging Spectroradiometer (MODIS) derived emissivities in three thermal infrared channels: 29 (8.4-8.7 μm), 31 (10.78-11.28 μm) and 32 (11.77-12.27 μm). Two independent spectral libraries, the Advanced Spaceborne Thermal Emission Reflection Radiometer (ASTER) spectral library and the MODIS UCSB (University of California, Santa Barbara) emissivity library, were used to calibrate and validate the proposed models. Comparisons of the broadband emissivities estimated with the proposed models against the values calculated from the spectral libraries showed that the proposed method is accurate, with a Root Mean Square Error (RMSE) between estimated and calculated broadband emissivities of less than 0.01 for both ε(3-14) and ε(3-∞).
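    A narrow-to-broadband conversion of this linear form can be calibrated by ordinary least squares against library-derived broadband emissivities; the sketch below uses synthetic placeholder data and coefficients, not the paper's calibrated model:

```python
# Sketch of a narrow-to-broadband conversion of the form
#   eps_broad ~ a0 + a29*eps_29 + a31*eps_31 + a32*eps_32,
# with coefficients fitted by least squares. All numbers are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n = 120                                             # pretend library samples
eps_29 = rng.uniform(0.85, 0.99, n)                 # MODIS band 29 emissivity
eps_31 = rng.uniform(0.90, 0.99, n)                 # band 31
eps_32 = rng.uniform(0.90, 0.99, n)                 # band 32
eps_broad = (0.1 + 0.2 * eps_29 + 0.3 * eps_31 + 0.4 * eps_32
             + 0.003 * rng.standard_normal(n))      # synthetic "library" truth

X = np.column_stack([np.ones(n), eps_29, eps_31, eps_32])
coef, *_ = np.linalg.lstsq(X, eps_broad, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - eps_broad) ** 2))
print("fitted [a0, a29, a31, a32]:", np.round(coef, 3), " RMSE:", round(rmse, 4))
```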

  19. Broadband monitoring simulation with massively parallel processors

    Science.gov (United States)

    Trubetskov, Mikhail; Amotchkina, Tatiana; Tikhonravov, Alexander

    2011-09-01

    Modern efficient optimization techniques, namely needle optimization and gradual evolution, enable one to design optical coatings of any type. Moreover, these techniques allow obtaining multiple solutions with close spectral characteristics. It is important, therefore, to develop software tools that allow one to choose a practically optimal solution from a wide variety of possible theoretical designs. A practically optimal solution provides the highest production yield when the optical coating is manufactured. Computational manufacturing is a low-cost tool for choosing a practically optimal solution. The theory of probability predicts that reliable production yield estimations require many hundreds or even thousands of computational manufacturing experiments. As a result, reliable estimation of the production yield may require too much computational time. The most time-consuming operation is calculation of the discrepancy function used by a broadband monitoring algorithm. This function is formed by a sum of terms over a wavelength grid. These terms can be computed simultaneously in different threads of computation, which opens great opportunities for parallelization. Multi-core and multi-processor systems can provide accelerations of up to several times. Additional potential for further acceleration of computations is connected with using Graphics Processing Units (GPU). A modern GPU consists of hundreds of massively parallel processors and is capable of performing floating-point operations efficiently.

  20. Broadband Spectral Analysis of Aql X-1

    CERN Document Server

    Raichur, H; Dewangan, G

    2011-01-01

    We present the results of a broadband spectral study of the transient low-mass X-ray binary Aql X-1 observed by the Suzaku and Rossi X-ray Timing Explorer satellites. The source was observed during its 2007 outburst in the High/Soft (Banana) state and in the Low/Hard (Extreme Island) state. Both the Banana state and the Extreme Island state spectra are best described by a two-component model consisting of a soft multi-colour blackbody emission likely originating from the accretion disk and a harder Comptonized emission from the boundary layer. Evidence for a hard tail (extending to ~50 keV) is found during the Banana state; this further (transient) component, accounting for at least ~1.5% of the source luminosity, is modeled by a power law. Aql X-1 is the second Atoll source after GX 13+1 to show a high-energy tail. The presence of a weak but broad Fe line provides further support for a standard accretion disk extending nearly to the neutron star surface. The input photons for the Comptonizing boundary layer could...

  1. Hollow plasmonic antennas for broadband SERS spectroscopy.

    Science.gov (United States)

    Messina, Gabriele C; Malerba, Mario; Zilio, Pierfrancesco; Miele, Ermanno; Dipalo, Michele; Ferrara, Lorenzo; De Angelis, Francesco

    2015-01-01

    The chemical environment of cells is an extremely complex and multifaceted system that includes many types of proteins, lipids, nucleic acids and various other components. With the final aim of studying these components in detail, we have developed multiband plasmonic antennas, which are suitable for highly sensitive surface enhanced Raman spectroscopy (SERS) and are activated by a wide range of excitation wavelengths. The three-dimensional hollow nanoantennas were produced on an optical resist by a secondary electron lithography approach, generated by fast ion-beam milling on the polymer and then covered with silver in order to obtain plasmonic functionalities. The optical properties of these structures have been studied through finite element analysis simulations that demonstrated the presence of broadband absorption and multiband enhancement due to the unusual geometry of the antennas. The enhancement was confirmed by SERS measurements, which showed a large enhancement of the vibrational features both in the case of resonant excitation and out-of-resonance excitation. Such characteristics indicate that these structures are potential candidates for plasmonic enhancers in multifunctional opto-electronic biosensors.

  2. Broadband Acoustic Cloak for Ultrasound Waves

    CERN Document Server

    Zhang, Shu; Fang, Nicholas

    2010-01-01

    Invisibility devices based on coordinate transformation have opened up a new field of considerable interest. Such a device is proposed to render the hidden object undetectable under the flow of light or sound, by guiding and controlling the wave path through an engineered space surrounding the object. We present here the first practical realization of a low-loss and broadband acoustic cloak for underwater ultrasound. This metamaterial cloak is constructed with a network of acoustic circuit elements, namely serial inductors and shunt capacitors. Our experiment clearly shows that the acoustic cloak can effectively bend the ultrasound waves around the hidden object, with reduced scattering and shadow. Due to the non-resonant nature of the building elements, this low-loss (~6 dB/m) cylindrical cloak exhibits excellent invisibility over a broad frequency range from 52 to 64 kHz in the measurements. The low visibility of the cloaked object for underwater ultrasound sheds light on the fundamental understanding of ma...

  3. Broadband Linear Polarization of Jupiter Trojans

    CERN Document Server

    Bagnulo, S; Stinson, A; Christou, A; Borisov, G B

    2016-01-01

    Trojan asteroids orbit in the Lagrange points of the system Sun-planet-asteroid. Their dynamical stability makes their physical properties important proxies for the early evolution of our solar system. To study their origin, we want to characterize the surfaces of Jupiter Trojan asteroids and check possible similarities with objects of the main belt and of the Kuiper Belt. We have obtained high-accuracy broad-band linear polarization measurements of six Jupiter Trojans of the L4 population and tried to estimate the main features of their polarimetric behaviour. We have compared the polarimetric properties of our targets among themselves, and with those of other atmosphere-less bodies of our solar system. Our sample shows approximately homogeneous polarimetric behaviour, although some distinct features are found among the targets. In general, the polarimetric properties of Trojan asteroids are similar to those of D- and P-type main-belt asteroids. No sign of coma activity is detected in any of the observed objects. A...

  4. Silver conical helix broadband plasmonic nanoantenna

    Science.gov (United States)

    Sobhkhiz, Nader; Moshaii, Ahmad

    2014-01-01

    The discrete dipole approximation method is used to investigate the optical extinction spectra and the electric field enhancement of Ag conical helix (CH) nanostructures. Based on an expected similarity between the radio-frequency response of the antenna and the infrared and visible response of the nanoantenna, the Ag CH nanostructures were designed as a broadband nanoantenna. It is shown that by engineering the structural parameters of the CH nanostructure, its plasmonic response can be tailored to a desired application. In addition, the change of the substrate material for the nanohelix growth is shown to have a negligible effect on the resonance peaks of the conical nanohelix. However, varying the surrounding medium can lead to considerable red-shifting of the plasmonic resonance peaks (up to 230 nm). Calculations of the near field around the helical nanoantenna show that the smaller and the larger sides of the CH are related to the plasmonic resonance peaks at low and high wavelengths, respectively. The calculated extinction spectrum has also been compared with similar experimental data for a 2-pitch Ag conical nanohelix, and a relatively good agreement between the numerical calculation and the experiment has been obtained.

  5. Broadband Observations of High Redshift Blazars

    CERN Document Server

    Paliya, Vaidehi S; Fabian, A C; Stalin, C S

    2016-01-01

    We present a multi-wavelength study of four high redshift blazars, S5 0014+81 ($z=3.37$), CGRaBS J0225+1846 ($z=2.69$), BZQ J1430+4205 ($z=4.72$), and 3FGL J1656.2$-$3303 ($z=2.40$), using the quasi-simultaneous data from {\it Swift}, {\it NuSTAR}, and {\it Fermi}-Large Area Telescope (LAT) and also the archival {\it XMM-Newton} observations. Other than 3FGL J1656.2$-$3303, none of the sources were previously known as $\gamma$-ray emitters, and our analysis of $\sim$7.5 years of LAT data reveals the first detection of statistically significant $\gamma$-ray emission from CGRaBS J0225+1846. We generate the broadband spectral energy distributions (SED) of all the objects, centred at the epoch of the {\it NuSTAR} observations, and reproduce them using a one-zone leptonic emission model. The optical$-$UV emission in all the objects can be explained by the radiation from the accretion disk, whereas the X-ray to $\gamma$-ray window of the SEDs is found to be dominated by inverse Compton scattering off the broad line reg...

  6. Broadband Infrared Heterodyne Spectrometer: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, C G; Cunningham, C T; Tringe, J W

    2010-12-16

    This report summarizes the most important results of our effort to develop a new class of infrared spectrometers based on a novel broadband heterodyne design. Our results indicate that this approach could lead to near-room-temperature operation with performance limited only by the quantum noise carried by the incoming signal. Using a model quantum-well infrared photodetector (QWIP), we demonstrated key performance features of our approach. For example, we directly measured the beat frequency signal generated by superimposing local oscillator (LO) light of one frequency and signal light of another through a spectrograph, by injecting the LO light at a laterally displaced input location. In parallel with the development of this novel spectrometer, we modeled a new approach to reducing detector volume through plasmonic resonance effects. Since dark current scales directly with detector volume, this "photon compression" can directly lead to lower currents. Our calculations indicate that dark current can be reduced by up to two orders of magnitude in an optimized "superlens" structure. Taken together, our spectrometer and dark current reduction strategies provide a promising path toward room temperature operation of a mid-wave and possibly long-wave infrared spectrometer.

  7. Broadband access technology for passive optical network

    Science.gov (United States)

    Chi, Sien; Yeh, Chien-Hung; Chow, Chi-Wai

    2009-01-01

    We will introduce four related topics in fiber access network technologies for PONs. First, an upstream signal power equalizer using a FP-LD in the optical line terminal is proposed and designed for the TDM-PON; a 20 dB dynamic upstream power range (from -5 to -25 dBm) with a maximal power variation of 1.7 dB is achieved. Fiber-fault protection is also an important issue for PONs, so we investigate a simple and cost-effective TDM/WDM PON system with a self-protection function. Next, an RSOA-based colorless WDM-PON is demonstrated: we propose injecting a cost-effective CW light source into the RSOA for 2.5 Gb/s upstream transmission in the WDM-PON, together with a self-healing mechanism against fiber faults. Finally, we investigate 4 Gb/s OFDM-QAM for both upstream and downstream traffic in a long-reach WDM/TDM PON system over 100 km of transmission without dispersion compensation. As a result, we believe that these key access technologies are emerging and useful for the next generation of broadband FTTH networks.

  8. Silicon Micromachined Sensor for Broadband Vibration Analysis

    Science.gov (United States)

    Gutierrez, Adolfo; Edmans, Daniel; Cormeau, Chris; Seidler, Gernot; Deangelis, Dave; Maby, Edward

    1995-01-01

    The development of a family of silicon based integrated vibration sensors capable of sensing mechanical resonances over a broad range of frequencies with minimal signal processing requirements is presented. Two basic general embodiments of the concept were designed and fabricated. The first design was structured around an array of cantilever beams and fabricated using the ARPA sponsored multi-user MEMS processing system (MUMPS) process at the Microelectronics Center of North Carolina (MCNC). As part of the design process for this first sensor, a comprehensive finite element analysis of the resonant modes and stress distribution was performed using PATRAN. The dependence of strain distribution and resonant frequency response as a function of Young's modulus in the Poly-Si structural material was studied. Analytical models were also studied. In-house experimental characterization using optical interferometry techniques was performed under controlled low pressure conditions. A second design, intended to operate in a non-resonant mode and capable of broadband frequency response, was proposed and developed around the concept of a cantilever beam integrated with a feedback control loop to produce a null-mode vibration sensor. A proprietary process was used to integrate a metal-oxide semiconductor (MOS) sensing device, with actuators and a cantilever beam, as part of a compatible process. Both devices, once incorporated as part of multifunction data acquisition and telemetry systems, will constitute a useful system for NASA launch vibration monitoring operations. Satellite and other space structures can benefit from the sensor for mechanical condition monitoring functions.

  9. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\\log k)$ time, which is optimal even on the word RAM.
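    The scheme described above is somewhat involved; as a point of reference, the following minimal sketch implements the closely related priority-sampling idea (weight-over-uniform priorities, keep the k largest, estimate each kept weight as max(w_i, threshold)), which also yields unbiased subset-sum estimates from a bounded-size sample. It is an illustration under stated assumptions, not the paper's algorithm, and all names are illustrative.

        import heapq
        import random

        def priority_sample(stream, k):
            """Keep the k items with the largest priorities q_i = w_i / u_i, u_i ~ U(0,1].

            Returns {item: estimated_weight}; estimates are max(w_i, threshold),
            where threshold is the (k+1)-th largest priority seen.
            """
            heap = []          # min-heap of the k+1 largest (priority, idx, item, weight) tuples
            for idx, (item, weight) in enumerate(stream):
                priority = weight / max(random.random(), 1e-12)
                if len(heap) < k + 1:
                    heapq.heappush(heap, (priority, idx, item, weight))
                elif priority > heap[0][0]:
                    heapq.heapreplace(heap, (priority, idx, item, weight))
            if len(heap) <= k:                 # saw at most k items: weights are exact
                return {item: weight for _, _, item, weight in heap}
            threshold = heapq.heappop(heap)[0]  # (k+1)-th largest priority
            return {item: max(weight, threshold) for _, _, item, weight in heap}

        def subset_sum_estimate(sample, subset):
            # Unbiased estimate of the total weight of the query subset
            return sum(w for item, w in sample.items() if item in subset)

    For example, keeping k = 200 items from a stream of 10,000 weighted records and summing the stored estimates of the records that fall in a query subset gives an unbiased estimate of that subset's total weight.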

  10. Dynamic Programming Using Polar Variance for Image Segmentation.

    Science.gov (United States)

    Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J

    2016-10-06

    When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because, if size is left unconstrained, the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active contour segmentation techniques in a series of tests. The tests covered robustness to additive Gaussian noise, segmentation accuracy on different grayscale images and, finally, robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
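    One plausible reading of the polar-variance feature is the spread of intensities encountered between the user-selected origin and the candidate pixel. The sketch below computes the variance of nearest-neighbour samples along that segment; it is an illustrative simplification, not the authors' exact polar-region definition, and all names are illustrative.

        import numpy as np

        def radial_variance(image, origin, pixel, n_samples=64):
            """Variance of intensities sampled along the segment origin -> pixel."""
            r0, c0 = origin
            r1, c1 = pixel
            rows = np.clip(np.linspace(r0, r1, n_samples).round().astype(int), 0, image.shape[0] - 1)
            cols = np.clip(np.linspace(c0, c1, n_samples).round().astype(int), 0, image.shape[1] - 1)
            return image[rows, cols].astype(float).var()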

  11. Relationship between Allan variances and Kalman Filter parameters

    Science.gov (United States)

    Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.

    1984-01-01

    A relationship was constructed between the Allan variance parameters (H_2, H_1, H_0, H_-1 and H_-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start with, the meaning of these Allan variance parameters and how they are arrived at for a given frequency source is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
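    As background on how Allan variance values are obtained from measured phase, the following minimal sketch computes the standard overlapping Allan variance from phase samples. It is a generic textbook formula, not code from the paper, and the names are illustrative.

        import numpy as np

        def overlapping_allan_variance(phase, tau0, averaging_factors):
            """sigma_y^2(m*tau0) from phase samples x[i] taken every tau0 seconds.

            sigma_y^2(tau) = sum_i (x[i+2m] - 2*x[i+m] + x[i])^2 / (2 * tau^2 * (N - 2m))
            """
            x = np.asarray(phase, dtype=float)
            n = len(x)
            result = {}
            for m in averaging_factors:
                if n < 2 * m + 1:
                    continue                       # not enough samples for this averaging factor
                d = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]
                tau = m * tau0
                result[tau] = np.sum(d * d) / (2.0 * tau ** 2 * (n - 2 * m))
            return result

    The dependence of sigma_y^2 on tau can then be fitted to recover the noise coefficients that the abstract relates to the Kalman filter model parameters.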

  12. Measuring primordial non-gaussianity without cosmic variance

    CERN Document Server

    Seljak, Uros

    2008-01-01

    Non-gaussianity in the initial conditions of the universe is one of the most powerful mechanisms to discriminate among the competing theories of the early universe. Measurements using the bispectrum of cosmic microwave background anisotropies are limited by cosmic variance, i.e. the available number of modes. Recent work has emphasized the possibility of probing non-gaussianity of the local type using the scale dependence of large-scale bias from highly biased tracers of large-scale structure. However, this power spectrum method is also limited by cosmic variance, the finite number of structures on the largest scales, and by the partial degeneracy with other cosmological parameters that can mimic the same effect. Here we propose an alternative method that solves both of these problems. It is based on the idea that on large scales halos are biased, but not stochastic, tracers of dark matter: by correlating a highly biased tracer of large scale structure against an unbiased tracer one eliminates the cosmic variance error, wh...

  13. Genetic variance of tolerance and the toxicant threshold model.

    Science.gov (United States)

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.

  14. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  15. Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse

    Institute of Scientific and Technical Information of China (English)

    Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou

    2016-01-01

    In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.

  16. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
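    A lightweight stand-in for the modality classification step is to smooth each location's ensemble of values and count density peaks. The sketch below does this with a kernel density estimate and a prominence threshold; the threshold is an illustrative knob, not the confidence metric developed in the paper, and the names are illustrative.

        import numpy as np
        from scipy.stats import gaussian_kde
        from scipy.signal import find_peaks

        def classify_modality(samples, grid_size=256, rel_prominence=0.05):
            """Label an ensemble distribution as 'unimodal' or 'multimodal'."""
            samples = np.asarray(samples, dtype=float)
            grid = np.linspace(samples.min(), samples.max(), grid_size)
            density = gaussian_kde(samples)(grid)
            peaks, _ = find_peaks(density, prominence=rel_prominence * density.max())
            return "unimodal" if len(peaks) <= 1 else "multimodal"

    Applied per grid point at the high-variance locations, this reproduces the kind of unimodal/multimodal map the abstract describes, though with a much cruder notion of confidence.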

  17. Automated Variance Reduction Applied to Nuclear Well-Logging Problems

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, John C [ORNL; Peplow, Douglas E. [ORNL; Evans, Thomas M [ORNL

    2009-01-01

    The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is

  18. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, the epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  19. Genetically controlled environmental variance for sternopleural bristles in Drosophila melanogaster - an experimental test of a heterogeneous variance model

    DEFF Research Database (Denmark)

    Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker

    2007-01-01

    quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...

  20. 78 FR 11793 - Minimum Internal Control Standards

    Science.gov (United States)

    2013-02-20

    ... National Indian Gaming Commission 25 CFR Part 543 RIN 3141-AA27 Minimum Internal Control Standards AGENCY... (NIGC) proposes to amend its minimum internal control standards for Class II gaming under the Indian... Internal Control Standards. 64 FR 590. The rule added a new part to the Commission's...

  1. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)]

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  2. Stochastic variational approach to minimum uncertainty states

    CERN Document Server

    Illuminati, F; Viola, L

    1995-01-01

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schrödinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials.

  3. Modelling and control of broadband traffic using multiplicative multifractal cascades

    Indian Academy of Sciences (India)

    P Murali Krishna; Vikram M Gadre; Uday B Desai

    2002-12-01

    We present results on the modelling and synthesis of broadband traffic processes, namely Ethernet inter-arrival times, using the VVGM (variable variance Gaussian multiplier) multiplicative multifractal model. This model is shown to be more appropriate for modelling network traffic which possesses time-varying scaling/self-similarity and burstiness. The model gives a simple and efficient technique to synthesise Ethernet inter-arrival times. The results of the detailed statistical and multifractal analysis performed on the original and the synthesised traces are presented, and the performance is compared with other models in the literature, such as the Poisson process and the Multifractal Wavelet Model (MWM) process. It is also shown empirically that a single server queue preserves the multifractal character of the process, by analysing its inter-departure process when fed with the multifractal traces. The existence of a global scaling exponent for multifractal cascades and its application in queueing theory are discussed. We propose tracking and control algorithms for controlling network congestion with bursty traffic modelled by multifractal cascade processes, characterised by the Hölder exponents, whose value over an interval indicates the burstiness of the traffic at that point. This value has to be estimated and used for the estimation of congestion and for predictive control of the traffic in broadband networks. The estimation can be done by employing wavelet transforms and a Kalman filter based predictor for predicting the burstiness of the traffic.
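    To make the construction concrete, the sketch below generates a generic dyadic multiplicative cascade in which each interval's mass is split by a random, mass-conserving multiplier. The stage-dependent multiplier variance only loosely mimics the "variable variance" idea of the VVGM model; the schedule and all parameter values are illustrative assumptions, not the paper's calibration.

        import numpy as np

        def multiplicative_cascade(n_stages, sigma=0.1, rng=None):
            """Dyadic multiplicative cascade of length 2**n_stages (mass-conserving)."""
            rng = np.random.default_rng() if rng is None else rng
            mass = np.array([1.0])
            for stage in range(n_stages):
                sigma_j = sigma / (stage + 1)      # illustrative stage-dependent variance schedule
                m = np.clip(rng.normal(0.5, sigma_j, size=mass.size), 0.05, 0.95)
                mass = np.column_stack((mass * m, mass * (1.0 - m))).ravel()
            return mass    # rescale to match the mean inter-arrival time being modelled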

  4. Broadband assessment of degree-2 gravitational changes from GRACE and other estimates, 2002-2015

    Science.gov (United States)

    Chen, J. L.; Wilson, C. R.; Ries, J. C.

    2016-03-01

    Space geodetic measurements, including the Gravity Recovery and Climate Experiment (GRACE), satellite laser ranging (SLR), and Earth rotation provide independent and increasingly accurate estimates of variations in Earth's gravity field Stokes coefficients ΔC21, ΔS21, and ΔC20. Mass redistribution predicted by climate models provides another independent estimate of air and water contributions to these degree-2 changes. SLR has been a successful technique in measuring these low-degree gravitational changes. Broadband comparisons of independent estimates of ΔC21, ΔS21, and ΔC20 from GRACE, SLR, Earth rotation, and climate models during the GRACE era from April 2002 to April 2015 show that the current GRACE release 5 solutions of ΔC21 and ΔS21 provided by the Center for Space Research (CSR) are greatly improved over earlier solutions and agree remarkably well with other estimates, especially on ΔS21 estimates. GRACE and Earth rotation ΔS21 agreement is exceptionally good across a very broad frequency band from intraseasonal, seasonal, to interannual and decadal periods. SLR ΔC20 estimates remain superior to GRACE and Earth rotation estimates, due to the large uncertainty in GRACE ΔC20 solutions and particularly high sensitivity of Earth rotation ΔC20 estimates to errors in the wind fields. With several estimates of ΔC21, ΔS21, and ΔC20 variations, it is possible to estimate broadband noise variance and noise power spectra in each, given reasonable assumptions about noise independence. The GRACE CSR release 5 solutions clearly outperform other estimates of ΔC21 and ΔS21 variations with the lowest noise levels over a broad band of frequencies.
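    The noise-variance separation alluded to above can be illustrated with the classic three-cornered-hat construction: given three series that observe the same signal with mutually independent noise, the pairwise difference variances determine each series' noise variance. The sketch below is a generic illustration under that independence assumption, not the specific broadband analysis used in the study; names are illustrative.

        import numpy as np

        def three_cornered_hat(a, b, c):
            """Noise variances of three series observing the same signal (independent noise)."""
            a, b, c = (np.asarray(x, dtype=float) for x in (a, b, c))
            vab = np.var(a - b, ddof=1)    # = var_a + var_b, since the common signal cancels
            vac = np.var(a - c, ddof=1)
            vbc = np.var(b - c, ddof=1)
            var_a = 0.5 * (vab + vac - vbc)
            var_b = 0.5 * (vab + vbc - vac)
            var_c = 0.5 * (vac + vbc - vab)
            return var_a, var_b, var_c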

  5. Chiral Molecules Revisited by Broadband Microwave Spectroscopy

    Science.gov (United States)

    Schnell, Melanie

    2014-06-01

    Chiral molecules have fascinated chemists for more than 150 years. While their physical properties are to a very good approximation identical, the two enantiomers of a chiral molecule can have completely different (bio)chemical activities. For example, the right-handed enantiomer of carvone smells of spearmint while the left-handed one smells of caraway. In addition, the active components of many drugs are of one specific handedness, such as in the case of ibuprofen. However, in nature as well as in pharmaceutical applications, chiral molecules often exist in mixtures with other chiral molecules. The analysis of these complex mixtures to identify the molecular components, to determine which enantiomers are present, and to measure the enantiomeric excesses (ee) remains a challenging task for analytical chemistry, despite its importance for modern drug development. We present here a new method of differentiating enantiomers of chiral molecules in the gas phase based on broadband rotational spectroscopy. The phase of the acquired signal bears the signature of the enantiomer, as it depends upon the combined quantity, μ_a μ_b μ_c, which is of opposite sign between enantiomers. It thus also provides information on the absolute configuration of the particular enantiomer. Furthermore, the signal amplitude is proportional to the ee. A significant advantage of our technique is its inherent mixture compatibility due to the fingerprint-like character of rotational spectra. In this contribution, we will introduce the technique and present our latest results on chiral molecule spectroscopy and enantiomer differentiation. D. Patterson, M. Schnell, J.M. Doyle, Nature 497 (2013) 475-477; V.A. Shubert, D. Schmitz, D. Patterson, J.M. Doyle, M. Schnell, Angewandte Chemie International Edition 53 (2014) 1152-1155

  6. A graphene-based broadband optical modulator

    Science.gov (United States)

    Liu, Ming; Yin, Xiaobo; Ulin-Avila, Erick; Geng, Baisong; Zentgraf, Thomas; Ju, Long; Wang, Feng; Zhang, Xiang

    2011-06-01

    Integrated optical modulators with high modulation speed, small footprint and large optical bandwidth are poised to be the enabling devices for on-chip optical interconnects. Semiconductor modulators have therefore been heavily researched over the past few years. However, the device footprint of silicon-based modulators is of the order of millimetres, owing to its weak electro-optical properties. Germanium and compound semiconductors, on the other hand, face the major challenge of integration with existing silicon electronics and photonics platforms. Integrating silicon modulators with high-quality-factor optical resonators increases the modulation strength, but these devices suffer from intrinsic narrow bandwidth and require sophisticated optical design; they also have stringent fabrication requirements and limited temperature tolerances. Finding a complementary metal-oxide-semiconductor (CMOS)-compatible material with adequate modulation speed and strength has therefore become a task of not only scientific interest, but also industrial importance. Here we experimentally demonstrate a broadband, high-speed, waveguide-integrated electroabsorption modulator based on monolayer graphene. By electrically tuning the Fermi level of the graphene sheet, we demonstrate modulation of the guided light at frequencies over 1 GHz, together with a broad operation spectrum that ranges from 1.35 to 1.6 µm under ambient conditions. The high modulation efficiency of graphene results in an active device area of merely 25 µm², which is among the smallest to date. This graphene-based optical modulation mechanism, with combined advantages of compact footprint, low operation voltage and ultrafast modulation speed across a broad range of wavelengths, can enable novel architectures for on-chip optical communications.

  7. Broadband homodecoupled heteronuclear multiple bond correlation spectroscopy.

    Science.gov (United States)

    Sakhaii, Peyman; Haase, Burkhard; Bermel, Wolfgang

    2013-03-01

    A general concept for removing proton-proton scalar J couplings in 2D NMR spectroscopy is proposed. The idea is based on introducing an additional J resolved dimension into the pulse sequence of a conventional 2D experiment to design a pseudo 3D NMR experiment. The practical demonstration uses the widely used gradient coherence-selected heteronuclear long-range correlation experiment (HMBC). We refer to this type of pulse sequence as the tilt HMBC experiment. For every (13)C chemical shift evolution increment, a homonuclear J resolved experiment is recorded. The long-range defocusing delay of the HMBC pulse sequence is exploited to implement this building block. The J resolved evolution period is incremented in a way very similar to ACCORDION spectroscopy, to accommodate the buildup of heteronuclear long-range antiphase magnetisation as well. After Fourier transformation in all dimensions the spectra are tilted in the J resolved dimension. Finally, a projection along the J resolved dimension is calculated, leading to the near-complete disappearance of proton-proton spin multiplicities in the 2D tilt HMBC spectrum. The tilt HMBC experiment combines sensitivity with a simple experimental setup and can be recorded with short recycle delays when combined with Ernst angle excitation. The recorded spectra display singlet proton signals for long-range correlation peaks, making unambiguous signal assignment much easier. In addition to the new experiment, a simple processing technique is applied to efficiently suppress the noise originating from forward linear prediction in the indirect evolution dimensions. In case of issues with fast repetition times, probe heating and RF power handling, most of the RF pulses can be replaced by broadband, frequency-swept pulses operating at much lower power.

  8. The Representation of a Broadband Vector Field

    Institute of Scientific and Technical Information of China (English)

    Qunyan Ren; Jean Pierre Hermand; Shengchun Piao

    2011-01-01

    Compared to a scalar pressure sensor, a vector sensor can provide a higher signal-to-noise ratio (SNR) signal and more detailed information on the sound field. Studies on vector sensors and their applications have become a hot topic, and research on the representation of a vector field is highly relevant for extending the scope of vector sensor technology. This paper discusses the range-frequency distribution of the vector field due to a broadband acoustic source moving in a shallow-water waveguide, such as the self noise of a surface ship, and the vector extension of the waveguide impulse response measured over a limited frequency range using an active source of known waveform. From theoretical analysis and numerical simulation, the range-frequency representation of a vector field exhibits an interference structure qualitatively similar to that of the corresponding pressure field but, being quantitatively different, provides additional information on the waveguide, especially through the vertical component. For the range-frequency representation, physical quantities that can better exhibit the interference characteristics of the waveguide are the products of pressure and particle velocity and of pressure and pressure gradient. An image processing method to effectively detect and isolate individual striations from an interference structure is reviewed briefly. The representation of the vector impulse response is discussed for two different measurement systems, based on particle velocity and on pressure gradient. The vector impulse response representation can provide not only more information than pressure alone but even more than the range-frequency representation.

  9. Regression between earthquake magnitudes having errors with known variances

    Science.gov (United States)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the estimates of x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that are discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that the new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
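    For reference, the classical closed-form solution for this errors-in-variables setting (homoscedastic errors with a known variance ratio) is Deming regression. The sketch below implements that textbook estimator; it is offered only as a baseline for the kind of fit discussed, not the paper's new least-squares derivation, and the names are illustrative.

        import numpy as np

        def deming_fit(X, Y, delta=1.0):
            """Best-fit line y = a*x + b when both X and Y carry measurement error.

            delta = var(errors in Y) / var(errors in X); delta = 1 gives orthogonal regression.
            """
            X = np.asarray(X, dtype=float)
            Y = np.asarray(Y, dtype=float)
            sxx = np.var(X, ddof=1)
            syy = np.var(Y, ddof=1)
            sxy = np.cov(X, Y, ddof=1)[0, 1]
            a = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                             + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
            b = Y.mean() - a * X.mean()
            return a, b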

  10. How Good Is Crude MDL for Solving the Bias-Variance Dilemma? An Empirical Investigation Based on Bayesian Networks

    Science.gov (United States)

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204
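    In its common two-part approximation, the crude MDL score of a candidate model is the code length of the data given the fitted model plus a parameter-cost term, and the candidate with the smallest total is selected. The sketch below uses that standard approximation; the candidate objects and their attributes are hypothetical placeholders, not the paper's Bayesian-network implementation.

        from math import log

        def crude_mdl(log_likelihood, n_params, n_samples):
            # two-part description length: -log P(D | fitted model) + (k/2) * log(n)
            return -log_likelihood + 0.5 * n_params * log(n_samples)

        # Hypothetical usage: pick the candidate with the smallest description length.
        # Each candidate is assumed to expose .loglik (fitted log-likelihood) and .k (free parameters).
        # best = min(candidates, key=lambda m: crude_mdl(m.loglik, m.k, n_samples))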

  11. Two-dimensional finite-element temperature variance analysis

    Science.gov (United States)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results, so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.

  12. Risk Management - Variance Minimization or Lower Tail Outcome Elimination

    DEFF Research Database (Denmark)

    Aabo, Tom

    2002-01-01

    This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value...

  13. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we show that for this class of processes the optimal endowment and strategy can be expressed more explicitly. The corresponding formulas involve the moment resp. cumulant generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim. An example...

  14. Local orbitals by minimizing powers of the orbital variance

    DEFF Research Database (Denmark)

    Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;

    2011-01-01

    It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may...

  15. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  16. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    points in P. This problem is a crucial part of modern ecological analyses; each point in P represents a species in d-dimensional trait space, and the goal is to compute the statistics of a geometric measure on this trait space, when subsets of species are selected under random processes. We present efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, the mean pairwise...

  17. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others

    1997-05-01

    AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  18. Critical points of multidimensional random Fourier series: Variance estimates

    Science.gov (United States)

    Nicolaescu, Liviu I.

    2016-08-01

    We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C, C' such that as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C' ε^m.

  19. Multivariate variance targeting in the BEKK-GARCH model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus S.; Rahbæk, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth-order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are necessary...

  20. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...

  1. A Mean-Variance Portfolio Optimal Under Utility Pricing

    Directory of Open Access Journals (Sweden)

    Hürlimann Werner

    2006-01-01

    Full Text Available An expected utility model of asset choice, which takes into account asset pricing, is considered. The obtained portfolio selection problem under utility pricing is solved under several assumptions, including quadratic utility, exponential utility and multivariate symmetric elliptical returns. The obtained unique solution, called the optimal utility portfolio, is shown to be mean-variance efficient in the classical sense. Various questions, including conditions for complete diversification and the behavior of the optimal portfolio under univariate and multivariate ordering of risks as well as risk-adjusted performance measurement, are discussed.

  2. The minimum work requirement for distillation processes

    Energy Technology Data Exchange (ETDEWEB)

    Yunus, Cerci; Yunus, A. Cengel; Byard, Wood [Nevada Univ., Las Vegas, NV (United States). Dept. of Mechanical Engineering

    2000-07-01

    A typical ideal distillation process is proposed and analyzed using the first and second laws of thermodynamics, with particular attention to the minimum work requirement for the individual processes. The distillation process consists of an evaporator, a condenser, a heat exchanger, and a number of heaters and coolers. Several Carnot engines are also employed to perform the heat interactions of the distillation process with the surroundings and to determine the minimum work requirement for the processes. The Carnot engines give the maximum possible work output or the minimum work input associated with the processes, and therefore the net result of these inputs and outputs leads to the minimum work requirement for the entire distillation process. It is shown that the minimum work relation for the distillation process is the same as the minimum work input relation found by Cerci et al [1] for an incomplete separation of incoming saline water, and depends only on the properties of the incoming saline water and the outgoing pure water and brine. Certain aspects of the minimum work relation found are also discussed briefly. (authors)

  3. EXPERIMENTAL STUDY OF MINIMUM IGNITION TEMPERATURE

    Directory of Open Access Journals (Sweden)

    Igor WACHTER

    2015-12-01

    Full Text Available The aim of this paper is an analysis of the minimum ignition temperature of a dust layer and the minimum ignition temperature of a dust cloud. These values can be used to identify threats in industrial production and civil engineering, wherever a layer of combustible dust could occur. The research was performed on spent coffee grounds. Tests were performed according to EN 50281-2-1:2002 Methods for determining the minimum ignition temperatures of dust (Method A). The objective of Method A is to determine the minimum temperature at which ignition or decomposition of dust occurs during thermal straining on a hot plate at a constant temperature. The highest minimum smouldering and carbonating temperature of spent coffee grounds for a 5 mm high layer was determined to lie in the interval from 280 °C to 310 °C over 600 seconds. Method B is used to determine the minimum ignition temperature of a dust cloud. The minimum ignition temperature of the studied dust was determined to be 470 °C (air pressure 50 kPa, sample weight 0.3 g).

  4. Broadband superior electromagnetic absorption of a discrete-structure microwave coating

    Science.gov (United States)

    Duan, Yuping; Xi, Qun; Liu, Wei; Wang, Tongmin

    2016-10-01

    A method of improving the electromagnetic (EM) absorption property of a conventional microwave absorber (CMA) is proposed here. The structural design process was mainly concerned with systematic analysis of the impedance matching characteristics and the induced current. By processing a CMA carbonyl-iron powder (CIP) coating into many isolated regions, the resulting discrete-structure microwave absorber (DMA) showed a much better absorption property than the corresponding CMA. For a thickness of only 2.0 mm and a component content of 33 wt%, the band with reflection loss below -10 dB widened from 6-7 GHz to 7-13 GHz, and the minimum reflection loss improved from -12.5 dB to -32 dB through the discrete-structure process. The microwave absorption properties of coatings with different component contents and thicknesses were investigated. The minimum reflection peaks tended to shift towards the lower frequency region as CIP content or coating thickness increased. By adjusting these three factors, a high-performance broadband absorber was produced.

  5. Spatial-temporal dynamics of broadband terahertz Bessel beam propagation

    Science.gov (United States)

    Semenova, V. A.; Kulya, M. S.; Bespalov, V. G.

    2016-08-01

    The unique properties of narrowband and broadband terahertz Bessel beams have led to a number of applications in different fields, for example the enhancement of depth of focus and resolution in terahertz imaging. Broadband terahertz Bessel beams can probably also be used to minimize diffraction in short-range broadband terahertz communications. For this purpose, a study of the spatial-temporal dynamics of broadband terahertz Bessel beams is needed. Here we present a simulation-based study of broadband Bessel beams generated by a conical axicon lens and propagating in a non-dispersive medium. An algorithm based on scalar diffraction theory was used to obtain the spatial amplitude and phase distributions of the Bessel beam in the frequency range from 0.1 to 3 THz at distances of 10-200 mm from the axicon. The Bessel beam field is studied for the different spectral components of the initial pulse. The simulation results show that for the given parameters of the axicon lens one can obtain Gauss-Bessel beam generation in the spectral range from 0.1 to 3 THz. The length of non-diffracting propagation for different spectral components was measured, and it was shown that for all spectral components of the initial pulse this length is about 130 mm.

  6. The Broadband Spectral Variability of Holmberg IX X-1

    CERN Document Server

    Walton, D J; Harrison, F A; Middleton, M J; Fabian, A C; Bachetti, M; Barret, D; Miller, J M; Ptak, A; Rana, V; Stern, D; Tao, L

    2016-01-01

    We present the results from four new broadband X-ray observations of the extreme ultraluminous X-ray source Holmberg IX X-1 ($L_{\\rm{X}} > 10^{40}$ erg s$^{-1}$), performed by the $Suzaku$ and $NuSTAR$ observatories in coordination. Combined with the two prior observations coordinating $XMM$-$Newton$, $Suzaku$ and $NuSTAR$, we now have broadband observations of this remarkable source from six separate epochs. Two of these new observations probe lower fluxes than seen previously, allowing us to extend our knowledge of the broadband spectral variability exhibited by Holmberg IX X-1. The broadband spectra are well fit by two thermal blackbody components, which dominate the emission below 10 keV, as well as a steep ($\\Gamma \\sim 3.5$) powerlaw tail which dominates above $\\sim$15 keV. Remarkably, while the 0.3-10.0 keV flux varies by a factor of $\\sim$3 between all these epochs, the 15-40 keV flux varies by only $\\sim$20%. Although the spectral variability is strongest in the $\\sim$1-10 keV band, the broadband var...

  7. Does the Minimum Wage Cause Inefficient Rationing?

    Institute of Scientific and Technical Information of China (English)

    何满辉; 梁明秋

    2008-01-01

    By not allowing wages to clear the labor market, the minimum wage could cause workers with low reservation wages to be rationed out while equally skilled workers with higher reservation wages are employed. I find that proxies for the reservation wages of unskilled workers in high-impact states did not rise relative to reservation wages in other states, suggesting that the increase in the minimum wage did not cause jobs to be allocated less efficiently. However, even if rationing is efficient, the minimum wage can still entail other efficiency costs.

  8. Minimum emittance in TBA and MBA lattices

    Science.gov (United States)

    Xu, Gang; Peng, Yue-Mei

    2015-03-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) cells and even multiple bend achromats (MBA) have been considered. This paper derives theoretically the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of $3^{1/3}$ larger than that of the outer dipoles. We also calculate the conditions for attaining the minimum emittance of a TBA, related to the phase advance, in some special cases using a purely mathematical method. These results may give some direction for lattice design.
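    For context, the single-dipole theoretical minimum emittance against which TBA/MBA cells are usually benchmarked is the standard TME result (quoted here as a textbook reference formula, not as this paper's derivation): $\varepsilon_{\rm TME} = \frac{C_q \gamma^2 \theta^3}{12\sqrt{15}\,J_x}$, where $\theta$ is the bending angle per dipole, $\gamma$ the Lorentz factor, $J_x$ the horizontal damping partition number, and $C_q = \frac{55}{32\sqrt{3}}\frac{\hbar}{m_e c} \approx 3.83\times 10^{-13}$ m. In the usual derivation the $3^{1/3}$ factor arises because the outer dipoles, matched to dispersion-free achromat ends, have a minimum-emittance coefficient three times larger than the inner TME-like dipoles ($1/(4\sqrt{15})$ versus $1/(12\sqrt{15})$), so equal emittance contributions require $\theta_{\rm inner} = 3^{1/3}\,\theta_{\rm outer}$.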

  9. Infinite Variance in Fermion Quantum Monte Carlo Calculations

    CERN Document Server

    Shi, Hao

    2015-01-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
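    As a toy illustration of the statistical phenomenon described above (of the statistics only, not of the QMC algorithms themselves), the Python sketch below estimates the mean of a heavy-tailed distribution whose variance is infinite and compares the naive Monte Carlo error bar with the actual run-to-run scatter; the distribution and sample sizes are assumptions chosen only to make the effect visible.

        import numpy as np

        # A Pareto distribution with tail index 1 < alpha < 2 has a finite mean but an
        # infinite variance, so the usual sigma/sqrt(N) error bar misrepresents the
        # true run-to-run scatter of the estimated mean.
        rng = np.random.default_rng(0)
        alpha, n_samples, n_runs = 1.5, 100_000, 200

        means, naive_errs = [], []
        for _ in range(n_runs):
            x = rng.pareto(alpha, n_samples) + 1.0     # classical Pareto on [1, inf)
            means.append(x.mean())
            naive_errs.append(x.std(ddof=1) / np.sqrt(n_samples))

        print(f"exact mean                 : {alpha / (alpha - 1.0):.3f}")
        print(f"average naive error bar    : {np.mean(naive_errs):.4f}")
        print(f"actual scatter across runs : {np.std(means, ddof=1):.4f}")
        # With infinite variance the quoted error bar fluctuates wildly from run to run
        # and does not reflect the true scatter of the estimates.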

  10. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  11. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
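    The r = N/T effect described above can be reproduced with a toy simulation; the Python sketch below (an illustration under the simplifying assumption that the true covariance is the identity, not a reproduction of the replica calculation) estimates the global minimum-variance portfolio from a finite sample and compares its in-sample and out-of-sample variance.

        import numpy as np

        rng = np.random.default_rng(1)

        def min_var_weights(cov):
            """Global minimum-variance weights w ~ inv(Sigma) 1, normalized to sum to one."""
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)
            return w / w.sum()

        N = 50                                        # number of assets
        for r in (0.2, 0.5, 0.8, 0.95):
            T = int(N / r)                            # length of the return history
            X = rng.standard_normal((T, N))           # returns with true covariance = I
            S = np.cov(X, rowvar=False)               # sample covariance estimate
            w = min_var_weights(S)
            in_sample = w @ S @ w                     # variance seen in the estimation sample
            out_sample = w @ w                        # true variance (identity covariance)
            print(f"r = N/T = {r:4.2f}:  in-sample = {in_sample:.4f}   out-of-sample = {out_sample:.4f}")
        # As r -> 1 the in-sample variance collapses toward zero while the true
        # (out-of-sample) variance of the estimated portfolio blows up roughly as 1/(1 - r).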

  12. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)

    2010-05-15

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although the leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean_0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean_0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error of the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for the sequential sampling of metamodels.
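    The quantities such a criterion is built on are the predictive mean and variance of the kriging model itself; the Python sketch below (a minimal illustration using scikit-learn's Gaussian-process regressor, with an assumed test function and kernel, not the authors' implementation) shows how they are obtained at query points.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def f(x):                                   # assumed 1-D test function
            return np.sin(3 * x) + 0.5 * x

        rng = np.random.default_rng(0)
        X_train = rng.uniform(0.0, 3.0, size=(8, 1))
        y_train = f(X_train).ravel()

        # Fit a kriging (Gaussian-process) metamodel to the sampled points.
        kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

        # Predictive mean and standard deviation at query points: the ingredients a
        # mean/variance validation criterion would monitor during sequential sampling.
        X_query = np.linspace(0.0, 3.0, 7).reshape(-1, 1)
        mean, std = gp.predict(X_query, return_std=True)
        for xq, m, s in zip(X_query.ravel(), mean, std):
            print(f"x = {xq:4.2f}   predicted mean = {m:+.3f}   predictive std = {s:.3f}")
        # Large predictive std flags regions where the metamodel is still unreliable;
        # uniformly small std suggests the sequential sampling can stop.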

  13. Cosmic variance in the nanohertz gravitational wave background

    CERN Document Server

    Roebber, Elinore; Holz, Daniel; Warren, Michael

    2015-01-01

    We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes to estimate the formation rates of supermassive black hole binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive ($\\lesssim10\\%$) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of $\\sim 2$. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly-generated populations of supermassive black holes, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of $\\sim 10$ near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty ...

  14. Worldwide variance in the potential utilization of Gamma Knife radiosurgery.

    Science.gov (United States)

    Hamilton, Travis; Dade Lunsford, L

    2016-12-01

    OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.

  15. Cosmic variance of the galaxy cluster weak lensing signal

    CERN Document Server

    Gruen, D; Becker, M R; Friedrich, O; Mana, A

    2015-01-01

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m=10^14...10^15 h^-1 M_sol, z=0.25...0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ~20 per cent uncertainty from cosmic variance alone at M_200m=10^15 h^-1 M_sol and z=0.25), but significant also...

  16. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  17. VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM

    Directory of Open Access Journals (Sweden)

    RANJU KANWAR

    2013-04-01

    In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, resulting in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through this work, it is found that for longer transmission distances, 40-Gb/s systems are more sensitive to nonlinear phase noise than 50-Gb/s systems. Also, when transmitting data through the fiber-optic link, bit errors are produced by various effects such as noise from optical amplifiers and nonlinearity occurring in the fiber. On the basis of the simulation results, we compare the bit error rate for 8-PSK with theoretical results, and the comparison shows that in the real-time approach the bit error rate is higher for the same signal-to-noise ratio. MATLAB software is used to validate the analytical expressions for the variance of the nonlinear phase noise.
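    A toy Monte Carlo of the Gordon-Mollenauer mechanism behind such nonlinear phase noise is sketched below in Python; it illustrates how the variance can be estimated numerically and is not a reproduction of the paper's analytical derivation, and all parameter values are assumptions.

        import numpy as np

        # At each of n_amp spans an amplifier adds complex Gaussian ASE noise to the
        # field; the self-phase-modulation phase accumulated over the following span is
        # gamma_L * |field|^2. Parameter values are illustrative assumptions.
        rng = np.random.default_rng(42)

        n_amp = 20            # number of amplified spans
        gamma_L = 0.1         # nonlinear coefficient times span length (rad per unit power)
        P0 = 1.0              # launch power (arbitrary units)
        sigma_ase = 0.02      # ASE noise std per quadrature per span (assumed)
        n_trials = 20_000     # Monte Carlo realizations

        field = np.full(n_trials, np.sqrt(P0), dtype=complex)
        phi_nl = np.zeros(n_trials)
        for _ in range(n_amp):
            noise = sigma_ase * (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials))
            field += noise                           # amplifier adds ASE to the field
            phi_nl += gamma_L * np.abs(field) ** 2   # SPM phase gathered over the next span

        print(f"mean nonlinear phase : {phi_nl.mean():.4f} rad")
        print(f"phase-noise variance : {phi_nl.var():.6f} rad^2")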

  18. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we find that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series has large deviations from the expected random behavior, which is the result of temporal structure. We find the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
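    The shuffle test described above is easy to reproduce in outline; the Python sketch below computes the correlation between per-segment means and per-segment variances of a volatility-like series and compares it with that of a shuffled copy, using a synthetic GARCH-style series as a stand-in for real market data.

        import numpy as np

        rng = np.random.default_rng(7)

        def synthetic_volatility(n=20_000):
            """GARCH(1,1)-style |return| series, so volatility has temporal structure."""
            omega, a, b = 0.05, 0.1, 0.85
            var, out = 1.0, np.empty(n)
            for t in range(n):
                r = np.sqrt(var) * rng.standard_normal()
                out[t] = abs(r)                      # proxy for daily volatility
                var = omega + a * r * r + b * var
            return out

        def segment_mean_var_corr(x, seg_len=100):
            """Correlation between segment means and segment variances."""
            n_seg = len(x) // seg_len
            segs = x[: n_seg * seg_len].reshape(n_seg, seg_len)
            return np.corrcoef(segs.mean(axis=1), segs.var(axis=1))[0, 1]

        series = synthetic_volatility()
        shuffled = rng.permutation(series)
        print(f"original corr(mean, var) : {segment_mean_var_corr(series):.3f}")
        print(f"shuffled corr(mean, var) : {segment_mean_var_corr(shuffled):.3f}")
        # Temporal order shows up as a markedly larger correlation in the original
        # series than in its shuffled (memoryless) counterpart.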

  19. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean ‘minimizing the risk, given a certain level of return’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in practice because each asset usually has a minimum transaction lot. Classical approaches that consider minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (as in Markowitz’s model), and semi-variance as the risk measure. In this paper we investigate portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions of this model can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
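    The lot-constrained mean-CVaR idea can be illustrated on simulated data; the Python sketch below evaluates the empirical CVaR of a portfolio forced to whole transaction lots and minimizes it by a crude random search standing in for the paper's genetic algorithm, with returns, prices, lot size and budget all being assumed values.

        import numpy as np

        rng = np.random.default_rng(3)
        n_assets, n_scen, alpha = 5, 2_000, 0.95
        budget = 2_000_000.0                                          # assumed budget (currency units)
        returns = rng.normal(0.0005, 0.02, size=(n_scen, n_assets))   # scenario returns (assumed)
        prices = rng.uniform(1_000, 5_000, n_assets)                  # price per share (assumed)
        lot = 100                                                     # shares per transaction lot

        def cvar_of_lots(lots):
            """Empirical CVaR (mean loss beyond the alpha-VaR) of a lot-sized portfolio."""
            value = lots * lot * prices
            if value.sum() == 0 or value.sum() > budget:
                return np.inf                                  # empty or over-budget portfolio
            losses = -(returns @ value)                        # scenario losses in currency
            var_level = np.quantile(losses, alpha)
            return losses[losses >= var_level].mean()

        best_lots, best_cvar = None, np.inf
        for _ in range(5_000):                                 # crude random search (GA stand-in)
            cand = rng.integers(0, 4, n_assets)                # 0-3 lots of each asset
            c = cvar_of_lots(cand)
            if c < best_cvar:
                best_lots, best_cvar = cand, c

        print("best lot allocation :", best_lots)
        print(f"portfolio cost      : {np.sum(best_lots * lot * prices):,.0f}")
        print(f"95% CVaR of loss    : {best_cvar:,.0f}")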

  20. Swift/BAT and MAXI/GSC Broadband Transient Monitor

    CERN Document Server

    Sakamoto, Takanori; Mihara, Tatehiro; Yoshida, Atsumasa; Arimoto, Makoto; Barthelmy, Scott D; Kawai, Nobuyuki; Krimm, Hans A; Nakahira, Satoshi; Serino, Motoko

    2015-01-01

    We present the newly developed broadband transient monitor using the Swift Burst Alert Telescope (BAT) and the MAXI Gas Slit Camera (GSC) data. Our broadband transient monitor monitors high energy transient sources from 2 keV to 200 keV in seven energy bands by combining the BAT (15-200 keV) and the GSC (2-20 keV) data. Currently, the daily and the 90-minute (one orbit) averaged light curves are available for 106 high energy transient sources. Our broadband transient monitor is available to the public through our web server, http://yoshidalab.mydns.jp/bat_gsc_trans_mon/, for a wider use by the community. We discuss the daily sensitivity of our monitor and possible future improvements to our pipeline.

  1. Perfect and broadband acoustic absorption by critical coupling

    CERN Document Server

    Romero-García, V; Richoux, O; Merkel, A; Tournat, V; Pagneux, V

    2015-01-01

    We experimentally and analytically report broadband and narrowband perfect absorption in two different acoustic waveguide-resonator geometries by the mechanism of critical coupling. In the first geometry the resonator (a Helmholtz resonator) is side-loaded to the waveguide and it has a moderate quality factor. In the second geometry the resonator (a viscoelastic porous plate) is in-line loaded and it contains two resonant modes with low quality factor. The interplay between the energy leakage of the resonant modes into the waveguide and the inherent losses of the system reveals a perfect and a broadband nearly perfect absorption. The results shown in this work can motivate relevant research for the design of broadband perfect absorbers in other domains of wave physics.
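    For reference, the critical-coupling condition can be stated with standard one-port temporal coupled-mode theory (a textbook result, not the paper's specific model): a single resonance at $\omega_0$ with intrinsic loss rate $\gamma_{\rm loss}$ and leakage rate $\gamma_{\rm leak}$ into the waveguide absorbs $A(\omega) = \frac{4\,\gamma_{\rm loss}\,\gamma_{\rm leak}}{(\omega-\omega_0)^2 + (\gamma_{\rm loss}+\gamma_{\rm leak})^2}$, which reaches $A = 1$ at resonance exactly when $\gamma_{\rm loss} = \gamma_{\rm leak}$, i.e., when the inherent losses balance the energy leakage of the mode into the waveguide.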

  2. Broadband and Resonant Approaches to Axion Dark Matter Detection.

    Science.gov (United States)

    Kahn, Yonatan; Safdi, Benjamin R; Thaler, Jesse

    2016-09-30

    When ultralight axion dark matter encounters a static magnetic field, it sources an effective electric current that follows the magnetic field lines and oscillates at the axion Compton frequency. We propose a new experiment to detect this axion effective current. In the presence of axion dark matter, a large toroidal magnet will act like an oscillating current ring, whose induced magnetic flux can be measured by an external pickup loop inductively coupled to a SQUID magnetometer. We consider both resonant and broadband readout circuits and show that a broadband approach has advantages at small axion masses. We estimate the reach of this design, taking into account the irreducible sources of noise, and demonstrate potential sensitivity to axionlike dark matter with masses in the range of 10^{-14}-10^{-6}  eV. In particular, both the broadband and resonant strategies can probe the QCD axion with a GUT-scale decay constant.
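    For orientation, the signal being sought can be written down explicitly (standard axion electrodynamics in natural units; conventions may differ from the paper's): with a local axion field $a(t) \simeq \frac{\sqrt{2\rho_{\rm DM}}}{m_a}\sin(m_a t)$, a static field $\mathbf{B}_0$ sources the effective current $\mathbf{J}_{\rm eff}(t) = g_{a\gamma\gamma}\,\partial_t a\,\mathbf{B}_0 = g_{a\gamma\gamma}\sqrt{2\rho_{\rm DM}}\,\cos(m_a t)\,\mathbf{B}_0$, oscillating at the axion Compton frequency; this is the oscillating current ring whose induced flux the pickup loop and SQUID read out.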

  3. Broadband enhancement of infrared absorption in microbolometers using Ag nanocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Hyun, Jerome K. [Department of Chemistry and Nano Science, Ewha Womans University, Seoul 120-750 (Korea, Republic of); Ahn, Chi Won; Kim, Woo Choong; Kim, Tae Hyun; Hyun, Moon Seop; Kim, Hee Yeoun, E-mail: hyeounkim@nnfc.re.kr, E-mail: jhpark@nnfc.re.kr; Park, Jae Hong, E-mail: hyeounkim@nnfc.re.kr, E-mail: jhpark@nnfc.re.kr [National Nano-Fab Center, Daejeon 305-701 (Korea, Republic of); Lee, Won-Oh [S-Package Solution Co., Ltd., Daegu 702-701 (Korea, Republic of)

    2015-12-21

    High performance microbolometers are widely sought for thermal imaging applications. In order to increase the performance limits of microbolometers, the responsivity of the device to broadband infrared (IR) radiation needs to be improved. In this work, we report a simple, quick, and cost-effective approach to modestly enhance the broadband IR response of the device by evaporating Ag nanocrystals onto the light entrance surface of the device. When irradiated with IR light, strong fields are built up within the gaps between adjacent Ag nanocrystals. These fields resistively generate heat in the nanocrystals and underlying substrate, which is transduced into an electrical signal via a resistive sensing element in the device. Through this method, we are able to enhance the IR absorption over a broadband spectrum and improve the responsivity of the device by ∼11%.

  4. Broadband and Resonant Approaches to Axion Dark Matter Detection

    Science.gov (United States)

    Kahn, Yonatan; Safdi, Benjamin R.; Thaler, Jesse

    2016-09-01

    When ultralight axion dark matter encounters a static magnetic field, it sources an effective electric current that follows the magnetic field lines and oscillates at the axion Compton frequency. We propose a new experiment to detect this axion effective current. In the presence of axion dark matter, a large toroidal magnet will act like an oscillating current ring, whose induced magnetic flux can be measured by an external pickup loop inductively coupled to a SQUID magnetometer. We consider both resonant and broadband readout circuits and show that a broadband approach has advantages at small axion masses. We estimate the reach of this design, taking into account the irreducible sources of noise, and demonstrate potential sensitivity to axionlike dark matter with masses in the range of 10-14-10-6 e V . In particular, both the broadband and resonant strategies can probe the QCD axion with a GUT-scale decay constant.

  5. Oxygen minimum seafloor ecological (mal) functioning

    Digital Repository Service at National Institute of Oceanography (India)

    Moodley, L.; Nigam, R.; Ingole, B.S.; PrakashBabu, C.; Panchang, R.; Nanajkar, M.; Sivadas, S.; van Breugel, P.; van Ijzerloo, L.; Rutgers, R.; Heip, C.H.R.; Soetaert, K.; Middelburg, J.J.

    under certain oceanic settings such as the oxygen minimum zones (OMZ), where OM accumulates in the underlying sediments. A basic question that remains is to what extent this Corg accumulation reflects ecological ‘malfunctioning’ or a shunting of ecological...

  6. Long Term Care Minimum Data Set (MDS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...

  7. Graph theory for FPGA minimum configurations

    Institute of Scientific and Technical Information of China (English)

    Ruan Aiwu; Li Wenchang; Xiang Chuanyin; Song Jiangmin; Kang Shi; Liao Yongbo

    2011-01-01

    A traditional bottom-up modeling method for minimum configuration numbers is adopted for the study of FPGA minimum configurations. This method is limited if a large number of LUTs and multiplexers are present. Since graph theory has been extensively applied to circuit analysis and test, this paper focuses on modeling FPGA configurations with graphs. In our study, an internal logic block and an interconnection of an FPGA are considered as a vertex and an edge connecting two vertices in the graph, respectively. A top-down modeling method is proposed in the paper to achieve minimum configuration numbers for the CLB and IOB. Based on the proposed modeling approach and exhaustive analysis, the minimum configuration numbers for the CLB and IOB are five and three, respectively.

  8. Impact of Cosmic Variance on the Galaxy-Halo Connection for Lyman-$\\alpha$ Emitters

    CERN Document Server

    Mejia-Restrepo, Julian E

    2016-01-01

    In this paper we study the impact of cosmic variance and observational uncertainties in constraining the mass and occupation fraction, $f_{\\rm occ}$, of dark matter halos hosting Ly-$\\alpha$ Emitting Galaxies (LAEs) at high redshift. To this end, we construct mock catalogs from an N-body simulation to match the typical size of observed fields at $z=3.1$ ($\\sim 1 {\\rm deg^2}$). In our model, a dark matter halo with mass in the range $M_{\\rm min} \\leq M \\leq M_{\\rm max}$ hosts an LAE with occupation probability $f_{\\rm occ}$. The minimum and maximum masses in our model span a wide range $10^{10.0}h^{-1}{\\rm{M_{\\odot}}}\\leq M_{\\rm min} \\leq 10^{11.1}h^{-1}{\\rm{M_{\\odot}}}$, $10^{11.0}h^{-1}{\\rm{M_{\\odot}}}\\leq M_{\\rm max} \\leq 10^{13.0}h^{-1}{\\rm{M_{\\od...

  9. The minimum wage and restaurant prices

    OpenAIRE

    Daniel Aaronson; Eric French; MacDonald, James M.

    2004-01-01

    Using both store-level and aggregated price data from the food away from home component of the Consumer Price Index survey, we show that restaurant prices rise in response to an increase in the minimum wage. These results hold up when using several different sources of variation in the data. We interpret these findings within a model of employment determination. The model implies that minimum wage hikes cause employment to fall and prices to rise if labor markets are competitive but potential...

  10. Minimum Dominating Tree Problem for Graphs

    Institute of Scientific and Technical Information of China (English)

    LIN Hao; LIN Lan

    2014-01-01

    A dominating tree T of a graph G is a subtree of G which contains at least one neighbor of each vertex of G. The minimum dominating tree problem is to find a dominating tree of G with the minimum number of vertices, which is an NP-hard problem. This paper studies some polynomially solvable cases, including interval graphs, Halin graphs, special outer-planar graphs and others.

  11. Broadband negative refractive index obtained by plasmonic hybridization in metamaterials

    Science.gov (United States)

    Nguyen, Hien T.; Bui, Tung S.; Yan, Sen; Vandenbosch, Guy A. E.; Lievens, Peter; Vu, Lam D.; Janssens, Ewald

    2016-11-01

    We experimentally demonstrate a broadband negative refractive index (NRI) behavior in combined dimer and fishnet dimer metamaterials operating in the GHz frequency range. The observations can be well explained by a hybridization model and are in agreement with numerical modelling results. Hybridization of the magnetic resonances is obtained by reducing the distance between the layers in the dimer structures. A ratio of the double negative refractive index bandwidth to operational frequency of approximately 10% was achieved in the fishnet dimer. The applicable frequency range of the broadband NRI was shown to scale with the size of the structures from the microwave to the far infrared.

  12. Symmetry-Breaking Plasmonic Metasurfaces for Broadband Light Bending

    DEFF Research Database (Denmark)

    Ni, Xingjie; Emani, Naresh K.; Kildishev, Alexander V.;

    2012-01-01

    We experimentally demonstrate unparalleled wave-front control in a broadband optical wavelength range from 1.0 μm to 1.9 μm, using a thin plasmonic layer (metasurface) consisting of a nanoantenna array that breaks the symmetry along the interface.

  13. The Quest for Ultimate Broadband High Power Microwaves

    CERN Document Server

    Podgorski, Andrew S

    2014-01-01

    This paper describes high power microwave research on combining GW peak-power sources to achieve MV/m and GV/m radiated fields in the 1 to 500 GHz band. To achieve such fields, multiple independently triggered broadband GW sources, supplying power to multiple spatially distributed broadband radiators/antennas, are used. A single TW array is used as an ultimate microwave weapon in the 1 to 5 GHz range, while multiple TW arrays provide GV/m radiated fields at plasma frequencies in the 300 GHz range, leading towards fusion power.

  14. Broadband Coherent Enhancement of Transmission and Absorption in Disordered Media

    CERN Document Server

    Hsu, Chia Wei; Bromberg, Yaron; Stone, A Douglas; Cao, Hui

    2015-01-01

    We study the optimal diffusive transmission and absorption of broadband or polychromatic light in a disordered medium. By introducing matrices describing broadband transmission and reflection, we formulate an extremal eigenvalue problem where the optimal input wavefront is given by the corresponding eigenvector. We show analytically that a single wavefront can exhibit strongly enhanced total transmission or total absorption across a bandwidth that is orders of magnitude broader than the spectral correlation width of the medium, due to long-range correlations in coherent diffusion. We find excellent agreement between the analytic theory and numerical simulations.
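    One compact way to write the extremal problem described here (a sketch of the formulation with locally defined symbols, not necessarily the paper's exact notation): collect the frequency-resolved transmission matrices $t(\omega)$ into a broadband operator $\bar{T} = \frac{1}{\Delta\omega}\int_{\Delta\omega} t^{\dagger}(\omega)\,t(\omega)\,d\omega$ and maximize the frequency-averaged transmission $\mathbf{v}^{\dagger}\bar{T}\mathbf{v}$ over unit-norm input wavefronts $\mathbf{v}$; the optimal broadband wavefront is then the eigenvector of $\bar{T}$ with the largest eigenvalue, and the analogous construction with reflection matrices yields the wavefront of maximal broadband absorption.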

  15. Rapid detection of arsenic minerals using portable broadband NQR

    Science.gov (United States)

    Lehmann-Horn, J. A.; Miljak, D. G.; O'Dell, L. A.; Yong, R.; Bastow, T. J.

    2014-10-01

    The remote real-time detection of specific arsenic species would significantly benefit minerals processing, helping to mitigate the release of arsenic into aquatic environments and aiding selective mining. At present, there are no technologies available to detect arsenic minerals in bulk volumes outside of laboratories. Here we report on the first room-temperature broadband 75As nuclear quadrupole resonance (NQR) detection of common and abundant arsenic ores of the Earth's crust, using a large sample volume (0.78 L) prototype sensor. Broadband excitation aids the detection of natural minerals with low crystallinity. We briefly discuss how the proposed NQR detector could be employed in mining operations.

  16. Broadband electrical impedance matching for piezoelectric ultrasound transducers.

    Science.gov (United States)

    Huang, Haiying; Paramo, Daniel

    2011-12-01

    This paper presents a systematic method for designing broadband electrical impedance matching networks for piezoelectric ultrasound transducers. The design process involves three steps: 1) determine the equivalent circuit of the unmatched piezoelectric transducer based on its measured admittance; 2) design a set of impedance matching networks using a computerized Smith chart; and 3) establish the simulation model of the matched transducer to evaluate the gain and bandwidth of the impedance matching networks. The effectiveness of the presented approach is demonstrated through the design, implementation, and characterization of impedance matching networks for a broadband acoustic emission sensor. The impedance matching network improved the power of the acquired signal by 9 times.
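    Step 2 of such a flow can be sketched for the simplest case; the Python example below computes a single-frequency L-section conjugate match for an assumed transducer impedance (a narrowband illustration only, whereas the paper designs broadband multi-element networks on a computerized Smith chart).

        import math

        # Match an assumed transducer impedance Z_load = R + jX (R < Z0) to a 50-ohm
        # system at a single design frequency with a series-L / shunt-C L-section.
        f0 = 5e6                       # design frequency, Hz (assumed)
        Z0 = 50.0                      # system impedance, ohms
        R, X = 12.0, -80.0             # assumed transducer impedance at f0 (capacitive)

        w = 2 * math.pi * f0
        Q = math.sqrt(Z0 / R - 1.0)    # network Q needed to transform R up to Z0
        X_series = Q * R - X           # series reactance: cancels X and sets the Q
        X_shunt = -Z0 / Q              # shunt (capacitive) reactance on the source side

        L_series = X_series / w                # henries
        C_shunt = -1.0 / (w * X_shunt)         # farads

        print(f"series inductor : {L_series * 1e6:.2f} uH")
        print(f"shunt capacitor : {C_shunt * 1e12:.0f} pF")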

  17. Midinfrared broadband achromatic astronomical beam combiner for nulling interferometry.

    Science.gov (United States)

    Hsiao, Hsien-kai; Winick, Kim A; Monnier, John D

    2010-12-10

    Integrated optic beam combiners offer many advantages over conventional bulk optic implementations for astronomical imaging. To our knowledge, integrated optic beam combiners have only been demonstrated at operating wavelengths below 4 μm. Operation in the midinfrared wavelength region, however, is highly desirable. In this paper, a theoretical design technique based on three coupled waveguides is developed to achieve fully achromatic, broadband, polarization-insensitive, lossless beam combining. This design may make it possible to achieve the very deep broadband nulls needed for exoplanet searching.

  18. Semiconductor Quantum Dash Broadband Emitters: Modeling and Experiments

    KAUST Repository

    Khan, Mohammed Zahed Mustafa

    2013-10-01

    Broadband light emitters, whose operation covers multiple wavelengths of the electromagnetic spectrum, have been established as indispensable to humankind, continuously advancing living standards by serving as sources in important multi-disciplinary applications such as biomedical imaging and sensing, general lighting, and internet and mobile phone connectivity. In general, most commercial broadband light sources rely on complex systems for broadband light generation, which are bulky and energy hungry. The recent demonstration of ultra-broadband emission from semiconductor light sources in the form of superluminescent light emitting diodes (SLDs) has paved the way for the realization of broadband emitters on a completely novel platform that offers compactness, cost effectiveness and comparative energy efficiency, and such devices already serve as a key component in medical imaging systems. However, a low power-bandwidth product is inherent to SLDs operating in the amplified spontaneous emission regime, whereas a quantum leap in the advancement of broadband emitters demands both high power and large bandwidth (tens of nm). Recently, a new class of broadband semiconductor laser diodes (LDs) producing multiple-wavelength light in the stimulated emission regime was demonstrated. This very recent manifestation of high power-bandwidth-product semiconductor broadband LDs relies on interband optical transitions via quantum-confined dot/dash nanostructures and exploits the natural inhomogeneity of the self-assembled growth technology. This concept is highly interesting, and extending the broad spectrum of stimulated emission by novel device design forms the central focus of this dissertation. In this work, a simple rate equation numerical technique for modeling InAs/InP quantum dash lasers, incorporating the effect of inhomogeneous broadening on the lasing spectra, was developed and discussed, followed by a comprehensive experimental analysis
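    A toy version of the kind of multi-population rate-equation model mentioned above is sketched below in Python (not the dissertation's actual model): the inhomogeneously broadened dash ensemble is split into a few spectral groups, each with its own carrier and photon populations, and all parameter values are illustrative assumptions.

        import numpy as np

        # Split the inhomogeneously broadened ensemble into n_g spectral groups with
        # Gaussian weights; each group i has carriers N[i] and photons S[i].
        n_g = 5
        pos = np.arange(n_g) - (n_g - 1) / 2
        weights = np.exp(-0.5 * (pos / 1.2) ** 2)
        weights /= weights.sum()                 # inhomogeneous (Gaussian) distribution

        I_pump = 4.0e14      # carrier injection rate (1/s, assumed)
        tau_n = 1.0e-9       # carrier lifetime (s)
        tau_p = 5.0e-12      # photon lifetime (s)
        g0 = 7.0e6           # gain coefficient per carrier (1/s, assumed)
        N_tr = 1.0e5         # transparency carrier number (assumed)
        beta = 1.0e-4        # spontaneous-emission coupling factor

        def derivs(N, S):
            gain = g0 * (N - N_tr * weights)     # linear gain of each spectral group
            dN = I_pump * weights - N / tau_n - gain * S
            dS = gain * S - S / tau_p + beta * N / tau_n
            return dN, dS

        # Simple explicit Euler integration toward steady state.
        N = np.zeros(n_g)
        S = np.zeros(n_g)
        dt = 2.0e-13
        for _ in range(500_000):
            dN, dS = derivs(N, S)
            N += dt * dN
            S += dt * dS

        print("group weights        :", np.round(weights, 3))
        print("steady-state photons :", np.array2string(S, precision=2))
        # Groups near the center of the inhomogeneous distribution cross threshold first;
        # raising I_pump lets more groups lase, broadening the emission spectrum.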

  19. A novel structure for a broadband left-handed metamaterial

    Institute of Scientific and Technical Information of China (English)

    Xiong Han; Hong Jing-Song; Jin Da-Lin; Zhang Zhi-Min

    2012-01-01

    A low-absorptivity broadband negative refractive index metamaterial with a multi-gap split-ring and metallic cross (MSMC) structure is proposed and investigated numerically and experimentally in the microwave frequency range. The effective medium parameters were retrieved from the numerical and experimental results, which clearly show that there exists a very wide frequency band where the permittivity and permeability are both negative. The influence of the structure parameters on the magnetic response and the cut-off frequency of the negative permittivity is studied in detail. This metamaterial would have potential application in designing broadband microwave devices.
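    The retrieval step mentioned above is commonly done with the standard S-parameter homogenization procedure; the Python sketch below implements that textbook recipe (not necessarily the exact variant used by the authors), with placeholder S-parameters and an assumed slab thickness.

        import numpy as np

        # Standard retrieval: from complex S11, S21 of a slab of thickness d recover the
        # impedance z and refractive index n, then eps = n/z and mu = n*z.
        c = 299_792_458.0

        def retrieve(S11, S21, f, d, branch=0):
            k0 = 2 * np.pi * f / c
            z = np.sqrt(((1 + S11) ** 2 - S21 ** 2) / ((1 - S11) ** 2 - S21 ** 2))
            z = np.where(z.real < 0, -z, z)                  # passive branch, Re(z) >= 0
            e_inkd = S21 / (1 - S11 * (z - 1) / (z + 1))     # exp(i n k0 d)
            n = (np.angle(e_inkd) + 2 * np.pi * branch - 1j * np.log(np.abs(e_inkd))) / (k0 * d)
            return n / z, n * z, n, z                        # eps, mu, n, z

        f = np.array([8.0e9, 9.0e9, 10.0e9])                 # sample frequencies (Hz)
        d = 3.0e-3                                           # slab thickness (m), assumed
        S11 = np.array([0.30 + 0.55j, 0.10 + 0.40j, -0.20 + 0.30j])   # placeholder data
        S21 = np.array([0.45 - 0.50j, 0.70 - 0.30j, 0.80 - 0.10j])

        eps, mu, n, z = retrieve(S11, S21, f, d)
        for fi, ei, mi, ni in zip(f, eps, mu, n):
            print(f"f = {fi / 1e9:4.1f} GHz   eps = {complex(ei):.2f}   mu = {complex(mi):.2f}   n = {complex(ni):.2f}")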

  20. Broadband high reflectivity in subwavelength-grating slab waveguides

    CERN Document Server

    Cui, Xuan; Zhou, Zhongxiang

    2015-01-01

    We computationally study a subwavelength dielectric grating structure, show that slab waveguide modes can be used to obtain broadband high reflectivity, and analyze how slab waveguide modes influence reflection. A structure showing interference between Fabry-Perot modes, slab waveguide modes, and waveguide array modes is designed with ultra-broadband high reflectivity. Owing to the coupling of guided modes, the region with reflectivity R > 0.99 has an ultra-high bandwidth (Δf/f > 30%). The incident-angle region with R > 0.99 extends over a range greater than 40°. Moreover, an asymmetric waveguide structure is studied using a semiconductor substrate.