Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation
Directory of Open Access Journals (Sweden)
Namyong Kim
2016-06-01
Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and of the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized by the power of the input entropy, which is estimated recursively to reduce computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum mean squared error (MSE) and faster convergence than the original MEE algorithm. At the same convergence speed, its steady-state MSE improvement exceeds 3 dB.
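The recursive power estimate used to normalize the step size can be sketched in an NLMS-like form. This is an illustrative reconstruction, not the authors' algorithm: the function names, the forgetting factor `lam`, and the regularizer `eps` are all assumptions.

```python
import numpy as np

def recursive_power(x, lam=0.9):
    """Recursively estimate signal power: P[n] = lam*P[n-1] + (1-lam)*x[n]**2.

    The recursion avoids re-summing a window at every sample, which is the
    complexity reduction the abstract refers to.
    """
    p = np.empty(len(x))
    prev = 0.0
    for n, xn in enumerate(x):
        prev = lam * prev + (1.0 - lam) * xn ** 2
        p[n] = prev
    return p

def normalized_step(mu0, p, eps=1e-8):
    """Divide a fixed step size mu0 by the running power estimate,
    so adaptation slows when the input (here, a stand-in for the
    'input entropy' quantity) is strong."""
    return mu0 / (p + eps)
```

For a stationary input the recursion converges to the true power, so the effective step size settles to a constant.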
Evaluation of ITER MSE Viewing Optics
International Nuclear Information System (INIS)
Allen, S; Lerner, S; Morris, K; Jayakumar, J; Holcomb, C; Makowski, M; Latkowski, J; Chipman, R
2007-01-01
The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high-energy neutral heating beam with the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, while the core system is located in equatorial port 1 viewing Heating Beam 4. The current work is an extension of preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and the MSE was then assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray-tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced, with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to that of the previous ITER design; it was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate
Minimum Energy Decentralized Estimation in a Wireless Sensor Network with Correlated Sensor Noises
Directory of Open Access Journals (Sweden)
Krasnopeev Alexey
2005-01-01
Full Text Available Consider the problem of estimating an unknown parameter by a sensor network with a fusion center (FC). Sensor observations are corrupted by additive noises with an arbitrary spatial correlation. Due to bandwidth and energy limitations, each sensor is only able to transmit a finite number of bits to the FC, while the latter must combine the received bits to estimate the unknown parameter. We require the decentralized estimator to have a mean-squared error (MSE) within a constant factor of that of the best linear unbiased estimator (BLUE). We minimize the total sensor transmitted energy by selecting sensor quantization levels using knowledge of the noise covariance matrix while meeting the target MSE requirement. Computer simulations show that our designs can achieve energy savings of up to 70% compared to the uniform quantization strategy, whereby each sensor generates the same number of bits irrespective of the quality of its observation and the condition of its channel to the FC.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
Directory of Open Access Journals (Sweden)
Sekhar S Chandra
2004-01-01
Full Text Available We address the problem of estimating the instantaneous frequency (IF) of a real-valued, constant-amplitude, time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves the choice of a window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed-window methods and also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD) based IF estimators for different signal-to-noise ratios (SNR).
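As a toy illustration of the zero-crossing principle behind this estimator, the constant-frequency case can be handled directly: a sinusoid crosses zero twice per period, so counting sign changes recovers the frequency. This is only the basic principle, not the adaptive-window algorithm; the function name and signature are invented for this sketch.

```python
import numpy as np

def zc_frequency(x, fs):
    """Estimate the frequency of a real sinusoid from its zero-crossing rate.

    Each full period contains two zero crossings, so f ~ crossings / (2*T),
    where T is the observation duration in seconds.
    """
    signs = np.sign(x)
    # A crossing occurred wherever consecutive samples have opposite signs.
    crossings = np.sum(signs[:-1] * signs[1:] < 0)
    duration = len(x) / fs
    return crossings / (2.0 * duration)
```

For a time-varying IF, the paper's approach applies this kind of zero-crossing information locally over short, adaptively sized windows.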
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig; Al-Naffouri, Tareq Y.
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix, then utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure for estimating the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator and converges to the best linear estimator, the linear minimum mean-squared error (LMMSE) estimator, when the elements of x are statistically white.
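The claim that regularization lowers the LS estimator's MSE at low SNR can be checked with a minimal sketch. This is not the paper's BDU iteration; it simply compares plain LS against ridge-regularized LS using the LMMSE choice of regularization parameter (lam = sigma^2 for white, unit-variance x). All problem sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 20, 15, 1.0          # near-square system at low SNR
trials = 200
mse_ls = mse_reg = 0.0
for _ in range(trials):
    A = rng.normal(size=(n, m))
    x = rng.normal(size=m)          # white, unit-variance unknown
    y = A @ x + sigma * rng.normal(size=n)
    # Plain (unregularized) least squares.
    x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
    # Regularized LS with the LMMSE parameter for white x: lam = sigma^2.
    lam = sigma ** 2
    x_reg = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
    mse_ls += np.mean((x_ls - x) ** 2) / trials
    mse_reg += np.mean((x_reg - x) ** 2) / trials
```

Averaged over trials, the regularized estimate achieves a lower MSE because shrinking along the small singular values of A suppresses the noise amplification that plagues plain LS.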
Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling
Energy Technology Data Exchange (ETDEWEB)
Hamrick, Todd [West Virginia Univ., Morgantown, WV (United States)
2011-01-01
Efficiency in drilling is measured by Mechanical Specific Energy (MSE), the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, its interdependent relationship with Torque and Penetration per Revolution was used to determine optimum values for those parameters in a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method proved an accurate means of determining the optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
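The conventional MSE equation referred to here is commonly written in Teale's form: an axial term (thrust per bit area) plus a rotary term (rotary work per volume removed). A sketch in SI units; the unit choices and function name are assumptions, not taken from this work.

```python
import math

def mechanical_specific_energy(wob, torque, rpm, rop, bit_area):
    """Teale's MSE in SI units.

    wob      Weight on Bit [N]
    torque   [N*m]
    rpm      rotary speed [rev/min]
    rop      Rate of Penetration [m/h]
    bit_area cross-sectional bit area [m^2]

    Returns MSE in Pa, i.e. joules per cubic metre of rock removed:
        MSE = WOB/A + 2*pi*N*T / (A*ROP), with N and ROP in per-second units.
    """
    rev_per_s = rpm / 60.0
    rop_m_per_s = rop / 3600.0
    axial = wob / bit_area
    rotary = 2.0 * math.pi * rev_per_s * torque / (bit_area * rop_m_per_s)
    return axial + rotary
```

In typical conditions the rotary term dominates the axial term by one to two orders of magnitude, which is why the rewritten single-parameter form of the equation is needed to locate the minimum analytically.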
Proportionate Minimum Error Entropy Algorithm for Sparse System Identification
Directory of Open Access Journals (Sweden)
Zongze Wu
2015-08-01
Full Text Available Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS) algorithm, a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian; this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE) criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE) algorithm for sparse system identification, which may achieve much better performance than MSE-based methods, especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures mean square convergence. Simulation results confirm the excellent performance of the new algorithm.
Minimum K-S estimator using PH-transform technique
Directory of Open Access Journals (Sweden)
Somchit Boonthiem
2016-07-01
Full Text Available In this paper, we propose an improvement of the minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The data set comprises 47 fire-accident records from an insurance company in Thailand. The experiment has two stages: in the first, we minimize the K-S statistic via grid search for nine distributions (Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, log-normal, and exponential); in the second, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the minimum K-S estimator. The algorithm gives a better minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, log-normal, and exponential), while the minimum K-S estimators of the normal and logistic distributions are unchanged.
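The first stage, a grid search over a parameter to minimize the K-S statistic, can be sketched as follows for a single family (exponential). The helper names, the grid, and the synthetic data are illustrative assumptions, not the paper's fire-accident data.

```python
import numpy as np

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of `data`
    and a model CDF: D = sup_x |F_n(x) - F(x)|."""
    x = np.sort(data)
    n = len(x)
    f = cdf(x)
    # Check both the upper and lower step of the empirical CDF at each point.
    return max(np.max(np.arange(1, n + 1) / n - f),
               np.max(f - np.arange(0, n) / n))

def fit_exponential_by_ks(data, rates):
    """Grid search: pick the exponential rate minimizing the K-S statistic."""
    stats = [ks_statistic(data, lambda x, r=r: 1.0 - np.exp(-r * x))
             for r in rates]
    return rates[int(np.argmin(stats))]
```

The same loop applies to any of the nine families above: substitute that family's CDF and search over its parameter grid, keeping the parameters with the smallest K-S distance.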
MHD marking using the MSE polarimeter optics in ILW JET plasmas
Reyes Cortes, S.; Alves, D.; Baruzzo, M.; Bernardo, J.; Buratti, P.; Coelho, R.; Challis, C.; Chapman, I.; Hawkes, N.; Hender, T.C.; Hobirk, J.; Joffrin, E.
2016-01-01
In this communication we propose a novel diagnostic technique that uses the collection optics of the JET Motional Stark Effect (MSE) diagnostic to perform polarimetric marking of MHD activity observed in high-temperature plasma regimes. To introduce the technique, we first present measurements of the coherence between MSE polarimeter, electron cyclotron emission, and Mirnov coil signals, aiming to show the feasibility of the method. The next step consists of measuring the amplitude fluctuations of the raw MSE polarimeter signals, for each MSE channel, while carefully following the MHD frequency in Mirnov coil data spectrograms. A variety of experimental examples in JET ITER-Like Wall (ILW) plasmas is presented, providing an adequate picture and interpretation of the MSE optics polarimetry technique.
Directory of Open Access Journals (Sweden)
I PUTU EKA IRAWAN
2013-11-01
Full Text Available Principal component regression is a method for overcoming multicollinearity that combines principal component analysis with regression analysis. Classical principal component analysis is computed from the ordinary covariance matrix, which is optimal when the data come from a multivariate normal distribution but is very sensitive to outliers. The Least Median Square-Minimum Covariance Determinant (LMS-MCD) method is used as an alternative to overcome this problem. The purpose of this research is to compare Principal Component Regression (RKU) with the LMS-MCD method in dealing with outliers. In this study, the LMS-MCD method yields smaller bias and mean square error (MSE) than the RKU parameter estimates. A test based on the difference between the parameter estimators indicates that this difference is larger for the LMS-MCD method than for the RKU method.
VOLATILITY AND KURTOSIS OF DAILY STOCK RETURNS AT MSE
Directory of Open Access Journals (Sweden)
Zoran Ivanovski
2015-12-01
Full Text Available Prominent financial stock pricing models are built on the assumption that asset returns follow a normal (Gaussian) distribution. However, many authors argue that in practice stock returns are often characterized by skewness and kurtosis, so we test for the existence of a Gaussian distribution of stock returns and calculate the kurtosis of several stocks at the Macedonian Stock Exchange (MSE). Obtaining information about the shape of the distribution is an important step for models of pricing risky assets. Daily stock returns at the MSE are characterized by high volatility and non-Gaussian behavior and are extremely leptokurtic. The analysis of MSE stock return time series reveals volatility clustering and high kurtosis. The fact that daily stock returns at the MSE are not normally distributed puts into doubt results that rely heavily on this assumption and has significant implications for portfolio management. We consider this stock market a good representative of emerging markets, and therefore argue that our results are valid for other similar emerging stock markets.
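The excess kurtosis statistic underlying the leptokurtosis claim can be computed as follows. This is a generic sketch; real MSE return series would replace the synthetic input used in testing.

```python
import numpy as np

def excess_kurtosis(returns):
    """Sample excess kurtosis of a return series.

    Standardize the returns, take the fourth moment, and subtract 3
    (the fourth moment of a standard normal). A Gaussian series gives
    a value near 0; fat-tailed (leptokurtic) series give positive values.
    """
    r = np.asarray(returns, dtype=float)
    z = (r - r.mean()) / r.std()
    return np.mean(z ** 4) - 3.0
```

A strongly positive value on daily returns, as reported for MSE stocks, signals tails much heavier than the Gaussian assumption allows.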
A survey on OFDM channel estimation techniques based on denoising strategies
Directory of Open Access Journals (Sweden)
Pallaviram Sure
2017-04-01
Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency-domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS-based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of channel estimators incorporating MMSE-based techniques is better than that obtained with LS-based techniques. To enhance the MSE performance of LS-based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising-threshold-based LS techniques is that they do not require KCS but still render near-optimal MMSE performance similar to MMSE-based techniques. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion, is presented.
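A minimal sketch of the threshold-denoising idea applied to an LS-estimated CIR: taps indistinguishable from noise are zeroed, which removes most of the estimation error on a sparse channel. The channel, noise level, and 3-sigma threshold are illustrative assumptions, not taken from any surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 64                                        # CIR length
cir = np.zeros(L)
cir[[0, 3, 7, 12]] = [1.0, 0.8, 0.6, 0.4]     # sparse true channel
sigma = 0.05                                  # per-tap estimation noise

# Stand-in for a pilot-aided LS estimate: true CIR plus white noise on every tap.
ls_est = cir + sigma * rng.normal(size=L)

# Threshold denoising: keep only taps that exceed a noise-derived threshold.
thr = 3.0 * sigma
denoised = np.where(np.abs(ls_est) > thr, ls_est, 0.0)

mse_ls = np.mean((ls_est - cir) ** 2)
mse_dn = np.mean((denoised - cir) ** 2)
```

Because noise power is spread across all L taps while the channel energy sits in only a few, discarding the sub-threshold taps removes most of the noise at little cost, giving the near-MMSE behavior the survey describes, without any knowledge of channel statistics.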
International Nuclear Information System (INIS)
Yokoyama, Seiya; Yonezawa, Suguru; Kitamoto, Sho; Yamada, Norishige; Houjou, Izumi; Sugai, Tamotsu; Nakamura, Shin-ichi; Arisaka, Yoshifumi; Takaori, Kyoichi; Higashi, Michiyo
2012-01-01
Methylation of CpG sites in genomic DNA plays an important role in gene regulation, especially in gene silencing. We have reported mechanisms of epigenetic regulation for the expression of mucins, which are markers of malignancy potential and early detection of human neoplasms. Epigenetic changes in promoter regions appear to be the first step in the expression of mucins, so detection of promoter methylation status is important for early diagnosis of cancer, monitoring of tumor behavior, and evaluating the response of tumors to targeted therapy. However, conventional analytical methods for DNA methylation require a large amount of DNA and have low sensitivity. Here, we report a modified version of bisulfite-DGGE (denaturing gradient gel electrophoresis) using a nested PCR approach, which we designate methylation-specific electrophoresis (MSE). The MSE method comprises the following steps: (a) bisulfite treatment of genomic DNA, (b) amplification of the target DNA by a nested PCR approach, and (c) application of the product to DGGE. To examine whether the MSE method is able to analyze DNA methylation of mucin genes in various samples, we apply it to DNA obtained from cell lines, ethanol-fixed colonic crypts, and human pancreatic juices. The MSE method greatly decreases the amount of input DNA: the lower detection limit for distinguishing different methylation statuses is < 0.1%, and the detectable minimum amount of DNA is 20 pg, which can be obtained from only a few cells. We also show that MSE can be used for the analysis of challenging samples such as isolated human colonic crypts or human pancreatic juices, from which only a small amount of DNA can be extracted. The MSE method provides qualitative information on the methylated sequence profile and allows sensitive and specific analysis of the DNA methylation pattern of almost any block of multiple CpG sites. The MSE method can be applied to analysis of DNA methylation status in many different clinical
Assessing corrosion of MSE wall reinforcement.
2010-09-01
The primary objective of this study was to extract reinforcement coupons from select MSE walls and document the extent of corrosion. In doing this, a baseline has been established against which coupons extracted in the future can be compared. A secon...
Globally optimal superconducting magnets part II: symmetric MSE coil arrangement.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
A globally optimal superconducting magnet coil design procedure based on the Minimum Stored Energy (MSE) current density map is outlined. The method has the ability to arrange coils in a manner that generates a strong and homogeneous axial magnetic field over a predefined region, and ensures the stray field external to the assembly and peak magnetic field at the wires are in acceptable ranges. The outlined strategy of allocating coils within a given domain suggests that coils should be placed around the perimeter of the domain with adjacent coils possessing alternating winding directions for optimum performance. The underlying current density maps from which the coils themselves are derived are unique, and optimized to possess minimal stored energy. Therefore, the method produces magnet designs with the lowest possible overall stored energy. Optimal coil layouts are provided for unshielded and shielded short bore symmetric superconducting magnets.
Minimum Distance Estimation on Time Series Analysis With Little Data
National Research Council Canada - National Science Library
Tekin, Hakan
2001-01-01
Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters from very small data sets...
The multi-spectral line-polarization MSE system on Alcator C-Mod
Energy Technology Data Exchange (ETDEWEB)
Mumgaard, R. T., E-mail: mumgaard@psfc.mit.edu; Khoury, M. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Scott, S. D. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540 (United States)
2016-11-15
A multi-spectral line-polarization motional Stark effect (MSE-MSLP) diagnostic has been developed for the Alcator C-Mod tokamak wherein the Stokes vector is measured in multiple wavelength bands simultaneously on the same sightline to enable better polarized background subtraction. A ten-sightline, four wavelength MSE-MSLP detector system was designed, constructed, and qualified. This system consists of a high-throughput polychromator for each sightline designed to provide large étendue and precise spectral filtering in a cost-effective manner. Each polychromator utilizes four narrow bandpass interference filters and four custom large diameter avalanche photodiode detectors. Two filters collect light to the red and blue of the MSE emission spectrum while the remaining two filters collect the beam pi and sigma emission generated at the same viewing volume. The filter wavelengths are temperature tuned using custom ovens in an automated manner. All system functions are remote controllable and the system can be easily retrofitted to existing single-wavelength line-polarization MSE systems.
The multi-spectral line-polarization MSE system on Alcator C-Mod
International Nuclear Information System (INIS)
Mumgaard, R. T.; Khoury, M.; Scott, S. D.
2016-01-01
A multi-spectral line-polarization motional Stark effect (MSE-MSLP) diagnostic has been developed for the Alcator C-Mod tokamak wherein the Stokes vector is measured in multiple wavelength bands simultaneously on the same sightline to enable better polarized background subtraction. A ten-sightline, four wavelength MSE-MSLP detector system was designed, constructed, and qualified. This system consists of a high-throughput polychromator for each sightline designed to provide large étendue and precise spectral filtering in a cost-effective manner. Each polychromator utilizes four narrow bandpass interference filters and four custom large diameter avalanche photodiode detectors. Two filters collect light to the red and blue of the MSE emission spectrum while the remaining two filters collect the beam pi and sigma emission generated at the same viewing volume. The filter wavelengths are temperature tuned using custom ovens in an automated manner. All system functions are remote controllable and the system can be easily retrofitted to existing single-wavelength line-polarization MSE systems.
Multi-channel PSD Estimators for Speech Dereverberation
DEFF Research Database (Denmark)
Kuklasinski, Adam; Doclo, Simon; Gerkmann, Timo
2015-01-01
…densities (PSDs). We first derive closed-form expressions for the mean square error (MSE) of both PSD estimators and then show that one estimator, previously used for speech dereverberation by the authors, always yields a better MSE. Only in the case of a two-microphone array or for special spatial distributions of the interference do both estimators yield the same MSE. The theoretically derived MSE values are in good agreement with numerical simulation results and with instrumental speech quality measures in a realistic speech dereverberation task for binaural hearing aids.
Progress on the MSE diagnostic for ITER
International Nuclear Information System (INIS)
Lotte, Ph.; Giannella, R.; Von Hellermann, M.; Kuldkepp, M.; Rachlew, E.; Malaquias, A.; Costley, A.; Walker, C.
2004-01-01
The Motional Stark Effect (MSE) diagnostic is now considered essential for an accurate determination of current profiles in tokamak discharges. It mainly allows a measurement of the direction of the total magnetic field, a very powerful constraint for determining the safety factor profile. Realizing such a diagnostic on ITER means facing new challenges because of the machine's larger size and harsh environment. Most of the foreseen difficulties have now been examined and solutions envisaged, and we review them in this paper. The article is divided into three parts: 1) the principle of the MSE diagnostic and its feasibility at higher Lorentz electric fields, 2) the spatial and time resolution of the diagnostic, and 3) the light collection system.
Interaction between drilled shaft and mechanically stabilized earth (MSE) wall : project summary.
2015-08-31
Drilled shafts are being constructed within the reinforced zone of mechanically stabilized earth (MSE) walls (Figure 1). The drilled shafts may be subjected to horizontal loads and push against the front of the wall. Distress of MSE wall panels has b...
Asymptotics for the minimum covariance determinant estimator
Butler, R.W.; Davies, P.L.; Jhun, M.
1993-01-01
Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown
A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection
Park, Chiwoo; Huang, Jianhua Z.; Ding, Yu
2010-01-01
A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
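The plug-in idea, choosing the density level whose super-level set covers the target probability mass, can be sketched in one dimension. This follows the Hyndman-style estimator the abstract discusses, but the kernel density estimator, bandwidth, and names are assumptions, not the authors' code.

```python
import numpy as np

def kde(x_eval, samples, h):
    """Gaussian kernel density estimate (1-D), evaluated at x_eval."""
    d = (x_eval[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))

def mvs_level(samples, alpha, h):
    """Plug-in density level for a minimum volume set of mass alpha.

    Evaluate the estimated density at the sample points themselves and take
    the (1 - alpha) quantile: the super-level set {x : f(x) >= tau} then
    covers roughly a fraction alpha of the mass. Points with f(x) < tau are
    flagged as novelties.
    """
    f_at_samples = kde(samples, samples, h)
    return np.quantile(f_at_samples, 1.0 - alpha)
```

This sidesteps the intractable optimization of the theoretical plug-in estimator: no explicit region needs to be constructed, only a density threshold, which is exactly what novelty detection requires.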
Networks, Micro Small Enterprises (MSE'S) and Performance: the ...
African Journals Online (AJOL)
Networks, Micro Small Enterprises (MSE'S) and Performance: the Case of Kenya. ... It adopts the network perspective theoretical approach. Empirically, the ... entrepreneurial personal network as a coping strategy in the process of global
The effect of correlated observations on the performance of distributed estimation
Ahmed, Mohammed
2013-12-01
Estimating an unknown signal in Wireless Sensor Networks (WSNs) requires sensor nodes to transmit their observations of the signal over a multiple access channel to a Fusion Center (FC). The FC uses the received observations, which are corrupted by observation noise and by both channel fading and noise, to find the minimum Mean Square Error (MSE) estimate of the signal. In this paper, we investigate the effect of the source-node correlation (the correlation between sensor node observations and the source signal) and the inter-node correlation (the correlation between sensor node observations) on the performance of the Linear Minimum Mean Square Error (LMMSE) estimator for three correlation models in the presence of channel fading. First, we investigate the asymptotic behavior of the achieved distortion (i.e., MSE) resulting from both the observation and channel noise in a non-fading channel. Then, the effect of channel fading is considered and the corresponding distortion outage probability, the probability that the distortion exceeds a certain value, is found. By representing the distortion as a ratio of indefinite quadratic forms, a closed-form expression is derived for the outage probability that shows its dependence on the correlation. Finally, the new representation of the outage probability allows us to propose an iterative solution for the power allocation problem to minimize the outage probability under total and individual power constraints. Numerical simulations are provided to verify our analytic results. © 2013 IEEE.
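The LMMSE estimator and its closed-form MSE under correlated sensor noise can be sketched as follows. This is a non-fading, scalar-source example with an assumed equicorrelated noise model, not the paper's general setup; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
k = 4                 # number of sensors
sigma_s = 1.0         # source signal standard deviation
rho = 0.5             # inter-node noise correlation

# Equicorrelated observation-noise covariance (positive definite for this rho).
C_n = 0.2 * (rho * np.ones((k, k)) + (1.0 - rho) * np.eye(k))

# Model y = 1*s + n, so C_y = sigma_s^2 * 11^T + C_n and C_sy = sigma_s^2 * 1.
ones = np.ones(k)
C_y = sigma_s ** 2 * np.outer(ones, ones) + C_n
w = np.linalg.solve(C_y, sigma_s ** 2 * ones)        # LMMSE combining weights
mse_theory = sigma_s ** 2 - w @ (sigma_s ** 2 * ones)  # closed-form MSE

# Monte-Carlo check of the closed-form MSE.
T = 20000
s = sigma_s * rng.normal(size=T)
Ln = np.linalg.cholesky(C_n)
noise = rng.normal(size=(T, k)) @ Ln.T               # correlated noise samples
y = s[:, None] + noise
mse_emp = np.mean((y @ w - s) ** 2)
```

Raising `rho` toward 1 makes the sensors' noises redundant and drives `mse_theory` up, which is the inter-node correlation effect the paper quantifies.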
Intra-shot MSE Calibration Technique For LHCD Experiments
International Nuclear Information System (INIS)
Ko, Jinseok; Scott, Steve; Shiraiwa, Syun'ichi; Greenwald, Martin; Parker, Ronald; Wallace, Gregory
2009-01-01
The spurious drift in pitch angle of order several degrees measured by the Motional Stark Effect (MSE) diagnostic in the Alcator C-Mod tokamak over the course of an experimental run day has precluded direct utilization of independent absolute calibrations. Recently, the underlying cause of the drift has been identified as thermal stress-induced birefringence in a set of in-vessel lenses. The shot-to-shot drift can be avoided by using MSE to measure only the change in pitch angle between a reference phase and a phase of physical interest within a single plasma discharge. This intra-shot calibration technique has been applied to Lower Hybrid Current Drive (LHCD) experiments, and the measured current profiles qualitatively demonstrate several predictions of LHCD theory, such as an inverse dependence of current drive efficiency on the parallel refractive index and the presence of off-axis current drive.
Directory of Open Access Journals (Sweden)
G. R. Pasha
2006-07-01
Full Text Available In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this regard, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
Usage-Centered Design Approach in Design of Malaysia Sexuality Education (MSE) Courseware
Chan, S. L.; Jaafar, A.
Problems amongst juveniles increase every year, especially rape cases involving minors. Therefore, the government of Malaysia introduced the National Sexuality Education Guideline in 2005. An early study of the perceptions of teachers and students toward the sexuality education curriculum currently taught in secondary schools was carried out in 2008. The study showed large gaps between the perceptions of teachers and students on several issues in Malaysian sexuality education today. The Malaysia Sexuality Education (MSE) courseware was designed based on several learning theory approaches. MSE was then developed through a comprehensive methodology in which the ADDIE model was integrated with Usage-Centered Design to achieve highly usable courseware. In conclusion, it is hoped that the MSE courseware will help to address the current problems in Malaysian sexuality education.
Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features
DEFF Research Database (Denmark)
Jensen, Jesper; Tan, Zheng-Hua
2015-01-01
In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent......-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense....
Case study: highly loaded MSE bridge supporting structure, Syncrude NMAPS conveyor overpasses
Energy Technology Data Exchange (ETDEWEB)
Scherger, B.; Brockbank, B. [Reinforced Earth Company Ltd., Edmonton, AB (Canada); Mimura, W. [Syncrude Canada Ltd., Edmonton, AB (Canada)
2005-07-01
A crusher and conveyor system was constructed at the Mildred Lake Oil Sands Mine near Fort McMurray, Alberta in order to facilitate ore delivery from Syncrude's North Mine. As part of this North Mine Auxiliary Production System (NMAPS), Syncrude Canada and their consultant Cosyn Technology identified the need for 3 overpasses over conveyors in the North Mine in order to provide unrestricted crossing over the operating conveyor system for the heavy hauler trucks and light vehicle mine traffic. The overpasses were designed to support the dead load of the granular fill and the live load of two loaded heavy hauler trucks, with a design load for each loaded hauler of 670 900 kg. This paper reviewed various aspects of the design from planning, structure selection, and overall stability and bearing capacity considerations. The different designs in the 3 new overpasses accommodated foundation and loading requirements. The designs ranged from the use of precast one-piece reinforced concrete arches, Mechanically Stabilized Earth (MSE) bridge abutment technology, and a combination of the two. The MSE retaining walls directly supported the bridge superstructure without the use of piles or other deep structural foundations. The design was challenging because of the significant vertical stresses transferred onto the wall. All 3 overpasses also used MSE walls for the supporting end wing walls. The main focus of this paper was on the heavily loaded MSE walls supporting the bridge abutment style overpasses. This structure has illustrated the capability of properly designed MSE wall structures with steel soil reinforcement and reinforced precast concrete face panels to successfully carry bridge footing pressure loadings up to 545 kPa. It was concluded that this case has good potential for use in future bridge projects in both the industrial and highway sectors. 2 refs., 7 figs.
Distortion-Rate Bounds for Distributed Estimation Using Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Nihar Jindal
2008-03-01
Full Text Available We deal with centralized and distributed rate-constrained estimation of random signal vectors performed using a network of wireless sensors (encoders) communicating with a fusion center (decoder). For this context, we determine lower and upper bounds on the corresponding distortion-rate (D-R) function. The nonachievable lower bound is obtained by considering centralized estimation with a single sensor which has all observation data available, and by determining the associated D-R function in closed form. Interestingly, this D-R function can be achieved using an estimate-first, compress-afterwards (EC) approach, where the sensor (i) forms the minimum mean-square error (MMSE) estimate for the signal of interest; and (ii) optimally (in the MSE sense) compresses and transmits it to the FC, which reconstructs it. We further derive a novel alternating scheme to numerically determine an achievable upper bound of the D-R function for general distributed estimation using multiple sensors. The proposed algorithm tackles an analytically intractable minimization problem, while it accounts for sensor data correlations. The obtained upper bound is tighter than the one determined by having each sensor perform MSE-optimal encoding independently of the others. Numerical examples indicate that the algorithm performs well and yields D-R upper bounds which are relatively tight with respect to analytical alternatives obtained without taking into account the cross-correlations among sensor data.
An Estimator of Heavy Tail Index through the Generalized Jackknife Methodology
Directory of Open Access Journals (Sweden)
Weiqi Liu
2014-01-01
Full Text Available In practice, the data can sometimes be divided into several blocks with only a few of the largest observations within each block available to estimate the heavy tail index. To address this problem, we propose a new class of estimators through the Generalized Jackknife methodology based on Qi's estimator (2010). These estimators are proved to be asymptotically normal under suitable conditions. Compared to Hill's estimator and Qi's estimator, our new estimator has better asymptotic efficiency in terms of the minimum mean squared error for a wide range of second-order shape parameters. For finite samples, our new estimator still compares favorably to Hill's estimator and Qi's estimator, providing stable sample paths as a function of the number of blocks into which the sample is divided, smaller estimation bias, and smaller MSE.
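For context, the classical Hill estimator against which this abstract benchmarks can be sketched in a few lines; the sample size and number of top order statistics below are illustrative choices, not the paper's:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill's estimator of the extreme-value (heavy tail) index gamma,
    computed from the k largest order statistics of the sample."""
    xs = np.sort(x)[::-1]  # descending order statistics
    return float(np.mean(np.log(xs[:k])) - np.log(xs[k]))

# Standard Pareto(alpha) samples on [1, inf) have true gamma = 1/alpha
rng = np.random.default_rng(1)
alpha = 2.0
x = rng.pareto(alpha, 100_000) + 1.0
gamma_hat = hill_estimator(x, k=2_000)
print(round(gamma_hat, 3))  # close to 1/alpha = 0.5
```

The estimators proposed in the paper target the bias this plug-in estimator suffers when only block maxima are observed and the second-order behavior deviates from the exact Pareto case.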
Analysis of the water dynamics for the MSE-COIL and the MST-COIL
Massidda, L; Kadi, Y; Balhan, B
2005-01-01
In this report, we present the technical specification for the numerical model and the study of the acoustic wave propagation in the water tubes of the extraction septum magnet (MSE) and the thin magnetic septum (MST) in the event of an asynchronous firing of the extraction kickers (MKE). The deposited energy densities, estimated by the high-energy particle transport code FLUKA, were converted to internal heat generation rates according to the time dependence of the extracted beam. The transient response to this thermal load was obtained by simulating power deposition and acoustic wave propagation by the spectral-element code ELSE.
Analytical and Numerical Evaluation of Limit States of MSE Wall Structure
Directory of Open Access Journals (Sweden)
Drusa Marián
2016-12-01
Full Text Available Simplifying the design of Mechanically Stabilized Earth wall structures (MSE walls or MSEWs) is now an important factor that helps us not only to save time and costs, but also to achieve the desired results more reliably. In practice, it is quite common for the designer of a section of motorway or railway line to commission the design from a supplier of geosynthetic materials. The supplier company has the experience and skills, but the general designer does not review the safety level of the design or its efficiency, and simply incorporates it into the overall design of the construction project. A large number of analytical computational methods for the analysis and design of MSE walls and similar structures are known. The problem with these analytical methods is the verification of deformations and of the global stability of the structure. This article aims to clarify two methods of calculating the internal stability of an MSE wall and to compare them with an FEM numerical model. The comparison of design approaches allows us to draft an effective retaining wall and tells us about the appropriateness of the reinforcing element used.
Lateral resistance of piles near vertical MSE abutment walls.
2013-03-01
Full scale lateral load tests were performed on eight piles located at various distances behind MSE walls. The objective of the testing was to determine the effect of spacing from the wall on the lateral resistance of the piles and on the force induc...
Performance assessment of MSE abutment walls in Indiana : final report.
2017-05-01
This report presents a numerical investigation of the behavior of steel strip-reinforced mechanically stabilized earth (MSE) direct bridge abutments under static loading. Finite element simulations were performed using an advanced two-surface boundin...
Modeling and analysis to quantify MSE wall behavior and performance.
2009-08-01
To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...
Estimation of Minimum DNBR Using Cascaded Fuzzy Neural Networks
International Nuclear Information System (INIS)
Kim, Dong Yeong; Yoo, Kwae Hwan; Na, Man Gyun
2015-01-01
This phenomenon of boiling crisis is called departure from nucleate boiling (DNB). DNB phenomena can affect the fuel cladding and fuel pellets. The DNB ratio (DNBR) is defined as the ratio of the expected DNB heat flux to the actual fuel rod heat flux. Since it is very important to monitor and predict the minimum DNBR in a reactor core to prevent boiling crisis and clad melting, a number of studies have been conducted to predict DNBR values. The aim of this study is to estimate the minimum DNBR in a reactor core from the measured signals of the reactor coolant system (RCS) by applying cascaded fuzzy neural networks (CFNN) according to operating conditions. Reactor core monitoring and protection systems require minimum DNBR prediction. The CFNN can optimize the minimum DNBR estimate through the process of repeatedly adding fuzzy neural networks (FNN). The proposed algorithm is trained using a data set prepared for training (development data) and verified using another data set independent of the development data. The developed CFNN models were applied to the first fuel cycle of OPR1000. The RMS errors are 0.23% and 0.12% for positive and negative ASI, respectively.
Directory of Open Access Journals (Sweden)
Joseph Kwon
2010-03-01
Full Text Available Proteomics work resembles the search for a needle in a haystack. The identification of protein biomarkers requires the removal of false protein data from the whole protein mixture. For high-quality proteomic data, even a strict filtration step using the false discovery rate (FDR) is insufficient for obtaining perfect protein information from biological samples. In this study, the cyanobacterial whole membrane fraction was applied to the data-dependent analysis (DDA) mode of LC-MS/MS, which was used along with the data-independent LC-MSE technique in order to evaluate the membrane proteomic data. Furthermore, the identified MSE-information (MSE-i) data, based on peptide mass and retention time, were validated by other database searches, i.e., the probability-based MASCOT and the de novo search engine PEAKS. In the present study, 208 cyanobacterial proteins with an FDR of 5% were identified using data-independent nano-UPLC/MSE acquisition with the Protein Lynx Global Server (PLGS), and 56 of these proteins were predicted membrane proteins. When the 208 MSE-i proteomic data were applied to the DDA mode of LC-MS/MS, the number of identified membrane proteins was 26 and 33 from MASCOT and PEAKS, respectively, with an FDR of 5%. The number of totally overlapping membrane proteins was 25. Therefore, the data-independent LC-MSE identified more proteins with high confidence.
International Nuclear Information System (INIS)
Zarnstorff, M.C.; Synakowski, E.J.
1996-10-01
Previous analyses of Motional Stark Effect (MSE) data to measure the q-profile ignored contributions from the plasma electric field. The MSE measurements are shown to be sensitive to the electric field and require significant corrections for plasmas with large rotation velocities or pressure gradients. MSE measurements from rotating plasmas on the Tokamak Fusion Test Reactor (TFTR) confirm the significance of these corrections and verify their magnitude. Several attractive configurations are considered for future MSE-based diagnostics for measuring the plasma radial electric field. MSE data from TFTR are analyzed to determine the change in the radial electric field between two plasmas. The measured electric field quantitatively agrees with the predictions of neoclassical theory. These results confirm the utility of an MSE electric field measurement.
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
DEFF Research Database (Denmark)
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
Time of Arrival 3-D Position Estimation Using Minimum ADS-B Receiver ...
African Journals Online (AJOL)
The location from which a signal is transmitted can be estimated using the time it takes to be detected at a receiver. The difference between transmission time and the detection time is known as time of arrival (TOA). In this work, an algorithm for 3-dimensional (3-D) position estimation (PE) of an emitter using the minimum ...
Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to achieve parameter identification of the experimental system consisted of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, whereas the output signal is taken from a position sensor. Both the input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10^-5. Based on the results obtained, it was found that DE has a lower MSE than the LS method.
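The LS baseline in this abstract can be sketched on a simulated system; the second-order ARX coefficients and noise level below are hypothetical stand-ins for the pendulum data, and the DE step is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated second-order ARX system (hypothetical coefficients):
# y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + e[t]
a1, a2, b1 = 1.5, -0.7, 0.5
u = rng.choice([-1.0, 1.0], size=500)  # PRBS-like binary input
y = np.zeros(500)
for t in range(2, 500):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1] \
        + 0.01 * rng.standard_normal()

# Least-squares estimate: theta = argmin ||Y - Phi theta||^2
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])  # regressors for y[2:]
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# Validate with the MSE between actual and predicted output
mse = np.mean((y[2:] - Phi @ theta) ** 2)
print(theta.round(3), mse)
```

DE would instead search the coefficient space stochastically with the same MSE as its fitness function, which is how the paper arrives at its lower 3.6601×10^-5 figure.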
Directory of Open Access Journals (Sweden)
Milinkovitch Michel C
2007-11-01
Full Text Available Abstract Background Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with the shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.
Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.
Huang, Sheng-Juan; Yang, Guang-Hong
2017-09-01
This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
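The parametric-bootstrap MSE idea mentioned in this abstract can be illustrated on a much simpler estimator than the small area predictor; the normal model and sample mean below are a hypothetical stand-in, not the paper's two-part random effects model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed sample and a plug-in fit of the assumed parametric model (normal)
x = rng.normal(5.0, 2.0, 50)
mu_hat, sigma_hat = x.mean(), x.std(ddof=1)

# Parametric bootstrap: regenerate data from the fitted model and track the
# estimator's squared error around the known generating parameter mu_hat
B = 2000
boot_err = np.empty(B)
for b in range(B):
    xb = rng.normal(mu_hat, sigma_hat, x.size)
    boot_err[b] = (xb.mean() - mu_hat) ** 2

mse_boot = boot_err.mean()            # bootstrap MSE estimate
mse_theory = sigma_hat**2 / x.size    # true MSE of the sample mean under the model
print(round(mse_boot, 4), round(mse_theory, 4))
```

The paper's simulation study makes exactly this kind of comparison, checking the bootstrap MSE estimates of the small area estimator against the true MSE.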
An approximate method to estimate the minimum critical mass of fissile nuclides
International Nuclear Information System (INIS)
Wright, R.Q.; Jordan, W.C.
1999-01-01
When evaluating systems in criticality safety, it is important to approximate the answer before any analysis is performed. There is currently interest in establishing the minimum critical parameters for fissile actinides. The purpose here is to describe the OB-1 method for estimating the minimum critical mass for thermal systems, based on one-group calculations and 235U spheres fully reflected by water. The observation is made that for water-moderated, well-thermalized systems, the transport and leakage from the system are dominated by water. Under these conditions, two fissile mixtures will have nearly the same critical volume provided the infinite-media multiplication factor (k∞) for the two systems is the same. This observation allows for very simple estimates of critical concentration and mass as a function of the hydrogen-to-fissile (H/X) moderation ratio by comparison to the known 235U system.
Minimum variance linear unbiased estimators of loss and inventory
International Nuclear Information System (INIS)
Stewart, K.B.
1977-01-01
The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs
Evaluation of geofabric in undercut on MSE wall stability : executive summary report.
2011-05-01
Compaction of granular base materials at sites with fine grained native soils often causes unwanted material loss due to penetration. In 2007, ODOT began placing geofabrics in the undercut of MSE walls at the soil/ granular material interface to faci...
Analysis of Beamformer Directed Single-Channel Noise Reduction System for Hearing Aid Applications
DEFF Research Database (Denmark)
Jensen, Jesper; Pedersen, Michael Syskind
2015-01-01
We study multi-microphone noise reduction systems consisting of a beamformer and a single-channel (SC) noise reduction stage. In particular, we present and analyse a maximum likelihood (ML) method for jointly estimating the target and noise power spectral densities (psd's) entering the SC filter....... We show that the estimators are minimum variance and unbiased, and provide closed-form expressions for their mean-square error (MSE). Furthermore, we show that the MSE of the noise psd estimator is particularly simple: it is independent of target signal characteristics, frequency, and microphone...
DEFF Research Database (Denmark)
Jensen, Jesper; Tan, Zheng-Hua
2014-01-01
We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others....... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than...... the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...
Directory of Open Access Journals (Sweden)
Yokoyama Seiya
2012-02-01
Full Text Available Abstract Background Methylation of CpG sites in genomic DNA plays an important role in gene regulation and especially in gene silencing. We have reported mechanisms of epigenetic regulation for the expression of mucins, which are markers of malignancy potential and early detection of human neoplasms. Epigenetic changes in promoter regions appear to be the first step in the expression of mucins. Thus, detection of promoter methylation status is important for early diagnosis of cancer, monitoring of tumor behavior, and evaluating the response of tumors to targeted therapy. However, conventional analytical methods for DNA methylation require a large amount of DNA and have low sensitivity. Methods Here, we report a modified version of bisulfite-DGGE (denaturing gradient gel electrophoresis) using a nested PCR approach. We designated this method methylation specific electrophoresis (MSE). The MSE method comprises the following steps: (a) bisulfite treatment of genomic DNA, (b) amplification of the target DNA by a nested PCR approach, and (c) application to DGGE. To examine whether the MSE method is able to analyze DNA methylation of mucin genes in various samples, we applied it to DNA obtained from cell lines, ethanol-fixed colonic crypts, and human pancreatic juices. Result The MSE method greatly decreases the amount of input DNA. The lower detection limit for distinguishing different methylation statuses is ... Conclusions The MSE method can provide qualitative information on methylated sequence profiles. The MSE method allows sensitive and specific analysis of the DNA methylation pattern of almost any block of multiple CpG sites. The MSE method can be applied to the analysis of DNA methylation status in many different clinical samples, and this may facilitate identification of new risk markers.
Estimation of daily minimum land surface air temperature using MODIS data in southern Iran
Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza
2017-11-01
Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many of these processes are affected by the LSAT at about 5 cm above the ground surface (LSAT5cm). Most previous studies have tried to find statistical models to estimate LSAT at a 2 m height (LSAT2m), which is considered the standardized height, and there are not enough studies of LSAT5cm estimation models. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data, by providing rather extensive spatial coverage, can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model based on the previous day's data to accurately estimate the spatial daily minimum LSAT5cm, which is very important for agricultural frost, in Fars province in southern Iran. Land surface temperature (LST) data were obtained using the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua and Terra satellites at daytime and nighttime, together with normalized difference vegetation index (NDVI) data. These data, along with geometric temperature and elevation information, were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that using the previous day's MODIS Aqua nighttime data provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference (RMSD) = 3.07 °C, adjusted R² = 87%). The model underestimated high and overestimated low minimum LSAT5cm. The accuracy of estimation in winter was found to be lower than in the other seasons (RMSD = 3.55 °C), and in summer and winter the errors were larger than in the remaining seasons.
Estimating minimum polycrystalline aggregate size for macroscopic material homogeneity
International Nuclear Information System (INIS)
Kovac, M.; Simonovski, I.; Cizelj, L.
2002-01-01
During severe accidents, the pressure boundary of the reactor coolant system can be subjected to extreme loadings, which might cause failure. Reliable estimation of the extreme deformations can be crucial to determine the consequences of severe accidents. An important drawback of classical continuum mechanics is its idealization of the inhomogeneous microstructure of materials. Classical continuum mechanics therefore cannot accurately predict the differences between the measured responses of specimens which differ in size but are geometrically similar (the size effect). A numerical approach, which models elastic-plastic behavior at the mesoscopic level, is proposed to estimate the minimum size of a polycrystalline aggregate above which it can be considered macroscopically homogeneous. The main idea is to divide the continuum into a set of sub-continua. The analysis of the macroscopic element is divided into modeling the random grain structure (using Voronoi tessellation and random orientation of the crystal lattice) and calculation of the strain/stress field. The finite element method is used to obtain numerical solutions of the strain and stress fields. The analysis is limited to 2D models. (author)
Distributed Channel Estimation and Pilot Contamination Analysis for Massive MIMO-OFDM Systems
Zaib, Alam
2016-07-22
By virtue of large antenna arrays, massive MIMO systems have a potential to yield higher spectral and energy efficiency in comparison with the conventional MIMO systems. This paper addresses uplink channel estimation in massive MIMO-OFDM systems with frequency selective channels. We propose an efficient distributed minimum mean square error (MMSE) algorithm that can achieve near optimal channel estimates at low complexity by exploiting the strong spatial correlation among antenna array elements. The proposed method involves solving a reduced dimensional MMSE problem at each antenna followed by a repetitive sharing of information through collaboration among neighboring array elements. To further enhance the channel estimates and/or reduce the number of reserved pilot tones, we propose a data-aided estimation technique that relies on finding a set of most reliable data carriers. Furthermore, we use stochastic geometry to quantify the pilot contamination, and in turn use this information to analyze the effect of pilot contamination on channel MSE. The simulation results validate our analysis and show near optimal performance of the proposed estimation algorithms.
A Minimum Fuel Based Estimator for Maneuver and Natural Dynamics Reconstruction
Lubey, D.; Scheeres, D.
2013-09-01
The vast and growing population of objects in Earth orbit (active and defunct spacecraft, orbital debris, etc.) offers many unique challenges when it comes to tracking these objects and associating the resulting observations. Complicating these challenges are the inaccurate natural dynamical models of these objects, the active maneuvers of spacecraft that deviate them from their ballistic trajectories, and the fact that spacecraft are tracked and operated by separate agencies. Maneuver detection and reconstruction algorithms can help with each of these issues by estimating mismodeled and unmodeled dynamics through indirect observation of spacecraft. It also helps to verify the associations made by an object correlation algorithm or aid in making those associations, which is essential when tracking objects in orbit. The algorithm developed in this study applies an Optimal Control Problem (OCP) Distance Metric approach to the problems of Maneuver Reconstruction and Dynamics Estimation. This was first developed by Holzinger, Scheeres, and Alfriend (2011), with a subsequent study by Singh, Horwood, and Poore (2012). This method estimates the minimum fuel control policy rather than the state as a typical Kalman Filter would. This difference ensures that the states are connected through a given dynamical model and allows for automatic covariance manipulation, which can help to prevent filter saturation. Using a string of measurements (either verified or hypothesized to correlate with one another), the algorithm outputs a corresponding string of adjoint and state estimates with associated noise. Post-processing techniques are implemented, which when applied to the adjoint estimates can remove noise and expose unmodeled maneuvers and mismodeled natural dynamics. Specifically, the estimated controls are used to determine spacecraft dependent accelerations (atmospheric drag and solar radiation pressure) using an adapted form of the Optimal Control based natural dynamics
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and can forecast a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been used in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate a similar (unknown) optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients; the log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
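The synthetic comparison described above can be sketched in a few lines. All numbers below (coefficients, noise level, sample size) are illustrative assumptions, not the study's setup; the closed-form Gaussian CRPS is the standard Gneiting–Raftery expression.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=n)                    # ensemble-mean predictor (synthetic)
y = rng.normal(1.0 + 0.8 * x, 1.2)        # truth: mu = 1 + 0.8x, sigma = 1.2

def unpack(theta):
    return theta[0] + theta[1] * x, np.exp(theta[2])  # log-link keeps sigma > 0

def neg_loglik(theta):
    mu, sigma = unpack(theta)
    return -norm.logpdf(y, mu, sigma).sum()

def mean_crps(theta):
    # Closed-form CRPS of a Gaussian forecast (Gneiting & Raftery 2007)
    mu, sigma = unpack(theta)
    z = (y - mu) / sigma
    return np.mean(sigma * (z * (2 * norm.cdf(z) - 1)
                            + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

theta0 = np.array([0.5, 0.5, 0.0])
fit_ml = minimize(neg_loglik, theta0, method="Nelder-Mead",
                  options={"maxiter": 2000})
fit_crps = minimize(mean_crps, theta0, method="Nelder-Mead",
                    options={"maxiter": 2000})
# Under a correct Gaussian assumption both criteria recover nearly the
# same coefficients, consistent with the synthetic case study above
```

Repeating this with a misspecified forecast distribution (e.g. skewed observations) is where the two criteria start to diverge.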
Weighted-MSE based on saliency map for assessing video quality of H.264 video streams
Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.
2011-01-01
The human visual system is very complex and has been studied for many years, particularly for the purpose of efficiently encoding visual content, e.g. video from digital TV. There is physiological and psychological evidence that viewers do not pay equal attention to all exposed visual information, but focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel objective quality assessment metric for the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) at each pixel according to the computed saliency map, yielding a Weighted-MSE (WMSE). Our method was validated through subjective quality experiments.
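The weighting idea can be illustrated with a minimal saliency-weighted MSE sketch. The frames and saliency maps below are synthetic assumptions, not the authors' H.264 pipeline.

```python
import numpy as np

def wmse(ref, deg, saliency):
    """Mean square error with per-pixel saliency weights."""
    w = saliency / saliency.sum()
    return float(np.sum(w * (ref - deg) ** 2))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
deg = ref.copy()
deg[:32] += 10.0                      # distortion confined to the top half

sal_top = np.zeros((64, 64)); sal_top[:32] = 1.0  # attention on distorted half
sal_bot = np.zeros((64, 64)); sal_bot[32:] = 1.0  # attention elsewhere
# A distortion inside the salient region is penalized heavily; the same
# distortion outside it contributes nothing:
# wmse(ref, deg, sal_top) ≈ 100.0, wmse(ref, deg, sal_bot) == 0.0
```

A real saliency map would of course be continuous rather than binary; the weighting formula is unchanged.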
Distributed estimation based on observations prediction in wireless sensor networks
Bouchoucha, Taha
2015-03-19
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
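The benefit of quantizing innovations rather than raw observations can be sketched for a single sensor with an assumed AR(1) observation model. The DPCM-style predictive quantizer below is a simplified stand-in for the letter's scheme, not its actual design.

```python
import numpy as np

rng = np.random.default_rng(2)
n, bits, a, sigma_w = 20000, 4, 0.95, 0.3

# Correlated observation sequence (AR(1) stand-in for a physical process)
w = rng.normal(scale=sigma_w, size=n)
x = np.empty(n)
x[0] = w[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + w[t]

def quantize(v, lo, hi, bits):
    # Uniform quantizer with 2**bits levels over [lo, hi]
    step = (hi - lo) / 2 ** bits
    idx = np.clip(np.floor((v - lo) / step), 0, 2 ** bits - 1)
    return lo + (idx + 0.5) * step

# Scheme 1: quantize the raw observations over their full dynamic range
q_obs = quantize(x, x.min(), x.max(), bits)
mse_obs = np.mean((x - q_obs) ** 2)

# Scheme 2 (predictive): quantize the innovation x[t] - a*xhat[t-1],
# whose range is much smaller, with the same number of bits
lo, hi = -4 * sigma_w, 4 * sigma_w
xhat = np.empty(n)
xhat[0] = q_obs[0]
for t in range(1, n):
    pred = a * xhat[t - 1]
    xhat[t] = pred + quantize(x[t] - pred, lo, hi, bits)
mse_inn = np.mean((x - xhat) ** 2)
# Same bit budget, but the innovation quantizer's step is finer, so
# mse_inn comes out well below mse_obs
```

The stronger the correlation `a`, the smaller the innovation range and the larger the gain from transmitting innovations.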
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
DEFF Research Database (Denmark)
Wu, Xuan; Huang, Shoudao; Liu, Xiao
2017-01-01
This paper presents a new initial rotor position estimation method for an interior permanent magnet synchronous motor. The proposed method includes two steps: firstly, the minimum voltage vectors are injected to estimate the rotor position. Secondly, in order to identify the magnet polarity...
Performance of a New Restricted Biased Estimator in Logistic Regression
Directory of Open Access Journals (Sweden)
Yasin ASAR
2017-12-01
Full Text Available It is known that the variance of the maximum likelihood estimator (MLE) inflates when the explanatory variables are correlated. This situation is called the multicollinearity problem, and as a result the estimates of the model may not be trustworthy. Therefore, this paper introduces a new restricted estimator (RLTE) that may be applied to overcome multicollinearity when the parameters lie in some linear subspace in logistic regression. The mean squared errors (MSE) and the matrix mean squared errors (MMSE) of the estimators considered in this paper are given. A Monte Carlo experiment is designed to evaluate the performances of the proposed estimator, the restricted MLE (RMLE), the MLE, and the Liu-type estimator (LTE), with MSE as the criterion of performance. Moreover, a real data example is presented. According to the results, the proposed estimator performs better than the MLE, RMLE, and LTE.
Directory of Open Access Journals (Sweden)
Anupam Pathak
2014-11-01
Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage is its ability to model lifetime events better than other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length, and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution are found to be superior to the MLEs of R(t) and P.
MSE measurements for sawtooth and non-inductive current drive studies in KSTAR
Ko, J.; Park, H.; Bae, Y. S.; Chung, J.; Jeon, Y. M.
2016-10-01
Two major topics that motivate measurement of the magnetic-field-line rotational transform profile in toroidal plasma systems are the long-standing issue of complete versus incomplete reconnection models of the sawtooth instability, and the need, in future reactor-relevant tokamak devices, for non-inductive steady-state current sustainment. The motional Stark effect (MSE) diagnostic based on the photoelastic-modulator (PEM) approach is one of the most reliable means to measure the internal magnetic pitch, and thus the rotational transform, or its reciprocal (q), profile. The MSE system has been commissioned for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, along with the development of various techniques to minimize systematic offset errors such as Faraday rotation and misalignment of the bandpass filters. The diagnostic has revealed that the central q is well correlated with the sawtooth oscillation, maintaining its value above unity during the MHD quiescent period, and that the response of the q profile to external current drive, such as electron cyclotron wave injection, involves not only a local change of the pitch angle gradient but also a significant shift of the magnetic topology due to the wave energy transport. Work supported by the Ministry of Science, ICT and Future Planning, Korea.
Weighted-noise threshold based channel estimation for OFDM ...
Indian Academy of Sciences (India)
Existing optimal time-domain thresholds exhibit suboptimal behavior for completely unavailable KCS ... Compared with no truncation case, truncation improved the MSE ... channel estimation errors has been studied. ...... Consumer Electron.
Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2013-01-01
Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
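The simulation comparison can be sketched with a toy diurnal effort curve. The hours, sample sizes, and effort profile below are hypothetical, not the Idaho fishery data.

```python
import numpy as np

rng = np.random.default_rng(3)
hours, n_sample = 12, 4        # fishable hours per day, hours counted (assumed)
# Smooth diurnal effort curve, peaking mid-day (hypothetical anglers per hour)
effort = 20 + 15 * np.sin(np.linspace(0, np.pi, hours))
total = effort.sum()

def srs_estimate():
    # Simple random sampling: any n_sample distinct hours
    idx = rng.choice(hours, size=n_sample, replace=False)
    return hours * effort[idx].mean()

def sys_estimate():
    # Systematic random sampling: a random start, then every (hours/n)th hour
    step = hours // n_sample
    start = rng.integers(step)
    return hours * effort[np.arange(start, hours, step)].mean()

reps = 5000
mse_srs = np.mean([(srs_estimate() - total) ** 2 for _ in range(reps)])
mse_sys = np.mean([(sys_estimate() - total) ** 2 for _ in range(reps)])
# With a smooth trend, systematic samples are spread across the day, so
# their MSE falls well below that of simple random samples
```

Both designs are unbiased here; the gap is purely in variance, mirroring the roughly two-fold MSE difference the study reports.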
Collier, Lesley; Jakob, Anke
2017-10-01
Multisensory environments (MSEs) for people with dementia have been available for over 20 years but are used in an ad hoc manner with an eclectic range of equipment. Care homes have endeavored to utilize this approach but have struggled to find a design and approach that works for this setting. The study aims were to appraise the evolving concept of MSEs from a user perspective, to study their aesthetic and functional qualities, to identify barriers to staff engagement with a sensory environment approach, and to identify design criteria to improve the potential of MSEs for people with dementia. Data were collected from 16 care homes with experience of MSEs using ethnographic methods, incorporating semi-structured interviews and observations of MSE design. Analysis was undertaken using descriptive statistics and thematic analysis. Observations revealed equipment that predominantly stimulated vision and touch. Thematic analysis of the semi-structured interviews revealed six themes: not knowing what to do in the room, good for people in the later stages of the disease, reduces anxiety, it's a good activity, design and setting up of the space, and including relatives and care staff. Few MSEs in care homes are designed to meet the needs of people with dementia, and staff receive little training in how to facilitate sessions. As such, MSEs are often underused despite perceived benefits. The results of this study have been used to identify design principles that have been reviewed by relevant stakeholders.
LENUS (Irish Health Repository)
O'Donnell, Brian D
2009-07-01
Ultrasound guidance facilitates precise needle and injectate placement, increasing axillary block success rates, reducing onset times, and permitting local anesthetic dose reduction. The minimum effective volume of local anesthetic in ultrasound-guided axillary brachial plexus block is unknown. The authors performed a study to estimate the minimum effective anesthetic volume of 2% lidocaine with 1:200,000 epinephrine (2% LidoEpi) in ultrasound-guided axillary brachial plexus block.
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
DYNAMIC PARAMETER ESTIMATION BASED ON MINIMUM CROSS-ENTROPY METHOD FOR COMBINING INFORMATION SOURCES
Czech Academy of Sciences Publication Activity Database
Sečkárová, Vladimíra
2015-01-01
Roč. 24, č. 5 (2015), s. 181-188 ISSN 0204-9805. [XVI-th International Summer Conference on Probability and Statistics (ISCPS-2014). Pomorie, 21.6.-29.6.2014] R&D Projects: GA ČR GA13-13502S Grant - others:GA UK(CZ) SVV 260225/2015 Institutional support: RVO:67985556 Keywords : minimum cross-entropy principle * Kullback-Leibler divergence * dynamic diffusion estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/AS/seckarova-0445817.pdf
Directory of Open Access Journals (Sweden)
Xingfeng Si
2014-05-01
Full Text Available Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing the minimum trapping effort needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera-days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera-days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera-days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period.
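The minimum-effort calculation can be sketched with a simple expected-richness model. The per-camera-day detection probabilities below are hypothetical, not the paper's fitted values, and independence across days is assumed.

```python
import numpy as np

# Hypothetical per-camera-day detection probabilities for 10 resident
# species, from common to very rare (not the study's estimates)
p_detect = np.array([0.12, 0.09, 0.05, 0.03, 0.02,
                     0.01, 0.008, 0.004, 0.002, 0.001])

def expected_richness(days):
    # Expected number of species photographed at least once in `days`
    # camera-days, assuming independent daily detections
    return float(np.sum(1.0 - (1.0 - p_detect) ** days))

def minimum_effort(target_richness):
    # Smallest effort whose expected richness reaches the target
    days = 1
    while expected_richness(days) < target_richness:
        days += 1
    return days

half_community = minimum_effort(5.0)  # camera-days to expect 5 of 10 species
```

The steep cost of the rarest species (p = 0.001) is why detecting "all 10" takes roughly an order of magnitude more effort than detecting most of the community.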
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available In this study, parameter identification of the damped compound pendulum system is proposed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve parameter identification of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses. Finally, a comparative study is conducted between BA and a conventional estimation method (least squares). Based on the results obtained, the MSE produced by the Bat Algorithm outperforms that of the least squares (LS) method.
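The ARX identification pipeline can be sketched on synthetic data. The "true" parameters and noise level are assumptions, and a crude greedy random search stands in for the bat algorithm (it is not BA itself); a least-squares fit provides the conventional baseline.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
u = np.sign(rng.normal(size=n))           # PRBS-like binary input
a1, a2, b1 = -1.5, 0.7, 0.5               # assumed "true" ARX(2,1) parameters
y = np.zeros(n)
for t in range(2, n):
    y[t] = -a1 * y[t - 1] - a2 * y[t - 2] + b1 * u[t - 1] + 0.01 * rng.normal()

def mse(theta):
    # One-step-ahead prediction error of the ARX(2,1) model
    yp = -theta[0] * y[1:-1] - theta[1] * y[:-2] + theta[2] * u[1:-1]
    return float(np.mean((y[2:] - yp) ** 2))

# Conventional least-squares baseline
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1]])
theta_ls, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# Greedy random search as a simplified stand-in for the bat algorithm
best, best_mse = np.zeros(3), mse(np.zeros(3))
for _ in range(4000):
    cand = best + 0.1 * rng.normal(size=3)
    m = mse(cand)
    if m < best_mse:
        best, best_mse = cand, m
```

For this linear-in-parameters model the LS solution is already optimal; metaheuristics like BA become attractive when the cost surface is non-convex or the model is nonlinear in its parameters.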
International Nuclear Information System (INIS)
Dumonteil, E.; Diop, C. M.
2009-01-01
This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper elaborates a study of censored survival data from cancer patients after treatment, using Bayesian estimation under the LINEX loss function for a survival model assumed to follow an exponential distribution. With a Gamma prior, the likelihood yields a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL via the LINEX approximation. After obtaining λ̂_BL, the estimators of the hazard function ĥ_BL and survival function Ŝ_BL can be found. Finally, we compare the results of maximum likelihood estimation (MLE) and the LINEX approach by looking for the smaller MSE. The results show that the MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while those under the Bayesian LINEX approach are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian LINEX estimator is better than the MLE.
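For uncensored exponential data the LINEX Bayes estimator has a closed form via the Gamma moment generating function, which makes the MLE comparison easy to sketch. The prior hyperparameters, LINEX shape, and sample size below are assumptions; censoring is omitted for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)
lam_true, n = 0.5, 25
a, b, c = 2.0, 3.0, 1.0       # assumed Gamma(a, b) prior and LINEX shape c

def linex_estimate(t):
    # Exponential likelihood + Gamma(a, b) prior -> Gamma(a + n, b + sum t)
    # posterior; under LINEX loss the Bayes estimator is
    # -(1/c) * log E[exp(-c * lambda)], evaluated via the Gamma MGF.
    an, bn = a + t.size, b + t.sum()
    return (an / c) * np.log(1.0 + c / bn)

def mle_estimate(t):
    return t.size / t.sum()

reps = 3000
sq_bl, sq_ml = [], []
for _ in range(reps):
    t = rng.exponential(1.0 / lam_true, size=n)
    sq_bl.append((linex_estimate(t) - lam_true) ** 2)
    sq_ml.append((mle_estimate(t) - lam_true) ** 2)
mse_bl, mse_ml = float(np.mean(sq_bl)), float(np.mean(sq_ml))
# The shrinkage supplied by the prior gives the LINEX Bayes estimator a
# smaller MSE than the MLE at this sample size
```

The small MSE margin in the paper's table has the same character: the Bayes estimator wins, but not dramatically, because the data already dominate the prior.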
DEFF Research Database (Denmark)
Mardal, Marie; Dalsgaard, Petur Weihe; Qi, Bing
2018-01-01
metabolites of the synthetic cannabinoids, AMB-CHMICA and 5C-AKB48, using an in silico-assisted workflow with analytical data acquired using ultra-high-performance liquid chromatography–(ion mobility spectroscopy)–high resolution–mass spectrometry in data-independent acquisition mode (UHPLC-(IMS)-HR-MSE). The metabolites were identified after incubation with rat and pooled human hepatocytes using UHPLC-HR-MSE, followed by UHPLC-IMS-HR-MSE. Metabolites of AMB-CHMICA and 5C-AKB48 were predicted with Meteor (Lhasa Ltd) and imported to the UNIFI software (Waters). The predicted metabolites were assigned to analytical components supported by the UNIFI in silico fragmentation tool. The main metabolic pathway of AMB-CHMICA was O-demethylation and hydroxylation of the methylhexyl moiety. For 5C-AKB48, the main metabolic pathways were hydroxylation(s) of the adamantyl moiety and oxidative dechlorination with subsequent...
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
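The bias-variance bookkeeping described above can be reproduced on a toy linear inverse problem. The forward model, noise level, and regularization values below are arbitrary assumptions; Tikhonov regularization stands in for the paper's regularized reconstruction.

```python
import numpy as np

rng = np.random.default_rng(6)
m, p, sigma = 40, 20, 0.1
A = rng.normal(size=(m, p))       # toy forward model (not a diffusion model)
x_true = rng.normal(size=p)
y_clean = A @ x_true

def reconstruct(y, alpha):
    # Tikhonov-regularized least squares, a simple stand-in for the
    # regularized image reconstruction step
    return np.linalg.solve(A.T @ A + alpha * np.eye(p), A.T @ y)

results = {}
for alpha in (1e-3, 1.0, 100.0):
    # 100 repeated reconstructions with fresh measurement noise, mirroring
    # the paper's repeated-reconstruction estimate of bias and variance
    xs = np.stack([reconstruct(y_clean + sigma * rng.normal(size=m), alpha)
                   for _ in range(100)])
    bias2 = float(np.mean((xs.mean(axis=0) - x_true) ** 2))
    var = float(np.mean(xs.var(axis=0)))
    mse = float(np.mean((xs - x_true) ** 2))
    results[alpha] = (bias2, var, mse)    # mse decomposes as bias2 + var
```

Sweeping `alpha` traces exactly the trade-off the abstract describes: bias dominates at high regularization, variance dominates near the unregularized solution.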
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Wykes, T; Evans, J; Paton, C; Barnes, T R E; Taylor, D; Bentall, R; Dalton, B; Ruffell, T; Rose, D; Vitoratou, S
2017-10-01
Capturing service users' perspectives can highlight additional and different concerns to those of clinicians, but there are no up-to-date, psychometrically sound self-report measures of the side effects of antipsychotic medications. Aim: To develop a psychometrically sound measure to identify antipsychotic side effects important to service users, the Maudsley Side Effects (MSE) measure. An initial item bank was subjected to a Delphi exercise (n = 9) with psychiatrists and pharmacists, followed by service user focus groups and expert panels (n = 15) to determine item relevance and language. Feasibility and comprehensive psychometric properties were established in two samples (n = 43 and n = 50). We investigated whether we could predict the three most important side effects for individuals from their frequency, severity, and life impact. The MSE is a 53-item measure with good reliability and validity. Poorer mental and physical health, but not psychotic symptoms, was related to side-effect burden. Seventy-nine percent of items were chosen as one of the three most important effects. Severity, impact, and distress predicted importance only for 'putting on weight', which was more distressing, more severe, and had more life impact in those for whom it was most important. The MSE is a self-report questionnaire that reliably identifies the side-effect burden as experienced by patients. Identifying key side effects important to patients can act as a starting point for joint decision making on the type and dose of medication.
Application of Firefly Algorithm for Parameter Estimation of Damped Compound Pendulum
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents an investigation into parameter estimation of the damped compound pendulum using the firefly algorithm (FA). Estimating the damped compound pendulum requires a good model, so the aim of the work described in this paper is to obtain a dynamic model of the damped compound pendulum. Considering a discrete-time form for the system, an autoregressive with exogenous input (ARX) model structure was selected. To collect input-output data from the experiment, a PRBS signal is used as the input to regulate the motor speed, while the output signal is taken from a position sensor. The FA is used to estimate the parameters of a second-order model. The model was validated by comparing the measured output against the predicted output via the mean square error (MSE) between them, and the performance of the FA is likewise measured in terms of MSE.
The Distribution of the Sample Minimum-Variance Frontier
Raymond Kan; Daniel R. Smith
2008-01-01
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
Preamble and pilot symbol design for channel estimation in OFDM systems with null subcarriers
Directory of Open Access Journals (Sweden)
Ohno Shuichi
2011-01-01
Full Text Available Abstract In this article, the design of preambles for channel estimation and of pilot symbols for pilot-assisted channel estimation in orthogonal frequency division multiplexing (OFDM) systems with null subcarriers is studied. Both the preambles and pilot symbols are designed to minimize the ℓ2 or the ℓ∞ norm of the channel estimate's mean-squared errors (MSE) in frequency-selective environments. We use a convex optimization technique to find the optimal power distribution for the preamble by casting the MSE minimization problem as a semidefinite programming problem. Then, using the designed optimal preamble as an initial value, we iteratively select the placement of, and optimally distribute power to, the selected pilot symbols. Design examples consistent with IEEE 802.11a as well as IEEE 802.16e are provided to illustrate the superior performance of our proposed method over equi-spaced, equi-powered pilot symbols and partially equi-spaced pilot symbols.
Design and installation of the MSE septum system in the new LSS4 extraction channel of the SPS
Balhan, B; Guinand, R; Luiz, F; Rizzo, A; Weterings, W; CERN. Geneva. AB Department
2003-01-01
For the extraction of the beam from the Super Proton Synchrotron (SPS) to ring 2 of the Large Hadron Collider (LHC) and the CERN Neutrino to Gran Sasso (CNGS) facility, a new fast-extraction system has been installed in the long straight section LSS4 of the SPS. Besides extraction bumpers, enlarged aperture quadrupoles and extraction kicker magnets (MKE), six conventional DC septum magnets (MSE) are used. These magnets are mounted on a single mobile retractable support girder, which is motorised in order to optimise the local SPS aperture during setting up. The MSE septa are connected by a so-called plug-in system to a rigid water-cooled bus bar, which itself is powered by water-cooled cables. In order to avoid destruction of the septum magnet coils by direct impact of the extracted beam, a dilution element (TPSG) has been placed immediately upstream of the first septum coil. The whole system is kept at the required vacuum pressure by ion pumps attached to separate modules (MP). In this note we present the de...
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Preliminary estimation of minimum target dose in intracavitary radiotherapy for cervical cancer
Energy Technology Data Exchange (ETDEWEB)
Ohara, Kiyoshi; Oishi-Tanaka, Yumiko; Sugahara, Shinji; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine
2001-08-01
In intracavitary radiotherapy (ICRT) for cervical cancer, the minimum target dose (D_min) pertains to local disease control more directly than does the reference point A dose (D_A). However, ICRT has traditionally been performed without specifying D_min, since the target volume was not identified. We have estimated D_min retrospectively by identifying tumors using magnetic resonance (MR) images. Pre- and posttreatment MR images of 31 patients treated with high-dose-rate ICRT were used. ICRT was performed once weekly at 6.0 Gy D_A, and involved 2-5 insertions for each patient, 119 insertions in total. D_min was calculated simply at the point A level using the tumor width (W_A) for comparison with D_A. W_A at each insertion was estimated by regression analysis with pre- and posttreatment W_A. D_min for each insertion varied from 3.0 to 46.0 Gy, a 16-fold difference. The ratio of total D_min to total D_A for each patient varied from 0.5 to 6.5. The intrapatient D_min difference between the initial and final insertions varied from 1.1 to 3.4. This preliminary estimation revealed that D_min varies widely under generic dose prescription. Thorough D_min specification will be realized when ICRT-applicator insertion is performed under MR imaging. (author)
Estimating Frequency by Interpolation Using Least Squares Support Vector Regression
Directory of Open Access Journals (Sweden)
Changwei Ma
2015-01-01
The discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important approach to single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above a threshold value, its mean square error (MSE) lies very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its MSE performance is directly proportional to its computational cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only retains excellent generalization and fitting capabilities but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm strikes a good compromise between computational cost and MSE performance, under the assumption that the sample size, the number of DFT points, and the resampling points are known in advance.
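As a minimal sketch of the DFT-based periodogram maximizer that this abstract builds on (illustrative only, not the paper's LS-SVR interpolation; the sampling rate, tone frequency, and noise level are assumed values):

```python
import numpy as np

def dft_peak_frequency(x, fs, n_fft=4096):
    """Coarse ML estimate of a single-tone frequency: maximize the
    periodogram over a zero-padded DFT grid of n_fft points."""
    spectrum = np.abs(np.fft.rfft(x, n=n_fft))
    k = int(np.argmax(spectrum))
    return k * fs / n_fft  # grid spacing is fs / n_fft

# Noisy single sinusoid with assumed (illustrative) parameters.
fs, f0, n = 1000.0, 123.4, 256
rng = np.random.default_rng(0)
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(n)
f_hat = dft_peak_frequency(x, fs)  # accurate to within one grid bin here
```

Halving the grid spacing requires doubling `n_fft`, which is the cost/accuracy trade-off that interpolating the Fourier coefficients is meant to relax.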
Monthly ENSO Forecast Skill and Lagged Ensemble Size
Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.
2018-04-01
The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skills of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
Basis expansion model for channel estimation in LTE-R communication system
Directory of Open Access Journals (Sweden)
Ling Deng
2016-05-01
This paper investigates fast time-varying channel estimation in LTE-R communication systems. The Basis Expansion Model (BEM) is adopted to fit the fast time-varying channel in a high-speed railway communication scenario. The channel impulse response is modeled as the sum of basis functions multiplied by different coefficients. The optimal coefficients are obtained by theoretical analysis. Simulation results show that a Generalized Complex-Exponential BEM (GCE-BEM) outperforms a Complex-Exponential BEM (CE-BEM) and a polynomial BEM in terms of Mean Squared Error (MSE). Moreover, the MSE of the CE-BEM decreases gradually as the number of basis functions increases. The GCE-BEM achieves satisfactory performance even under severe channel fading.
Directory of Open Access Journals (Sweden)
Hongjun Xu
2011-07-01
A channel and delay estimation algorithm for both positive and negative delays, based on the distributed Alamouti scheme, has recently been discussed for base-station-based asynchronous cooperative systems in frequency-flat fading channels. This paper extends the algorithm, a maximum likelihood estimator, to frequency-selective fading channels. The minimum mean square error (MMSE) performance of channel estimation for both the packet scheme and the normal scheme is discussed, as is the symbol error rate (SER) performance of equalisation and detection for both time-reversal space-time block code (STBC) and single-carrier STBC. The MMSE simulation results demonstrated the superior performance of the packet scheme over the normal scheme, with an improvement of up to 6 dB when feedback was used in the frequency-selective channel at an MSE of 3 x 10^{-2}. The SER simulation results showed that, although both schemes achieved similar diversity orders, the packet scheme demonstrated a 1 dB coding gain over the normal scheme at a SER of 10^{-5}. Finally, the SER simulations showed that the frequency-selective fading system outperformed the frequency-flat fading system.
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
Muhammad Zahid Rashid
2011-04-01
The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimating the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE), and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for comparing these methods, and determined the best estimation method across different parameter values and sample sizes.
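A rough sketch of such a Monte Carlo comparison (covering only the moment and maximum likelihood estimators from the list above, with assumed parameter values, sample size, and replication count):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5      # assumed location and scale
n, reps = 50, 2000
truth = np.array([mu, sigma])

mse_mle = np.zeros(2)
mse_me = np.zeros(2)
for _ in range(reps):
    x = mu + rng.exponential(sigma, size=n)
    # MLE: location = sample minimum, scale = mean excess over it
    est_mle = np.array([x.min(), x.mean() - x.min()])
    # Moment estimators: E[X] = mu + sigma, Var[X] = sigma^2
    s = x.std(ddof=1)
    est_me = np.array([x.mean() - s, s])
    mse_mle += (est_mle - truth) ** 2
    mse_me += (est_me - truth) ** 2
mse_mle /= reps
mse_me /= reps
# The MLE pins down the location far more precisely than the
# moment estimator, since the sample minimum converges at rate 1/n.
```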
A Substantial Plume of Escaping Planetary Ions in the MSE Northern Hemisphere Observed by MAVEN
Dong, Y.; Fang, X.; Brain, D. A.; McFadden, J. P.; Halekas, J. S.; Connerney, J. E. P.; Curry, S.; Harada, Y.; Luhmann, J. G.; Jakosky, B. M.
2015-12-01
The Mars-solar wind interaction accelerates and transports planetary ions away from Mars through a number of processes, including pick-up by the electromagnetic fields. The Mars Atmospheric and Volatile EvolutioN (MAVEN) spacecraft has frequently detected strong escaping planetary ion fluxes in both the tailward and the upstream solar wind motional electric field directions since the beginning of its science phase in November 2014. Our statistical study, using three months of MAVEN data from November 2014 through February 2015, illustrates a substantial plume-like escaping planetary ion population organized by the upstream electric field, with strong fluxes widely distributed in the northern hemisphere of the Mars-Sun-Electric-field (MSE) coordinate system, which is generally consistent with model predictions. The plume constitutes an important planetary ion escape channel from the Martian atmosphere in addition to the tailward escape. The >25 eV O+ escape rate through the plume is estimated to be ~35% of the tailward escape and ~25% of the total escape. We will compare the dynamics of the plume and tailward escaping ions based on their velocity-space distributions with respect to the electromagnetic fields. We will also discuss the variations of the plume characteristics among different ion species (O+, O2+, and CO2+) and under different solar wind and interplanetary magnetic field (IMF) conditions.
Directory of Open Access Journals (Sweden)
Zhenyi Jia
2017-12-01
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have higher accuracy: the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
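The cross-validation error measurement described above can be sketched generically; the snippet below runs a leave-one-out MSE loop using inverse-distance weighting as a simple stand-in interpolator (neither Kriging nor a BP network) on synthetic coordinates rather than the Kunshan samples:

```python
import numpy as np

def idw_predict(coords, values, query, power=2.0):
    """Inverse-distance-weighted interpolation (stand-in for Kriging/BPNN)."""
    d = np.linalg.norm(coords - query, axis=1)
    if np.any(d == 0):
        return values[np.argmin(d)]  # exact at a sampled location
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

def loo_cv_mse(coords, values):
    """Leave-one-out cross-validation MSE of the interpolator."""
    errs = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        pred = idw_predict(coords[mask], values[mask], coords[i])
        errs.append((pred - values[i]) ** 2)
    return float(np.mean(errs))

# Synthetic spatial field sampled at 40 random locations.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(40, 2))
values = np.sin(coords[:, 0]) + 0.1 * rng.standard_normal(40)
mse = loo_cv_mse(coords, values)
```

The same loop works for any interpolator: swap `idw_predict` for a Kriging or neural-network predictor to reproduce the comparison the abstract reports.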
Distributive estimation of frequency selective channels for massive MIMO systems
Zaib, Alam
2015-12-28
We consider frequency selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low-complexity distributed LMMSE algorithm is proposed that attains near-optimal channel impulse response (CIR) estimates from noisy observations at the receive antenna array. In the proposed method, every antenna estimates the CIRs of its neighborhood, followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of the shared estimates, which converges to the near-optimal LMMSE solution. The simulation results validate the near-optimal performance of the proposed algorithm in terms of mean square error (MSE). © 2015 EURASIP.
Employment Effects of Minimum and Subminimum Wages. Recent Evidence.
Neumark, David
Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…
Minimum Wage Effects on Educational Enrollments in New Zealand
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
Who Benefits from a Minimum Wage Increase?
John W. Lopresti; Kevin J. Mumford
2015-01-01
This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...
Minimum wage hikes and the wage growth of low-wage workers
Joanna K Swaffield
2012-01-01
This paper presents difference-in-differences estimates of the impact of the British minimum wage on the wage growth of low-wage employees. Estimates of the probability of low-wage employees receiving positive wage growth have been significantly increased by the minimum wage upratings or hikes. However, whether the actual wage growth of these workers has been significantly raised or not depends crucially on the magnitude of the minimum wage hike considered. Findings are consistent with employ...
On the MSE Performance and Optimization of Regularized Problems
Alrashdi, Ayed
2016-11-01
The amount of data that has been measured, transmitted/received, and stored in recent years has dramatically increased, so today we are in the world of big data. Fortunately, in many applications we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness, and block sparsity. This covers a wide range of applications such as machine learning, medical imaging, signal processing, social networks, and computer vision. It has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function, which gives rise to interest in regularized inverse problems, where the process of reconstructing the structured signal is modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems as ridge regression, LASSO, square-root LASSO, and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
Sturniolo, Simone; Pieruccini, Marco; Corti, Maurizio; Rigamonti, Attilio
2013-01-01
One-dimensional ¹H NMR measurements have been performed to probe slow molecular motions in nitrile butadiene rubber (NBR) around its calorimetric glass transition temperature Tg. The purpose is to show how software-aided data analysis can extract meaningful dynamical data from these measurements. Spin-lattice relaxation time, free induction decay (FID), and magic sandwich echo (MSE) measurements have been carried out at different values of the static field, as a function of temperature. The efficiency of the MSE signal in reconstructing the original FID exhibits a sudden minimum at a given temperature, with a slight dependence on the measuring frequency. Computer simulations performed with the software SPINEVOLUTION have shown that the minimum in the reconstruction efficiency of the MSE signal corresponds to the average motional frequency taking a value around the inter-proton coupling. The FID signals have been fitted with a truncated form of a newly derived exact correlation function for the transverse magnetization of a dipolar interacting spin pair, which avoids the restriction of the stationary and Gaussian approximations. A direct estimate of the conformational dynamics on approaching Tg is obtained, and the results agree with the analysis performed via the MSE reconstruction efficiency. The occurrence of a wide distribution of correlation frequencies for the chain motions, with a Vogel-Fulcher-type temperature dependence, is addressed. A route for a fruitful study of the dynamics accompanying the glass transition by a variety of NMR measurements is thus proposed. Copyright © 2013 Elsevier Inc. All rights reserved.
Robust estimation of event-related potentials via particle filter.
Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito
2016-03-01
In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model of the recorded EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency, and compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate the ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform had a low signal-to-noise ratio (SNR) (i.e., the power ratio between the ERP and the background EEG). We calculated the amount of averaging necessary after applying a particle filter to produce a result equivalent to that of conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging at low SNR in terms of MSE as well as P300 peak amplitude and latency. For EEG data produced by the P300 speller, our filter yielded ERP waveforms that were stable compared with averages produced by the conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required in simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced by a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
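A toy bootstrap particle filter for a smooth trend observed in noise gives the flavor of this approach (a sketch under assumed random-walk dynamics, not the authors' trend/autoregressive EEG model; all parameters are illustrative):

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=400, q=0.1, r=0.5):
    """Bootstrap particle filter tracking a random-walk trend observed
    in Gaussian noise; returns the posterior-mean estimate per sample."""
    rng = np.random.default_rng(6)
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in obs:
        particles = particles + q * rng.standard_normal(n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)                  # likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))                 # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(estimates)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 200)
signal = np.exp(-((t - 0.5) / 0.1) ** 2)       # a P300-like bump (toy truth)
obs = signal + 0.5 * rng.standard_normal(200)  # single noisy "trial"
est = bootstrap_particle_filter(obs)
mse_raw = float(np.mean((obs - signal) ** 2))
mse_pf = float(np.mean((est - signal) ** 2))
```

The posterior mean tracks the bump while averaging out much of the observation noise, which is the mechanism by which the paper's filter reduces the number of trials that must be averaged.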
Directory of Open Access Journals (Sweden)
MILIVOJEVIC, Z. N.
2010-02-01
In this paper, the fundamental frequency estimation results for MP3-modeled speech signals are analyzed. The fundamental frequency was estimated by the Picking-Peaks algorithm with implemented Parametric Cubic Convolution (PCC) interpolation. The efficiency of PCC was tested for the Catmull-Rom, Greville, and Greville two-parametric kernels. The window giving optimal results in terms of MSE was chosen.
Transmuted of Rayleigh Distribution with Estimation and Application on Noise Signal
Ahmed, Suhad; Qasim, Zainab
2018-05-01
This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ), since the studied distribution is useful for representing signal data distributions and failure data models. The transmuted parameter, with |λ| ≤ 1, is estimated along with the original parameter (θ) by the methods of moments and maximum likelihood, using different sample sizes (n = 25, 50, 75, 100), and the estimation results are compared by a statistical measure, the mean square error (MSE).
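The untransmuted baseline of such an MSE comparison (λ = 0, i.e., the plain Rayleigh case) can be sketched as follows, with assumed values of θ, the sample size, and the replication count:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 100, 3000          # assumed scale, sample size, replications
err_ml, err_mom = [], []
for _ in range(reps):
    x = rng.rayleigh(scale=theta, size=n)
    # Maximum likelihood: theta_hat = sqrt(mean(x^2) / 2)
    t_ml = np.sqrt(np.mean(x ** 2) / 2.0)
    # Method of moments: E[X] = theta * sqrt(pi / 2)
    t_mom = np.mean(x) * np.sqrt(2.0 / np.pi)
    err_ml.append((t_ml - theta) ** 2)
    err_mom.append((t_mom - theta) ** 2)
mse_ml, mse_mom = float(np.mean(err_ml)), float(np.mean(err_mom))
```

Both estimators are root-n consistent here; the ML estimator attains the smaller asymptotic variance, so its simulated MSE is not larger than that of the moment estimator.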
Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR
Directory of Open Access Journals (Sweden)
Xueqian Liu
2013-01-01
The maximum likelihood (ML) algorithm is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects arise and estimation performance degrades greatly. It has been proved that the support vector machine (SVM) is suitable for small samples. Consequently, we exploit the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and regard the LS-SVR process as a time-varying linear filter that increases the input SNR of received signals and decreases the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example and integrating data analysis with experimental validation, we verify that if the LS-SVR parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals but also improves the frequency estimation performance. In the experimental simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms: the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay algorithm. The threshold values of their MSE curves are decreased by 0.3 dB and 1.2 dB, respectively, which clearly demonstrates the advantage of the proposed algorithm.
DEFF Research Database (Denmark)
Varneskov, Rasmus T.
2014-01-01
Flat-top estimators are shown to be consistent, asymptotically unbiased, and mixed Gaussian at the optimal rate of convergence, n^{1/4}. Exact bounds on lower-order terms are obtained using maximal inequalities, and these are used to derive a conservative, MSE-optimal flat-top shrinkage. Additionally, bounds…
The SME gauge sector with minimum length
Belich, H.; Louzada, H. L. C.
2017-12-01
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.
MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS
developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to...determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on
The Einstein-Hilbert gravitation with minimum length
Louzada, H. L. C.
2018-05-01
We study Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to a minimum length, with the intention of finding and estimating the corrections in this theory, and of clarifying whether or not it is possible to obtain, by means of the minimum length, a theory in D=4 which is causal and unitary and provides a massive graviton. To this end, we calculate and analyze the dispersion relations of the considered theory.
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk than the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, and is easily replicable by individual and institutional investors alike.
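The closed-form, fully invested minimum variance portfolio that underlies such studies can be sketched from a sample covariance matrix (synthetic returns below, not the Brazilian data or the GARCH estimators considered in the paper):

```python
import numpy as np

def min_variance_weights(cov):
    """Fully invested minimum variance portfolio:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Synthetic daily returns for 5 assets with different volatilities.
rng = np.random.default_rng(4)
returns = rng.standard_normal((500, 5)) * np.array([0.01, 0.02, 0.03, 0.04, 0.05])
cov = np.cov(returns, rowvar=False)

w = min_variance_weights(cov)
var_mv = float(w @ cov @ w)      # minimum variance portfolio's variance
w_eq = np.full(5, 0.2)           # equally weighted benchmark
var_eq = float(w_eq @ cov @ w_eq)
```

By construction, `var_mv` is never larger than the variance of any other fully invested portfolio under the same covariance estimate, including the equally weighted benchmark.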
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
Directory of Open Access Journals (Sweden)
Seyedtabaee Saeed
2010-01-01
This paper deals with the configuration of an algorithm to be used in a speech-passing, angle-grinder noise-canceling headset. Angle grinder noise is annoying and interrupts ordinary oral communication, which means a low-SNR noisy condition must be handled. Since variations in the angle grinder's working conditions change the noise statistics, the noise is nonstationary, with possible jumps in its power. Studies were conducted to select an appropriate algorithm. A modified version of the well-known spectral subtraction method shows superior performance over alternative methods. The noise estimate is calculated through a fast-adapting multi-band scheme. The algorithm adapts very quickly to the nonstationary noise environment while inflicting minimal musical noise and speech distortion on the processed signal. Objective and subjective measures illustrating the performance of the proposed method are presented.
Predicting VQ Performance Bound for LSF Coding
Chatterjee, Saikat; Sreenivas, T. V.
2008-01-01
For vector quantization (VQ) of speech line spectrum frequency (LSF) parameters, we experimentally determine a mapping function between the mean square error (MSE) measure and the perceptually motivated average spectral distortion (SD) measure. Using the mapping function, we estimate the minimum bits/vector required for transparent quantization of telephone-band and wide-band speech LSF parameters, respectively, as 22 bits/vector and 36 bits/vector, where the distribution of LSF vector is mod...
Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients
Gorgees, HazimMansoor; Mahdi, FatimahAssim
2018-05-01
This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other methods, since it has a smaller mean square error (MSE).
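The ordinary ridge estimator being compared can be sketched on simulated near-collinear data (a generic illustration, not the company data; the ridge constant k is an assumed value rather than one derived from the condition number):

```python
import numpy as np

def ridge_coefficients(X, y, k):
    """Ordinary ridge estimator: beta_hat = (X'X + k I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Simulated near-exact linear relationship between two regressors.
rng = np.random.default_rng(5)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)   # x2 almost equals x1
X = np.column_stack([x1, x2])
beta = np.array([1.0, 1.0])
y = X @ beta + 0.5 * rng.standard_normal(n)

b_ols = ridge_coefficients(X, y, 0.0)     # k = 0 recovers OLS
b_ridge = ridge_coefficients(X, y, 1.0)   # assumed ridge constant k = 1
mse_ols = float(np.sum((b_ols - beta) ** 2))
mse_ridge = float(np.sum((b_ridge - beta) ** 2))
```

With near-collinear columns the Gram matrix X'X is ill-conditioned, so the OLS coefficients are highly variable along the x1 − x2 direction; the ridge penalty shrinks exactly that ill-determined direction, which is why condition-number-based choices of k are natural.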
Directory of Open Access Journals (Sweden)
Kristofer Månsson
2018-04-01
This paper introduces a shrinkage estimator (Ridge DOLS) for the dynamic ordinary least squares (DOLS) cointegration estimator, which extends the model for use in the presence of multicollinearity between the explanatory variables in the cointegration vector. Both analytically and using simulation techniques, we conclude that our new Ridge DOLS approach exhibits lower mean square error (MSE) than the traditional DOLS method. Based on the MSE performance criterion, our Monte Carlo simulations therefore demonstrate that the new method outperforms DOLS under empirically relevant magnitudes of multicollinearity. Moreover, we show the advantages of this new method by more accurately estimating the environmental Kuznets curve (EKC), where income and squared income are related to carbon dioxide emissions. We also illustrate the practical use of the method by augmenting the EKC with energy consumption. In summary, regardless of whether we use analytical, simulation-based, or empirical approaches, we can consistently conclude that these types of relationships can be estimated considerably more accurately using our newly suggested method.
Estimation of minimum detectable concentration of chlorine in the blast furnace slag cement concrete
Energy Technology Data Exchange (ETDEWEB)
Naqvi, A.A., E-mail: aanaqvi@kfupm.edu.s [Department of Physics, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)]; Maslehuddin, M. [Center for Engineering Research, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)]; Garwan, M.A.; Nagadi, M.M. [Department of Physics, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)]; Al-Amoudi, O.S.B. [Department of Civil Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)]; Khateeb-ur-Rehman; Raashid, M. [Department of Physics, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)]
2011-01-01
The Prompt Gamma Neutron Activation Analysis technique was used to measure the concentration of chloride in blast furnace slag (BFS) cement concrete to assess the possibility of reinforcement corrosion. The experimental setup was optimized using Monte Carlo calculations. BFS concrete specimens containing 0.8-3.5 wt.% chloride were prepared, and the concentration of chlorine was evaluated by determining the yield of the 6.11, 6.62, 7.41, 7.79, and 8.58 MeV gamma-rays. The Minimum Detectable Concentration (MDC) of chlorine in the BFS cement concrete was estimated. The best MDC limits of chlorine in the BFS cement concrete were found to be 0.034 ± 0.011 and 0.038 ± 0.012 wt.% for the 6.11 and 6.62 MeV prompt gamma-rays, respectively. Within the statistical uncertainty, the lower bound of the measured MDC of chlorine in the BFS cement concrete meets the maximum permissible limit of 0.03 wt.% of chloride set by the American Concrete Institute.
Labson, Victor F.; Clark, Roger N.; Swayze, Gregg A.; Hoefen, Todd M.; Kokaly, Raymond F.; Livo, K. Eric; Powers, Michael H.; Plumlee, Geoffrey S.; Meeker, Gregory P.
2010-01-01
All of the calculations and results in this report are preliminary and intended for the purpose, and only for the purpose, of aiding the incident team in assessing the extent of the spilled oil for ongoing response efforts. Other applications of this report are not authorized and are not considered valid. Because of time constraints and limitations of data available to the experts, many of their estimates are approximate, are subject to revision, and certainly should not be used as the Federal Government's final values for assessing volume of the spill or its impact to the environment or to coastal communities. Each expert that contributed to this report reserves the right to alter his conclusions based upon further analysis or additional information. An estimated minimum total oil discharge was determined by calculations of oil volumes measured as of May 17, 2010. This included oil on the ocean surface measured with satellite and airborne images and with spectroscopic data (129,000 barrels to 246,000 barrels using less and more aggressive assumptions, respectively), oil skimmed off the surface (23,500 barrels from U.S. Coast Guard [USCG] estimates), oil burned off the surface (11,500 barrels from USCG estimates), dispersed subsea oil (67,000 to 114,000 barrels), and oil evaporated or dissolved (109,000 to 185,000 barrels). Sedimentation (oil captured from Mississippi River silt and deposited on the ocean bottom), biodegradation, and other processes may indicate significant oil volumes beyond our analyses, as will any subsurface volumes such as suspended tar balls or other emulsions that are not included in our estimates. The lower bounds of total measured volumes are estimated to be within the range of 340,000 to 580,000 barrels as of May 17, 2010, for an estimated average minimum discharge rate of 12,500 to 21,500 barrels per day for 27 days from April 20 to May 17, 2010.
The minimum yield in channeling
International Nuclear Information System (INIS)
Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.
2000-01-01
A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation.
Low Streamflow Forecasting using Minimum Relative Entropy
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation function in such a way that the relative entropy of the underlying process is minimized, allowing the time series to be forecasted. Different priors, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low-streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of the low-streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions obtained with Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
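The core idea of extending a series through its autocorrelation structure can be illustrated with the closely related Yule-Walker/autoregressive approach that maximum- and minimum-entropy spectral methods build on. This is a sketch on synthetic data, not the paper's exact minimum-relative-entropy derivation; the model order and series are illustrative assumptions.

```python
import numpy as np

# Fit an AR(p) model from the sample autocovariances (Yule-Walker
# equations) and forecast by recursively extending the series.
rng = np.random.default_rng(0)
n, p = 200, 4
x = np.zeros(n)
for t in range(2, n):                      # synthetic AR(2) "streamflow"
    x[t] = 0.75 * x[t-1] - 0.25 * x[t-2] + rng.normal()

xc = x - x.mean()
r = np.correlate(xc, xc, mode="full")[n-1:n+p] / n   # autocovariances r_0..r_p
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, r[1:p+1])           # Yule-Walker AR coefficients

# one-step-ahead forecasts, feeding each forecast back into the history
hist = list(xc[-p:])
forecasts = []
for _ in range(5):
    f = sum(a[k] * hist[-1 - k] for k in range(p))
    forecasts.append(f + x.mean())
    hist.append(f)
print(forecasts)
```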
Small area estimation for estimating the number of infant mortality in West Java, Indonesia
Anggreyani, Arie; Indahwati, Kurnia, Anang
2016-02-01
Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information on birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesian Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of Generalized Linear Mixed Models (GLMM). Here, the incidence of infant mortality is modeled as a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation uses the basic area-level model. Mean square error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.
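The overdispersion check that motivates the quasi-likelihood choice can be sketched with the standard Pearson dispersion statistic: under a Poisson model the Pearson statistic divided by its degrees of freedom should be near 1, and values well above 1 signal overdispersion. The counts below are synthetic, for illustration only.

```python
import numpy as np

# Synthetic overdispersed counts for 27 hypothetical districts: a
# negative binomial with mean 10, so variance exceeds the mean.
rng = np.random.default_rng(1)
mu_true = 10.0
counts = rng.negative_binomial(n=2, p=2 / (2 + mu_true), size=27)

mu_hat = counts.mean()                            # intercept-only Poisson fit
pearson = np.sum((counts - mu_hat) ** 2 / mu_hat)  # Pearson chi-square
dispersion = pearson / (len(counts) - 1)
print(f"dispersion = {dispersion:.2f}")            # >> 1 indicates overdispersion
```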
Minimum Wages and the Economic Well-Being of Single Mothers
Sabia, Joseph J.
2008-01-01
Using pooled cross-sectional data from the 1992 to 2005 March Current Population Survey (CPS), this study examines the relationship between minimum wage increases and the economic well-being of single mothers. Estimation results show that minimum wage increases were ineffective at reducing poverty among single mothers. Most working single mothers…
The impact of minimum wage adjustments on Vietnamese wage inequality
DEFF Research Database (Denmark)
Hansen, Henrik; Rand, John; Torm, Nina
Using Vietnamese Labour Force Survey data we analyse the impact of minimum wage changes on wage inequality. Minimum wages serve to reduce local wage inequality in the formal sectors by decreasing the gap between the median wages and the lower tail of the local wage distributions. In contrast, local wage inequality is increased in the informal sectors. Overall, the minimum wages decrease national wage inequality. Our estimates indicate a decrease in the wage distribution Gini coefficient of about 2 percentage points and an increase in the 10/50 wage ratio of 5-7 percentage points caused by the adjustment of the minimum wages from 2011 to 2012 that levelled the minimum wage across economic sectors.
Directory of Open Access Journals (Sweden)
Abdulmalik Shehu Yaro
2017-01-01
Full Text Available Multilateration estimates aircraft position from the Time Difference Of Arrival (TDOA) using a lateration algorithm. The Position Estimation (PE) accuracy of the lateration algorithm depends on several factors: the TDOA estimation error, the lateration algorithm approach, the number of deployed GRSs and the selection of the GRS reference used for the PE process. Using the minimum number of GRSs for 3D emitter PE, a technique based on the condition number calculation is proposed to select the suitable GRS reference pair and thereby improve the PE accuracy of the lateration algorithm. Validation of the proposed technique was performed with the GRSs in square and triangular configurations. For the selected emitter positions, the results show that the proposed technique can be used to select the suitable GRS reference pair for the PE process. A unity condition number is achieved for the GRS pair most suitable for the PE process. Monte Carlo simulation results, in comparison with a fixed-GRS-reference-pair lateration algorithm, show a reduction in PE error of at least 70% for both the square and triangular GRS configurations.
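The condition-number idea can be sketched as follows: for each candidate reference GRS, build the linearized TDOA geometry matrix and prefer the reference whose matrix is best conditioned. The square-configuration coordinates, the emitter position, and the matrix construction (differences of line-of-sight unit vectors) are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Four GRSs in a unit-square configuration (z = 0) and one emitter.
grs = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
emitter = np.array([0.4, 0.3, 0.5])

def unit(v):
    return v / np.linalg.norm(v)

def cond_for_reference(ref):
    # Rows: difference of line-of-sight unit vectors w.r.t. the reference GRS,
    # the Jacobian structure of the linearized TDOA equations.
    rows = [unit(emitter - grs[i]) - unit(emitter - grs[ref])
            for i in range(len(grs)) if i != ref]
    return np.linalg.cond(np.array(rows))

# Pick the reference with the smallest (best) condition number.
best = min(range(len(grs)), key=cond_for_reference)
print(best, cond_for_reference(best))
```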
A minimum achievable PV electrical generating cost
International Nuclear Information System (INIS)
Sabisky, E.S.
1996-01-01
The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990 $) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990 $) was estimated as a minimum module manufacturing cost/price.
Energy and environmental norms on Minimum Vital Flux
International Nuclear Information System (INIS)
Maran, S.
2008-01-01
By the end of the year the recommendations on Minimum Vital Flow will come into force, and operators of hydroelectric power plants will be required to release part of the water from their diversions in order to protect river ecosystems. In this article we discuss the major energy and environmental consequences of these rules, report some quantitative evaluations, and examine proposals for overcoming the weaknesses of the approach used to estimate the Minimum Vital Flux [it
Estimation of ocular volume from axial length.
Nagra, Manbir; Gilmartin, Bernard; Logan, Nicola S
2014-12-01
To determine which biometric parameters provide optimum predictive power for ocular volume. Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm³) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal planes, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were -2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm³) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Bar CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided an optimum R² value of 79.4% for TOV. Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR. Published by the BMJ Publishing Group Limited.
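The reported regression of TOV on AL and CR can be sketched with ordinary least squares. The data below are synthetic, generated to mimic the means and SDs quoted in the abstract; the generating coefficients are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Synthetic cohort matching the abstract's summary statistics.
rng = np.random.default_rng(2)
n = 67
AL = rng.normal(24.51, 1.47, n)          # axial length, mm
CR = rng.normal(7.75, 0.28, n)           # corneal radius, mm
# Assumed linear model for total ocular volume (mm^3) plus noise.
TOV = 700 * AL + 900 * CR - 15500 + rng.normal(0, 300, n)

# Multiple linear regression TOV ~ AL + CR via least squares.
X = np.column_stack([np.ones(n), AL, CR])
beta, *_ = np.linalg.lstsq(X, TOV, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((TOV - pred) ** 2) / np.sum((TOV - TOV.mean()) ** 2)
print(beta, f"R^2 = {r2:.3f}")
```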
Optimal and sub-optimal post-detection timing estimators for PET
International Nuclear Information System (INIS)
Hero, A.O.; Antoniadis, N.; Clinthorne, N.; Rogers, W.L.; Hutchins, G.D.
1990-01-01
In this paper the authors derive linear and non-linear approximations to the post-detection likelihood function for scintillator interaction time in nuclear particle detection systems. The likelihood function is the optimal statistic for performing detection and estimation of scintillator events and event times. The authors derive the likelihood function approximations from a statistical model for the post-detection waveform which is common in the optical communications literature and takes account of finite detector bandwidth, random gains, and thermal noise. They then present preliminary simulation results for the associated approximate maximum likelihood timing estimators which indicate that significant MSE improvements may be achieved at low post-detection signal-to-noise ratios.
Directory of Open Access Journals (Sweden)
Nazelie Kassabian
2014-06-01
Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio between the correlation distance and the Reference Station (RS) separation, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
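The simulation described above can be sketched in a few lines: true DCs follow a Gauss-Markov spatial model with correlation distance d_c, noisy measurements are taken at the reference stations, and the LMMSE filter W = C_xy C_yy^-1 is applied. The station positions, correlation distance and variances below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
pos = np.array([0., 20., 45., 80., 130.])    # station positions along a line, km
d_c, sigma2, noise2 = 60.0, 1.0, 0.2         # corr. distance, signal/noise variances

D = np.abs(pos[:, None] - pos[None, :])
C_xx = sigma2 * np.exp(-D / d_c)             # Gauss-Markov (exponential) covariance
C_yy = C_xx + noise2 * np.eye(len(pos))      # measurement covariance
W = C_xx @ np.linalg.inv(C_yy)               # LMMSE filter (here C_xy = C_xx)

x = np.linalg.cholesky(C_xx) @ rng.normal(size=len(pos))  # true DCs
y = x + np.sqrt(noise2) * rng.normal(size=len(pos))       # noisy measurements
x_hat = W @ y                                              # LMMSE estimate

post_mse = np.trace(C_xx - W @ C_xx) / len(pos)  # theoretical LMMSE error
print(post_mse, np.mean((x_hat - x) ** 2))
```

With a correlation distance comfortably larger than the station spacing, the theoretical LMMSE error stays below the raw measurement noise variance, mirroring the paper's "large enough ratio" regime.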
[Rapidly identify oligosaccharides in Morinda officinalis by UPLC-Q-TOF-MSE].
Hao, Qing-Xiu; Kang, Li-Ping; Zhu, Shou-Dong; Yu, Yi; Hu, Ming-Hua; Ma, Fang-Li; Zhou, Jie; Guo, Lan-Ping
2018-03-01
In this paper, an approach was applied for separation and identification of oligosaccharides in Morinda officinalis How by ultra performance liquid chromatography/quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS) with collision energy. The separation was carried out on an ACQUITY UPLC BEH Amide C₁₈ column (2.1 mm × 100 mm, 1.7 μm) with gradient elution using acetonitrile (A) and water (B) containing 0.1% ammonia as mobile phase at a flow rate of 0.2 mL·min⁻¹. The column temperature was maintained at 40 °C. The accurate mass and characteristic fragment ion information were acquired by MSE in ESI negative mode at low and high collision energies. The chemical structures and formulas of the oligosaccharides were obtained and identified with the UNIFI and Masslynx 4.1 software based on accurate mass, fragment ions, neutral losses, mass error, reference substances, isotope information, fragment intensities, and retention time. A total of 19 inulin oligosaccharide structures were identified, including D(+)-sucrose, 1-kestose, nystose, 1F-fructofuranosyl nystose and other inulin oligosaccharides (DP 5-18). This research provides important information about the inulin oligosaccharides in M. officinalis. The results provide a scientific basis for innovative utilization of M. officinalis. Copyright© by the Chinese Pharmaceutical Association.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Energy Technology Data Exchange (ETDEWEB)
Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
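The lower-threshold idea can be sketched by fitting a Generalized Extreme Value (GEV) distribution to daily minimum response times (via the standard trick of negating minima so they become block maxima) and taking a low quantile as the alarm threshold. The data below are synthetic, and the report's model is additionally non-stationary and seasonal, which this sketch omits.

```python
import numpy as np
from scipy.stats import genextreme

# A year of synthetic daily-minimum response times (ms), illustrative only.
rng = np.random.default_rng(4)
daily_min = rng.gamma(shape=9.0, scale=10.0, size=365)

# Fit a GEV to the negated minima, then map the 99% upper quantile of the
# negated data back to a 1%-level lower threshold on the original scale.
c, loc, scale = genextreme.fit(-daily_min)
lower = -genextreme.ppf(0.99, c, loc, scale)
print(f"lower threshold ~ {lower:.1f} ms")
```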
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity.
Method of statistical estimation of temperature minimums in binary systems
International Nuclear Information System (INIS)
Mireev, V.A.; Safonov, V.V.
1985-01-01
On the basis of statistical processing of literature data, a technique for evaluating temperature minima on liquidus curves in binary systems is developed, taking common-ion chloride systems as an example. The systems are formed by 48 chlorides of 45 chemical elements, including alkali, alkaline earth, rare earth and transition metals as well as Cd, In and Th. It is shown that the calculation error in determining minimum melting points depends on the topology of the phase diagram. A comparison of calculated and experimental data for several previously unstudied systems is given.
The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.
Bullinger, Lindsey Rose
2017-03-01
To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
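The difference-in-differences logic behind these estimates can be shown in miniature: compare the change in birth rates in states that raised the minimum wage ("treated") with the change in states that did not ("control"). The rates below are invented for illustration and are not the study's data.

```python
# Toy difference-in-differences (DiD) calculation.
# Adolescent birth rates per 1,000, fabricated for illustration.
treated_before, treated_after = 30.0, 27.9   # states that raised the wage
control_before, control_after = 31.0, 30.4   # states that did not

# DiD nets out the common time trend shared by both groups.
did = (treated_after - treated_before) - (control_after - control_before)
print(f"DiD estimate: {did:.2f} births per 1,000")   # -1.50
```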
Abuzaid, Abdulrahman I.
2014-09-01
Efficient receiver designs for cooperative communication systems are becoming increasingly important. In previous work, cooperative networks communicated with the use of L relays. As the receiver is constrained, it can only process U out of L relays. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this paper, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal U_opt. Furthermore, this receiver provides the freedom to choose U ≤ U_opt for each frame depending upon the tolerable difference allowed for the mean square error (MSE). Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings without affecting the BER performance of the system. Furthermore, in this paper the effect of channel estimation errors on the MSE performance of the amplify-and-forward (AF) cooperative relaying system is investigated.
Minimum variance estimation of yield parameters of rubber tree
African Journals Online (AJOL)
2013-03-01
It is our opinion that the Kalman filter is a robust estimator of the ... Keywords: Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...
Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina
2012-07-10
We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarter of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. In order to find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5M aqueous iodine for 1 day; two for 7 days; three was tagged with a radiopaque dye; four was left unstained (control). Pupae stained for 7d in iodine resulted in the best contrast micro-CT scans. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than using external morphological characters alone. Age specific developmental characters are described. The technique could be used as a measure to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stages of insect evidence collected. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The minimum work required for air conditioning process
International Nuclear Information System (INIS)
Alhazmy, Majed M.
2006-01-01
This paper presents a theoretical analysis based on the second law of thermodynamics to estimate the minimum work required for the air conditioning process. The air conditioning process for hot and humid climates involves reducing air temperature and humidity. In the present analysis the inlet state is the state of the environment, which has also been chosen as the dead state. The final state is the human thermal comfort condition, fixed at 20 °C dry bulb temperature and 60% relative humidity. The general air conditioning process is represented by an equivalent path consisting of an isothermal dehumidification followed by sensible cooling. An exergy analysis is performed on each process separately. Dehumidification is analyzed as a separation process of an ideal mixture of air and water vapor. The variation of the minimum work required for the air conditioning process with the ambient conditions is estimated, and the ratio of the work needed for dehumidification to the total work needed to perform the entire process is presented. The effect of small variations in the final conditions on the minimum required work is evaluated. Tolerating a warmer or more humid final condition can be an easy way to reduce energy consumption during critical load periods.
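The ideal-mixture separation bound that underlies the dehumidification analysis can be sketched numerically: the minimum (reversible, isothermal) work to fully separate an ideal air-water-vapour mixture equals the Gibbs energy of mixing it releases, W_min = -R T0 Σ x_i ln x_i per mole of mixture. The dead-state temperature and vapour mole fraction below are illustrative assumptions, not the paper's values, and real dehumidification to a finite final humidity requires less than this complete-separation bound.

```python
import math

R = 8.314      # universal gas constant, J/(mol K)
T0 = 308.15    # K, assumed hot ambient dead state (35 C)
x_v = 0.03     # assumed mole fraction of water vapour

# Gibbs energy of mixing per mole of humid air (complete separation bound).
w_min = -R * T0 * (x_v * math.log(x_v) + (1 - x_v) * math.log(1 - x_v))
print(f"{w_min:.1f} J per mole of humid air")
```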
Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.
Mincy, Ronald B.
Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage
Cadena, Brian C.
2014-01-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288
Minimum mass of moderator required for criticality of homogeneous low-enriched uranium systems
Energy Technology Data Exchange (ETDEWEB)
Jordan, W.C.; Turner, J.C.
1992-12-01
A parametric calculational analysis has been performed in order to estimate the minimum mass of moderator required for criticality of homogeneous low-enriched uranium systems. The analysis was performed using a version of the SCALE-4.0 code system and the 27-group ENDF/B-IV cross-section library. Water-moderated uranyl fluoride (UO₂F₂ and H₂O) and hydrofluoric-acid-moderated uranium hexafluoride (UF₆ and HF) systems were considered in the analysis over enrichments of 1.4 to 5 wt % ²³⁵U. Estimates of the minimum critical volume, minimum critical mass of uranium, and the minimum mass of moderator required for criticality are presented. There was significant disagreement between the values generated in this study and those of a similar undocumented study performed in 1983 using ANISN and the Knight-modified Hansen-Roach cross sections. An investigation into the cause of the disagreement was made, and the results are presented.
International Nuclear Information System (INIS)
Campo, Antonio; Papari, Mohammad M.; Abu-Nada, Eiyad
2011-01-01
This paper addresses a detailed procedure for the accurate estimation of low Prandtl numbers of selected binary gas mixtures. In this context, helium (He) is the light primary gas and the heavier secondary gases are nitrogen (N₂), oxygen (O₂), xenon (Xe), carbon dioxide (CO₂), methane (CH₄), tetrafluoromethane or carbon tetrafluoride (CF₄) and sulfur hexafluoride (SF₆). The three thermophysical properties forming the Prandtl number of binary gas mixtures Pr_mix are the heat capacity at constant pressure C_p,mix (thermodynamic property), the viscosity η_mix (transport property) and the thermal conductivity λ_mix (transport property), which in general depend on the temperature T and the molar gas composition w. The precise formulas for the calculation of the trio C_p,mix, η_mix and λ_mix are gathered from various dependable sources. When the set of computed Pr_mix values for the seven binary gas mixtures He + N₂, He + O₂, He + Xe, He + CO₂, He + CH₄, He + CF₄ and He + SF₆ at atmospheric conditions T = 300 K, p = 1 atm is plotted against the molar gas composition w on the w-domain [0,1], the family of Pr_mix(w) curves exhibits distinctive concave shapes. All Pr_mix(w) curves start at Pr ∼ 0.7 at w = 0 (associated with the light primary He), descend to a unique minimum, and thereafter ascend back to Pr ∼ 0.7 at the terminal point w = 1 (connected to the heavier secondary gases). Overall, it was found that among the seven binary gas mixtures tested, the He + Xe gas mixture delivered the absolute minimum Prandtl number Pr_mix,min = 0.12 at the optimal molar gas composition w_opt = 0.975. Highlights: Accurate estimation of low Prandtl numbers for some helium-based binary gas mixtures. The thermophysical properties of the gases are calculated with precise formulas. The absolute minimum Prandtl number is delivered by the He + Xe binary gas mixture. Application to experimental thermoacoustic
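The Pr_mix(w) scan can be sketched for He + Xe. The molar heat capacity of a monatomic gas is constant, so the mass-based cp falls steeply as heavy Xe is added, while simple mole-fraction mixing is assumed here for viscosity and conductivity. These mixing rules are crude placeholders compared with the precise kinetic-theory formulas the paper uses, so the curve is qualitative only: it reproduces a deep interior dip well below 0.7, but not the paper's exact minimum location or value.

```python
import numpy as np

# w is the Xe molar fraction; w = 0 is pure He, w = 1 is pure Xe.
w = np.linspace(0.001, 0.999, 999)

M = 0.004003 * (1 - w) + 0.13129 * w        # kg/mol, He and Xe molar masses
cp = 20.79 / M                               # J/(kg K): monatomic Cp = 5R/2 per mole
eta = 1.99e-5 * (1 - w) + 2.30e-5 * w        # Pa s, placeholder linear mixing
lam = 0.152 * (1 - w) + 0.0057 * w           # W/(m K), placeholder linear mixing

Pr = cp * eta / lam                          # Pr_mix(w) on the grid
i = int(np.argmin(Pr))
print(f"min Pr ~ {Pr[i]:.3f} at w ~ {w[i]:.3f}")
```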
Calculation of the minimum critical mass of fissile nuclides
International Nuclear Information System (INIS)
Wright, R.Q.; Hopper, Calvin Mitchell
2008-01-01
The OB-1 method for the calculation of the minimum critical mass of fissile actinides in metal/water systems was described in a previous paper. A fit to the calculated minimum critical mass data using the extended criticality parameter is the basis of the revised method. The solution density (grams/liter) for the minimum critical mass is also obtained by a fit to calculated values. Input to the calculation consists of the Maxwellian averaged fission and absorption cross sections and the thermal values of nubar. The revised method gives more accurate values than the original method does for both the minimum critical mass and the solution densities. The OB-1 method has been extended to calculate the uncertainties in the minimum critical mass for 12 different fissile nuclides. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the minimum critical mass, either in percent or grams. Results have been obtained for U-233, U-235, Pu-236, Pu-239, Pu-241, Am-242m, Cm-243, Cm-245, Cf-249, Cf-251, Cf-253, and Es-254. Eight of these 12 nuclides are included in the ANS-8.15 standard.
Tsao, Tsu-Yu; Konty, Kevin J; Van Wye, Gretchen; Barbot, Oxiris; Hadler, James L; Linos, Natalia; Bassett, Mary T
2016-06-01
To assess potential reductions in premature mortality that could have been achieved in 2008 to 2012 if the minimum wage had been $15 per hour in New York City. Using the 2008 to 2012 American Community Survey, we performed simulations to assess how the proportion of low-income residents in each neighborhood might change with a hypothetical $15 minimum wage under alternative assumptions of labor market dynamics. We developed an ecological model of premature death to determine the differences between the levels of premature mortality as predicted by the actual proportions of low-income residents in 2008 to 2012 and the levels predicted by the proportions of low-income residents under a hypothetical $15 minimum wage. A $15 minimum wage could have averted 2800 to 5500 premature deaths between 2008 and 2012 in New York City, representing 4% to 8% of total premature deaths in that period. Most of these avertable deaths would be realized in lower-income communities, in which residents are predominantly people of color. A higher minimum wage may have substantial positive effects on health and should be considered as an instrument to address health disparities.
The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight.
Komro, Kelli A; Livingston, Melvin D; Markowitz, Sara; Wagenaar, Alexander C
2016-08-01
To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. We estimated the effects of state-level minimum wage laws on rates of low birth weight and postneonatal mortality using a difference-in-differences approach. A minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year.
UBV photometry of dwarf novae in the brightness minimum
International Nuclear Information System (INIS)
Voloshina, I.B.; Lyutyj, V.M.
1983-01-01
Photoelectric single-night observations of the dwarf nova SS Cyg at minimum light give evidence for eclipses in this system at the moments of conjunction. The orbital inclination of the system is estimated to be i ≈ 70°. The components of the system (a white dwarf and a red dwarf) are low-mass, low-luminosity objects. Since the optical luminosity of a dwarf nova at minimum light is dominated by the accretion disk and the hot spot at its periphery, where the gas stream from the non-degenerate component impacts the disk, the observed eclipses should be eclipses of the disk and hot spot. The white dwarf contributes substantially to the luminosity at minimum light, but its eclipse is possible only at i ≈ 90°.
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
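One way to sketch such a map is as an equality-constrained quadratic program: minimize the stored energy 0.5 JᵀLJ subject to linear field constraints AJ = b, solved here through the KKT system. The inductance matrix and constraint map below are random stand-ins, not a real electromagnetic model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8          # candidate coil current loops
m = 2          # field constraints (e.g. target field at sample points)

# Symmetric positive-definite "inductance" matrix (illustrative stand-in).
M = rng.standard_normal((n, n))
L = M @ M.T + n * np.eye(n)
A = rng.standard_normal((m, n))   # linear map: currents -> field samples
b = np.array([1.0, 0.5])          # required field values

# KKT system for: min 0.5 J^T L J  subject to  A J = b
K = np.block([[L, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
J = sol[:n]                       # minimum-stored-energy current map

energy = 0.5 * J @ L @ J
```

With a physical mutual-inductance matrix and field targets over the imaging region, the same linear solve yields the globally optimal map, since the problem is convex.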
Minimum Wages and School Enrollment of Teenagers: A Look at the 1990's.
Chaplin, Duncan D.; Turner, Mark D.; Pape, Andreas D.
2003-01-01
Estimates the effects of higher minimum wages on school enrollment using the Common Core of Data. Controlling for local labor market conditions and state and year fixed effects, finds some evidence that higher minimum wages reduce teen school enrollment in states where students drop out before age 18. (23 references) (Author/PKP)
Minimum area requirements for an at-risk butterfly based on movement and demography.
Brown, Leone M; Crone, Elizabeth E
2016-02-01
Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
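The diffusion argument parallels the classical 1-D KISS critical patch size, L* = π√(D/r): below this length, diffusive loss through the edges outpaces intrinsic growth r. A sketch with illustrative numbers (the paper's own model is two-dimensional and demographically richer):

```python
import math

def kiss_critical_length(D, r):
    """Classical 1-D KISS critical patch size: the smallest habitat
    length at which growth (rate r) balances diffusive edge loss
    (diffusion coefficient D)."""
    return math.pi * math.sqrt(D / r)

# Illustrative values; units must be consistent (D in m^2/day, r in 1/day).
L_crit = kiss_critical_length(D=300.0, r=0.05)   # metres
```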
Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.
2018-03-01
Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time-series land surface temperature (LST) data from Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs with Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE) from 10-fold cross-validation. The best schemes produced R2 of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R2 of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
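The modelling workflow above (random forest regression scored by 10-fold cross-validated R² and RMSE) can be sketched with scikit-learn on synthetic stand-ins for the LST and auxiliary predictors; the data below are simulated, not the MODIS inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(42)
n = 300
# Synthetic stand-ins for the predictors named in the study
# (LST composites, elevation, NDVI, etc.).
X = rng.standard_normal((n, 8))
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 1, n)  # pseudo T_max

model = RandomForestRegressor(n_estimators=100, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
rmse = np.sqrt(-cross_val_score(model, X, y, cv=cv,
                                scoring="neg_mean_squared_error"))
```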
Minimum viable populations: Is there a 'magic number' for conservation practitioners?
Curtis H. Flather; Gregory D. Hayward; Steven R. Beissinger; Philip A. Stephens
2011-01-01
Establishing species conservation priorities and recovery goals is often enhanced by extinction risk estimates. The need to set goals, even in data-deficient situations, has prompted researchers to ask whether general guidelines could replace individual estimates of extinction risk. To inform conservation policy, recent studies have revived the concept of the minimum...
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
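For reference, the classic two-type change-in-ratio estimator (the Paulik-Robson form; the survival-based variant developed in the paper is more elaborate) recovers pre-removal abundance from the shift in a type's proportion:

```python
def cir_abundance(p1, p2, Rx, R):
    """Classic two-type change-in-ratio estimator: pre-removal population
    size inferred from the change in the proportion of type x, given
    Rx removals of type x out of R total removals."""
    return (Rx - R * p2) / (p1 - p2)

# Constructed check: 400 of 1000 animals are type x (p1 = 0.4); removing
# 150 x and 50 y leaves 250 of 800 (p2 = 0.3125).
N1 = cir_abundance(p1=0.4, p2=0.3125, Rx=150, R=200)
```

On this constructed population the estimator returns the true initial size of 1000 exactly; with field data, the proportions are sampled and the estimate inherits their sampling error.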
Distributive Effects of the Minimum Wage in the Labor Market of Ceará
Directory of Open Access Journals (Sweden)
Joyciane Coelho Vasconcelos
2015-11-01
Full Text Available This paper analyses the contribution of the minimum wage (MW) to the de-concentration of labor-market income in Ceará over the period 2002-2012. The research is based on the National Household Sample Survey (PNAD) of the Brazilian Institute of Geography and Statistics (IBGE). The simulation methodology proposed by DiNardo, Fortin and Lemieux (1996), based on estimated counterfactual kernel density functions, was used, with simulations performed separately for females and males. The decompositions revealed that the minimum wage, the degree of formalization and personal attributes all had de-concentrating impacts for both female and male workers; for women, however, the de-concentrating effect of the minimum wage is more intense than for men. In summary, the simulations indicate the importance of the minimum wage in reducing the dispersion of labor income in recent years.
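The DiNardo-Fortin-Lemieux idea can be sketched as follows: reweight one year's sample so the distribution of an attribute matches another year's, then compare actual and counterfactual kernel densities. All samples and attribute shares below are synthetic illustrations, not PNAD data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Stand-in log-wage samples for two years (synthetic, for illustration).
w2002 = rng.normal(1.0, 0.6, 2000)
w2012 = rng.normal(1.4, 0.4, 2000)

# DFL idea: reweight the 2012 sample so that the distribution of an
# attribute (here a binary "formal sector" flag) matches its 2002 share,
# then compare kernel densities.
formal_2012 = rng.random(2000) < 0.7
p_2002, p_2012 = 0.5, formal_2012.mean()          # attribute shares
weights = np.where(formal_2012, p_2002 / p_2012,
                   (1 - p_2002) / (1 - p_2012))

actual = gaussian_kde(w2012)
counterfactual = gaussian_kde(w2012, weights=weights)
grid = np.linspace(-1, 3, 200)
gap = counterfactual(grid) - actual(grid)  # attribute's distributional effect
```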
International Nuclear Information System (INIS)
Galindo Mondragon, A.
2014-01-01
This document reflects current practice for the design of MSE walls using partial coefficients. A detailed comparison of three of the most widely applied methodologies for the design of this type of structure has been carried out (Galindo, 2012). The study analyzed almost all the limit states involved in external and internal stability analyses. The methodologies under study are FHWA NHI-10-024 (2009), BS 8006 (2010) and EBGEO (2010), used in the United States, Great Britain and Germany, respectively. To complement the analysis, the results of two examples developed with the three methodologies are presented, showing a tendency toward more conservative wall designs with EBGEO and BS 8006 than with FHWA. (Author)
Diagnosing MJO Destabilization and Propagation with the Moisture and MSE Budgets
Maloney, Eric; Wolding, Brandon
2015-04-01
Novel diagnostics obtained as an extension of empirical orthogonal function analysis are used as a compositing basis to gain insight into MJO dynamics through examination of reanalysis moisture and moist static energy budgets. The net effect of vertical moisture advection and cloud processes was found to be a modest positive feedback to column moisture anomalies during both enhanced and suppressed phases of the MJO. This positive feedback is regionally strengthened by anomalous surface fluxes of latent heat. The modulation of horizontal synoptic-scale eddy mixing acts as a negative feedback to column moisture anomalies, while anomalous winds acting against the mean-state moisture gradient aid in eastward propagation. These processes act in a systematic fashion across the Indian Ocean and oceanic regions of the Maritime Continent. The ability to approximately close the MSE budget plays an important role in constraining the moisture budget, whose residual is several times larger than the total and horizontal advective moisture tendencies. Comparison with TRMM precipitation anomalies suggests that the moisture budget residual results from an underestimation by ERAi of variations in both total precipitation and vertical moisture advection associated with the MJO. The results of this study support the concept of the MJO as a moisture mode. This analysis is extended to examine the impact of boundary layer convergence driven by MJO SST anomalies on the vertically integrated moisture budget. Results from a coupled version of the SP-CAM suggest that SST-driven moisture convergence anomalies are of sufficient amplitude to be important for MJO propagation and destabilization, and may help explain why coupled models produce better simulations of the MJO than uncoupled models.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximal likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that model threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
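The core shrinkage idea, pulling individual MLEs toward a regional effect in proportion to their sampling noise, can be sketched with a simple empirical-Bayes estimator. This is a simplification of MEA, not the authors' model, and the effect estimates below are made up.

```python
import numpy as np

def shrink_to_region(estimates, se):
    """Hierarchical-Bayes-style shrinkage (simplified sketch): pull each
    MLE toward the regional mean by a factor driven by its sampling
    variance relative to the between-variant variance tau^2."""
    regional_mean = estimates.mean()
    # Method-of-moments estimate of the between-variant variance.
    tau2 = max(estimates.var(ddof=1) - np.mean(se ** 2), 0.0)
    shrink = tau2 / (tau2 + se ** 2)          # 0 means full shrinkage
    return regional_mean + shrink * (estimates - regional_mean)

mle = np.array([0.9, 0.1, -0.2, 0.05, 1.4])   # per-variant effect MLEs
se = np.array([0.5, 0.3, 0.3, 0.3, 0.6])      # their standard errors
adjusted = shrink_to_region(mle, se)
```

Extreme, noisy estimates (like the last one here) are pulled hardest toward the regional mean, which is what counteracts selection-driven overestimation in a hypothesis-generating scan.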
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
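The cost-effectiveness test behind the CEP scenario reduces, per appliance class, to a consumer net present value: discounted bill savings minus the incremental purchase cost. A sketch with hypothetical numbers (this is not the BUENAS code):

```python
def policy_npv(incremental_cost, annual_savings, lifetime, discount_rate):
    """Consumer NPV of a MEPS level: discounted energy-bill savings over
    the appliance lifetime minus the upfront incremental cost."""
    pv_savings = sum(annual_savings / (1 + discount_rate) ** t
                     for t in range(1, lifetime + 1))
    return pv_savings - incremental_cost

# Hypothetical refrigerator standard: $25 extra cost, $6/yr saved, 12 yr life.
npv = policy_npv(incremental_cost=25.0, annual_savings=6.0,
                 lifetime=12, discount_rate=0.05)
```

Raising the efficiency target typically increases both terms; the cost-effective potential level is the most stringent one for which the NPV remains positive.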
The Effects of Minimum Wage Throughout the Wage Distribution in Indonesia
Directory of Open Access Journals (Sweden)
Sri Gusvina Dewi
2018-03-01
Full Text Available The global financial crisis in 2007, followed by Indonesia's largest labor demonstration in 2013, caused turmoil in the Indonesian labor market. This paper examines the effect of the minimum wage on the wage distribution in 2007 and 2014, and how the minimum wage increase in 2014 affected the distribution of wage differences between 2007 and 2014. The study employs the recentered influence function (RIF) regression method to estimate the wage function using unconditional quantile regression, and the Oaxaca-Blinder decomposition method to measure the effect of the 2014 minimum wage increase on the distribution of wage differences. Using balanced panel data from the Indonesian Family Life Survey (IFLS), we find that the minimum wage mitigated wage disparity in both 2007 and 2014. The minimum wage policy in 2014 led to an increase in the wage difference between 2007 and 2014, with the largest wage difference in the middle of the distribution. DOI: 10.15408/sjie.v7i2.6125
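The RIF regression step rests on the recentered influence function of a quantile, RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau), which averages back to the quantile itself and can therefore be regressed on covariates. A sketch on synthetic wages (not IFLS data):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
log_wage = rng.normal(1.2, 0.5, 5000)   # synthetic stand-in for wage data

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th quantile:
    RIF(y; q) = q + (tau - 1{y <= q}) / f(q)."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]          # density estimate at the quantile
    return q + (tau - (y <= q)) / f_q

rif = rif_quantile(log_wage, tau=0.5)
# Key property: the RIF averages back to the quantile, so an OLS of RIF
# on covariates (unconditional quantile regression) is well defined.
```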
Minimum miscibility pressure estimation for a CO{sub 2}/n-decane system in porous media by X-ray CT
Energy Technology Data Exchange (ETDEWEB)
Liu, Yu; Jiang, Lanlan; Tang, Lingyue; Song, Yongchen; Zhao, Jiafei; Zhang, Yi; Wang, Dayong; Yang, Mingjun [Dalian University of Technology, Key Laboratory of Ocean Energy Utilization and Energy Conservation of Ministry of Education, Dalian (China)
2015-07-15
Accurate determination of gas-fluid miscibility conditions is important to optimize the displacement efficiency during CO{sub 2}-enhanced oil recovery. This paper presents a new technique to investigate the phase behavior and to estimate the minimum miscibility pressure (MMP) of a CO{sub 2}/n-decane system using an X-ray computerized tomography (CT) scanner. CT scans of the CO{sub 2}/n-decane system are taken at various pressures during the experiments. The image intensity values taken from the CT images have a linear relationship with the densities of the measured objects; therefore, we can estimate the miscible point of CO{sub 2} and n-decane because the difference between the intensity values for each phase decays to zero as the pressure increases toward the MMP. This paper provides experimental evidence for the validity of the new CT method by comparing the results with previous studies and presents an application of the method to investigate the MMP of the CO{sub 2}/n-decane system in porous media. Additionally, the influence of porous media on the equilibrium state when the CO{sub 2}/n-decane system is close to miscibility is discussed. (orig.)
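The MMP extraction step can be sketched as a zero-crossing extrapolation: fit the decay of the inter-phase CT intensity difference against pressure and solve for the pressure at which the contrast vanishes. The intensity values below are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical CT intensity differences between the CO2-rich and
# n-decane-rich phases at increasing pressures (arbitrary units).
pressure = np.array([6.0, 7.0, 8.0, 9.0, 10.0])     # MPa
delta_I = np.array([250., 190., 130., 70., 10.])    # decays toward zero

# Linear fit; the pressure where the phase contrast vanishes is the
# MMP estimate under the vanishing-contrast criterion.
slope, intercept = np.polyfit(pressure, delta_I, 1)
mmp = -intercept / slope
```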
The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.
Yuen, Terence
2003-01-01
Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)
Directory of Open Access Journals (Sweden)
Mardawia M Panrereng
2015-06-01
Full Text Available In recent years, underwater acoustic communication systems have been developed by many researchers, and the scale of the challenges involved has drawn growing interest to the field. The underwater channel is a difficult communication medium because of attenuation, absorption, and multipath caused by the constant motion of water waves; in shallow water, multipath arises from reflections off the surface and the seabed. The need for fast data transmission over limited bandwidth makes Orthogonal Frequency Division Multiplexing (OFDM) a solution for high-rate communication, here with Binary Phase-Shift Keying (BPSK) modulation. Channel estimation characterizes the impulse response of the propagation channel by transmitting pilot symbols. With Least Squares (LS) channel estimation, the resulting Mean Square Error (MSE) tends to be larger than with Minimum Mean Square Error (MMSE) channel estimation. In terms of Bit Error Rate (BER), however, the two channel estimation methods show no significant difference, differing by about one SNR point for each estimation method used.
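The LS versus MMSE comparison can be sketched per pilot subcarrier: with unit-modulus BPSK pilots and independent Rayleigh taps, the MMSE estimator reduces to a scalar shrinkage of the LS estimate. The system parameters below are illustrative, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64                                   # pilot subcarriers
trials = 2000
sigma_h2, sigma_n2 = 1.0, 0.5            # channel and noise powers

mse_ls = mse_mmse = 0.0
for _ in range(trials):
    h = np.sqrt(sigma_h2 / 2) * (rng.standard_normal(N)
                                 + 1j * rng.standard_normal(N))
    x = rng.choice([-1.0, 1.0], N)       # BPSK pilots, |x| = 1
    y = h * x + np.sqrt(sigma_n2 / 2) * (rng.standard_normal(N)
                                         + 1j * rng.standard_normal(N))
    h_ls = y / x                         # least-squares estimate
    h_mmse = sigma_h2 / (sigma_h2 + sigma_n2) * h_ls  # scalar MMSE shrinkage
    mse_ls += np.mean(np.abs(h_ls - h) ** 2) / trials
    mse_mmse += np.mean(np.abs(h_mmse - h) ** 2) / trials
```

With these powers, the simulated MSEs approach the theoretical values sigma_n2 for LS and sigma_h2*sigma_n2/(sigma_h2+sigma_n2) for MMSE, reproducing the ordering reported in the abstract.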
Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro
2013-01-01
This paper proposes a novel hypnosis control method using the Auditory Evoked Potential Index (aepEX) as a hypnosis index. To avoid side effects, it is desirable to reduce the amount of anesthetic drug administered during surgery, and many hypnosis control systems have been studied for this purpose. Most use the Bispectral Index (BIS), another hypnosis index, but BIS depends on the anesthetic drug used and changes non-smoothly near certain values. By contrast, aepEX distinguishes clearly between patient consciousness and unconsciousness and is independent of the anesthetic drug. The proposed method consists of two elements: estimating the minimum effect-site concentration that maintains appropriate hypnosis, and adjusting the infusion rate of the anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated using the properties of aepEX pharmacodynamics, and the propofol infusion rate is adjusted so that the effect-site concentration of propofol stays near, and always above, this minimum. Simulation results show that the minimum concentration can be estimated appropriately and that the proposed method maintains adequate hypnosis while reducing the total amount of propofol infused.
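A heavily simplified sketch of the second element: a one-compartment plasma model with first-order effect-site equilibration, plus a crude two-level infusion rule that keeps the effect-site concentration above an estimated minimum. All constants are illustrative, and the paper's model predictive controller and aepEX-based estimation of the minimum are far more sophisticated than this rule.

```python
# One-compartment plasma model (elimination k10, volume V) with
# first-order effect-site equilibration (rate ke0); illustrative
# constants in consistent arbitrary units.
dt, T = 1.0, 600                  # step (s), horizon (steps)
k10, ke0, V = 0.005, 0.008, 20.0
Ce_min = 2.0                      # estimated minimum effect-site conc.

Cp = Ce = 0.0
history = []
for t in range(T):
    infusion = 8.0 if Ce < Ce_min else 1.0      # crude two-level rule
    Cp += dt * (infusion / V - k10 * Cp)        # plasma compartment
    Ce += dt * ke0 * (Cp - Ce)                  # effect site lags plasma
    history.append(Ce)
```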
A New Method for Multisensor Data Fusion Based on Wavelet Transform in a Chemical Plant
Directory of Open Access Journals (Sweden)
Karim Salahshoor
2014-07-01
Full Text Available This paper presents a new multi-sensor data fusion method based on the combination of the wavelet transform (WT) and the extended Kalman filter (EKF). Input data are first filtered by a wavelet transform via Daubechies "db4" wavelet functions, and the filtered data are then fused based on variance weights in terms of minimum mean square error. The fused data are finally treated by an extended Kalman filter for the final state estimation. Recent data are recursively utilized to apply the wavelet transform and extract the variance of the updated data, which makes the method suitable for both static and dynamic systems corrupted by noisy environments. The method performs well in state estimation in comparison with alternative algorithms. A three-tank benchmark system has been adopted to comparatively demonstrate the performance merits of the method against a known algorithm in terms of signal-to-noise ratio (SNR) and mean square error (MSE) criteria.
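The variance-weight fusion step has a closed form for independent, unbiased sensor readings: weighting each by its inverse noise variance minimizes the mean square error of the fused value, and the fused variance is the harmonic combination. A sketch with illustrative readings:

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-MSE linear fusion of independent unbiased sensor readings:
    weights inversely proportional to each sensor's noise variance."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)          # normalized inverse-variance weights
    fused_var = 1.0 / np.sum(1.0 / v)        # always below the best sensor's
    return float(np.dot(w, estimates)), fused_var

value, var = fuse([10.2, 9.8, 10.5], [0.5, 0.2, 1.0])
```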
Energy Technology Data Exchange (ETDEWEB)
Azam, Sikander; Khan, Saleem Ayaz; Khan, Wilayat [New Technologies – Research Center, University of West Bohemia, Univerzitni 8, 306 14 Pilsen (Czech Republic); Muhammad, Saleh; Udin, Haleem [Materials Modeling Lab, Department of Physics, Hazara University, Mansehra (Pakistan); Murtaza, G., E-mail: murtaza@icp.edu.pk [Materials Modeling Laboratory, Department of Physics, Islamia College University, Peshawar (Pakistan); Khenata, R., E-mail: khenata_rabah@yahoo.fr [Laboratoire de Physique Quantique et de Modélisation Mathématique (LPQ3M), Département de Technologie, Université de Mascara, Mascara 29000 (Algeria); Shah, Fahad Ali [Materials Modeling Lab, Department of Physics, Hazara University, Mansehra (Pakistan); Minar, Jan [New Technologies – Research Center, University of West Bohemia, Univerzitni 8, 306 14 Pilsen (Czech Republic); Ahmed, W.K. [ERU, College of Engineering, United Arab Emirates University, Al Ain (United Arab Emirates)
2015-09-25
Highlights: • The compounds are studied by the FP-LAPW method within the LDA, GGA and EV-GGA approximations. • All the compounds show an indirect band gap. • The bonding nature is mixed covalent and ionic. • High absorption peaks and reflectivity ensure their utility in optoelectronic devices. - Abstract: The bonding nature as well as the electronic band structure, electronic charge density and optical properties of KBaMSe{sub 3} (M = As, Sb) compounds have been calculated using a full-potential augmented plane wave (FP-LAPW) method within density functional theory. The exchange–correlation potential was handled with the LDA and PBE-GGA approximations. Moreover, the Engel–Vosko generalized gradient approximation (EV-GGA) and the modified Becke–Johnson potential (mBJ) were also applied to improve the electronic band structure calculations. The band structures show that the KBaAsSe{sub 3}/KBaSbSe{sub 3} compounds have indirect band gaps of 2.08/2.10 eV, in close agreement with the experimental data. The bonding nature has been studied using electronic charge density (ECD) contours in the (1 0 1) crystallographic plane. The As/Sb–Se atoms form strong covalent bonds, the Ba–Se atoms form weak covalent bonds, and ionic bonding is mainly found between the K and Ba atoms. Moreover, the complex dielectric function, absorption coefficient, refractive index, energy-loss spectrum and reflectivity have been estimated. From the reflectivity spectra, we found that the KBaAsSe{sub 3} compound shows greater reflectivity than KBaSbSe{sub 3}, which means that KBaAsSe{sub 3} can be used as a shielding material in the visible and also the ultraviolet region.
Ahmed, Sajid; Jardak, Seifallah; Alouini, Mohamed-Slim
2017-05-12
The estimation of angular-location and range of a target is a joint optimization problem. In this work, to estimate these parameters, by meticulously evaluating the phase of the received samples, low complexity sequential and joint estimation algorithms are proposed. We use a single-input and multiple-output (SIMO) system and transmit frequency-modulated continuous-wave signal. In the proposed algorithm, it is shown that by ignoring very small value terms in the phase of the received samples, fast-Fourier-transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. Sequential estimation algorithm uses FFT and requires only one received snapshot to estimate the angular-location. Joint estimation algorithm uses two-dimensional FFT to estimate the angular-location and range of the target. Simulation results show that joint estimation algorithm yields better mean-squared-error (MSE) for the estimation of angular-location and much lower run-time compared to conventional MUltiple SIgnal Classification (MUSIC) algorithm.
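The sequential angle step can be sketched for a single far-field target: its snapshot across a half-wavelength-spaced array is a complex exponential, so one FFT locates the spatial frequency and hence the angle. The array size and target angle below are arbitrary illustrations, not the paper's setup.

```python
import numpy as np

N = 64                                  # receive antennas (ULA, d = lambda/2)
theta = np.deg2rad(20.0)                # true target angle
n = np.arange(N)
snapshot = np.exp(1j * np.pi * n * np.sin(theta))   # one received snapshot

# Zero-padded FFT over the antennas; the peak bin gives the spatial
# frequency sin(theta)/2, from which the angle follows.
Nfft = 4096
spectrum = np.abs(np.fft.fft(snapshot, Nfft))
freq = np.argmax(spectrum) / Nfft
if freq > 0.5:
    freq -= 1.0                         # map to [-0.5, 0.5) for negative angles
theta_hat = np.degrees(np.arcsin(2 * freq))
```

The joint algorithm extends this by taking a second FFT over the fast-time samples of the FMCW beat signal to recover range alongside angle.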
Minimum Mean-Square Error Single-Channel Signal Estimation
DEFF Research Database (Denmark)
Beierholm, Thomas
2008-01-01
The topic of this thesis is MMSE signal estimation for hearing aids when only one microphone is available. The research is relevant for noise reduction systems in hearing aids. To fully benefit from the amplification provided by a hearing aid, noise reduction functionality is important, as hearing-impaired persons in some noisy situations need a higher signal-to-noise ratio for speech to be intelligible than normal-hearing persons do. In this thesis two different methods of approaching the MMSE signal estimation problem are examined; the methods differ in the models used for the signal and noise. In one method, inference is performed by particle filtering: the speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths, and the noise is assumed non-stationary and white. Compared to using the AR coefficients directly, this reparameterization is found very beneficial. Although the performance of the two algorithms is found comparable, the particle filter algorithm does a much better job of tracking the noise.
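For jointly Gaussian signal and noise, the single-channel MMSE estimator reduces to a Wiener-style gain on the noisy observation; this scalar case is the simplest instance of the estimation problem the thesis studies. The variances below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
sig_var, noise_var = 1.0, 0.25
s = rng.normal(0, np.sqrt(sig_var), n)        # clean "signal"
y = s + rng.normal(0, np.sqrt(noise_var), n)  # single-channel observation

# MMSE estimate for jointly Gaussian signal and noise: a fixed gain
# sig_var / (sig_var + noise_var) applied to the observation.
gain = sig_var / (sig_var + noise_var)
s_hat = gain * y

mse_mmse = np.mean((s_hat - s) ** 2)   # near sig*noise/(sig+noise)
mse_raw = np.mean((y - s) ** 2)        # near noise_var
```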
The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.
Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar
2018-03-01
This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility ( 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2015-04-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.
SS Cygni: The accretion disk in eruption and at minimum light
International Nuclear Information System (INIS)
Kiplinger, A.L.
1979-01-01
Absolute spectrophotometric observations of the dwarf nova SS Cygni have been obtained at maximum light, during the subsequent decline, and at minimum light. In order to provide a critical test of accretion disk theory, a model for a steady-state α-model accretion disk has been constructed which utilizes a grid of stellar energy distributions to synthesize the disk flux. Physical parameters for the accretion disk at maximum light are set by estimates of the intrinsic luminosity of the system that result from a desynthesis of a composite minimum-light energy distribution. At maximum light, agreement between observational and theoretical continuum slopes and the Balmer jump is remarkably good. The model fails, however, during the eruption decline and at minimum light. It appears that the physical character of an accretion disk at minimum light must radically differ from the disk observed at maximum light.
Directory of Open Access Journals (Sweden)
Phan Thanh Noi
2016-12-01
Full Text Available This study aims to evaluate quantitatively the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. There exist no studies about Vietnam that have integrated both TERRA and AQUA LST of daytime and nighttime for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective to describe the differences between LST and Ta, we have tested several popular methods, such as: the Pearson correlation coefficient, stepwise, Bayesian information criterion (BIC), adjusted R-squared and the principal component analysis (PCA) of 14 variables (including LST products (four variables), NDVI, elevation, latitude, longitude, day length in hours, Julian day and four variables of the view zenith angle), and then, we applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both LST daytime and nighttime of TERRA and AQUA and data from the topography analysis.
Estimating near-shore wind resources
DEFF Research Database (Denmark)
Floors, Rogier Ralph; Hahmann, Andrea N.; Peña, Alfredo
An evaluation and sensitivity study using the WRF mesoscale model to estimate the wind in a coastal area is performed using a unique data set consisting of scanning, profiling and floating lidars. The ability of the WRF model to represent the wind speed was evaluated by running the model for a four...... grid spacings were performed for each of the two schemes. An evaluation of the wind profile using vertical profilers revealed small differences in modelled mean wind speed between the different set-ups, with the YSU scheme predicting slightly higher mean wind speeds. Larger differences between...... the different simulations were observed when comparing the root-mean-square error (RMSE) between modelled and measured wind, with the ERA interim-based simulations having the lowest errors. The simulations with finer horizontal grid spacing had a larger MSE. Horizontal transects of mean wind speed across...
Directory of Open Access Journals (Sweden)
Song-hao Shang
2010-09-01
Full Text Available Floods are essential for the regeneration and growth of floodplain forests in arid and semiarid regions. However, river flows, and especially flood flows, have decreased greatly with the increase of water diversion from rivers and/or reservoir regulation, resulting in severe deterioration of floodplain ecosystems. Estimation of the flood stage that will inundate the floodplain forest is necessary for the forest's restoration or protection. To balance water use for economic purposes and floodplain forest protection, the inundated forest width method is proposed for estimating the minimum flood stage for floodplain forests from the inundated forest width-stage curve. The minimum flood stage is defined as the breakpoint of the inundated forest width-stage curve, and is determined directly or analytically from the curve. For the analytical approach, the problem under consideration is described by a multi-objective optimization model, which can be solved by the ideal point method. Then, the flood flow at the minimum flood stage (minimum flood flow), which is useful for flow regulation, can be calculated from the stage-discharge curve. In order to protect the forest in a river floodplain in a semiarid area in Xinjiang subject to reservoir regulation upstream, the proposed method was used to determine the minimum flood stage and flow for the forest. Field survey of hydrology, topography, and forest distribution was carried out at typical cross sections in the floodplain. Based on the survey results, minimum flood flows for six typical cross sections were estimated to be between 306 m³/s and 393 m³/s. Their maximum, 393 m³/s, was considered the minimum flood flow for the study river reach. This provides an appropriate flood flow for the protection of floodplain forest and can be used in the regulation of the upstream reservoir.
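The analytical breakpoint determination can be sketched with the ideal point method the abstract mentions: normalize both objectives (a low stage to save water, a wide inundated forest for protection), then take the observed point on the curve closest to the unattainable ideal point. The curve below is invented for illustration; the real analysis would use the surveyed width-stage data.

```python
import numpy as np

def minimum_flood_stage(stage, width):
    """Breakpoint of the inundated-forest-width vs. stage curve via the ideal point method.

    Both objectives are normalized to [0, 1]. The ideal (unattainable) point pairs
    the lowest stage with the largest inundated width; the breakpoint is the
    observed point closest to it in Euclidean distance.
    """
    s = np.asarray(stage, float)
    w = np.asarray(width, float)
    s_n = (s - s.min()) / (s.max() - s.min())   # 0 = lowest stage (best)
    w_n = (w - w.min()) / (w.max() - w.min())   # 1 = widest inundation (best)
    d = np.hypot(s_n - 0.0, w_n - 1.0)          # distance to the ideal point (0, 1)
    i = np.argmin(d)
    return s[i], w[i]

# Toy curve with a clear knee: inundated width rises steeply, then saturates
stage = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
width = np.array([0.0, 10.0, 80.0, 95.0, 98.0, 100.0])
s_min, w_at = minimum_flood_stage(stage, width)
```

The selected point sits at the knee of the curve, where further stage increases buy little additional inundated width; the corresponding minimum flood flow would then be read off the stage-discharge curve.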
Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA...... estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...
On the effect of correlated measurements on the performance of distributed estimation
Ahmed, Mohammed
2013-06-01
We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple-access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume sensor node observations to be correlated with the source signal and with each other as well. The correlation coefficient between two observations decays exponentially with the distance separating them. The effect of the distance-based correlation on the estimation quality is demonstrated and compared with the case of unity correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependency on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytic results. © 2013 IEEE.
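A minimal sketch of how distance-based correlation enters the estimation quality: for a scalar Gaussian source observed in correlated noise, the linear MMSE error is σθ² − cᵀC⁻¹c. The exponential-decay model below is a simplified stand-in for the paper's setup (it omits channel fading), and all parameter values and the line geometry are invented for illustration.

```python
import numpy as np

def lmmse_mse(var_theta, var_noise, rho, positions):
    """MSE of the linear MMSE estimate of a scalar source theta observed as
    x_k = theta + n_k, where the noise correlation between sensors k and l
    decays exponentially with distance: E[n_k n_l] = var_noise * rho**|d_k - d_l|."""
    d = np.abs(np.subtract.outer(positions, positions))
    Cn = var_noise * rho ** d                     # noise covariance matrix
    C = var_theta + Cn                            # observation covariance (theta common to all)
    c = np.full(len(positions), var_theta)        # cross-covariance E[theta * x_k]
    return var_theta - c @ np.linalg.solve(C, c)  # sigma_theta^2 - c^T C^{-1} c

pos = np.arange(10.0)                        # 10 sensors on a line, unit spacing (toy geometry)
mse_decay = lmmse_mse(1.0, 0.5, 0.5, pos)    # distance-decaying noise correlation
mse_unity = lmmse_mse(1.0, 0.5, 0.999, pos)  # nearly unity-correlated noise
```

With nearly unity-correlated noise the sensors carry almost redundant information, so averaging cannot suppress the noise and the MSE stays close to the single-sensor value; with distance-decaying correlation the fusion center averages out part of the noise and the MSE drops.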
Optimal Bandwidth Selection for Kernel Density Functionals Estimation
Directory of Open Access Journals (Sweden)
Su Chen
2015-01-01
Full Text Available The choice of bandwidth is crucial to the kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with appropriate choice of γ(x), and furthermore equality of scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, the “rule of thumb”) and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
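A small sketch of KDFE for the simplest case γ(x) = 1, i.e., estimating ∫f²(x)dx with a Gaussian kernel. For illustration it uses Silverman's normal-scale ("rule of thumb") bandwidth, a stand-in for the MSE-optimal bandwidth the paper derives; the estimator itself uses the exact identity that the integral of a product of two Gaussian kernels is again a Gaussian.

```python
import numpy as np

def density_functional(x, h=None):
    """Kernel estimate of the density functional ∫ f(x)^2 dx with a Gaussian kernel.

    Uses the identity ∫ f_hat(x)^2 dx = (1/n^2) Σ_ij φ_s(x_i − x_j) with s = h√2,
    where φ_s is the N(0, s^2) density. The default h is Silverman's normal-scale
    ("rule of thumb") bandwidth, not the MSE-optimal choice from the paper.
    """
    x = np.asarray(x, float)
    n = x.size
    if h is None:
        h = 1.06 * x.std(ddof=1) * n ** (-0.2)   # normal-scale bandwidth
    s = h * np.sqrt(2.0)
    diff = np.subtract.outer(x, x)
    return np.exp(-diff**2 / (2 * s**2)).sum() / (n**2 * s * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
est = density_functional(rng.standard_normal(1000))
# For a standard normal density, the true value is ∫ f^2 = 1/(2√π) ≈ 0.2821
```

The rule-of-thumb bandwidth oversmooths slightly for this target, biasing the estimate a little low, which is exactly the kind of error the paper's MSE-minimizing bandwidth is designed to reduce.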
Behavioral and physiological significance of minimum resting metabolic rate in king penguins.
Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y
2008-01-01
Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f(H)) measurements were recorded to estimate rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f(H) data to determine probable breathing frequency in resting penguins. The most pertinent results were that minimum f(H) achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that minimum f(H) during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f(H) traces during periods of minimum f(H) and provides accurate estimates of breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
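The role of shrinkage in stabilizing minimum variance weights can be sketched with a plain linear shrinkage toward a scaled identity. This is not the paper's hybrid Tyler/Ledoit-Wolf estimator with online intensity tuning; it is a simplified illustration with a fixed shrinkage intensity and synthetic returns, in the regime where the number of assets is close to the number of samples.

```python
import numpy as np

def min_variance_weights(returns, shrinkage):
    """Global minimum variance portfolio w ∝ Σ^{-1} 1 with a linearly shrunk
    covariance Σ = (1-α) S + α (tr(S)/p) I. A fixed intensity α is used here;
    the paper tunes it online via random matrix theory and robustifies S with
    Tyler's M-estimator for heavy-tailed returns."""
    S = np.cov(returns, rowvar=False)             # sample covariance (poor when n ≈ p)
    p = S.shape[0]
    Sigma = (1 - shrinkage) * S + shrinkage * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(Sigma, np.ones(p))        # unnormalized Σ^{-1} 1
    return w / w.sum()                            # weights summing to one

rng = np.random.default_rng(1)
p, n = 50, 60                                     # assets ≈ samples: S is ill-conditioned
R = rng.standard_normal((n, p)) * 0.01            # synthetic i.i.d. returns
w = min_variance_weights(R, shrinkage=0.5)
```

At α = 0 the weights inherit the noise of the near-singular sample covariance; shrinking toward the identity regularizes the inverse, which is the effect the paper quantifies and optimizes.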
Lactate minimum in a ramp protocol and its validity to estimate the maximal lactate steady state
Directory of Open Access Journals (Sweden)
Emerson Pardono
2009-01-01
Full Text Available http://dx.doi.org/10.5007/1980-0037.2009v11n2p174 The objectives of this study were to evaluate the validity of the lactate minimum (LM) using a ramp protocol for the determination of LM intensity (LMI), and to estimate the exercise intensity corresponding to maximal blood lactate steady state (MLSS). In addition, the possibility of determining aerobic and anaerobic fitness was investigated. Fourteen male cyclists of regional level performed one LM protocol on a cycle ergometer (Excalibur–Lode) consisting of an incremental test at an initial workload of 75 Watts, with increments of 1 Watt every 6 seconds. Hyperlactatemia was induced by a 30-second Wingate anaerobic test (WAT) (Monark–834E) at a workload corresponding to 8.57% of the volunteer’s body weight. Peak power (11.5±2 Watts/kg), mean power output (9.8±1.7 Watts/kg), fatigue index (33.7±2.3%) and lactate 7 min after WAT (10.5±2.3 mmol/L) were determined. The incremental test identified LMI (207.8±17.7 Watts) and its respective blood lactate concentration (2.9±0.7 mmol/L) and heart rate (153.6±10.6 bpm), and also maximal aerobic power (305.2±31.0 Watts). MLSS intensity was identified by 2 to 4 constant exercise tests (207.8±17.7 Watts), with no difference compared to LMI and good agreement between the two parameters. The LM test using a ramp protocol seems to be a valid method for the identification of LMI and estimation of MLSS intensity in regional cyclists. In addition, both anaerobic and aerobic fitness parameters were identified during a single session.
International Nuclear Information System (INIS)
Odin, I.N.; Grin'ko, V.V.; Kozlovskij, V.F.; Safronov, E.V.; Gapanovich, M.V.
2004-01-01
Using data of differential thermal, X-ray phase and microstructural analyses, phase diagrams of the reciprocal systems PbSe + MI₂ = MSe + PbI₂ (M = Hg (1), Mn (2), Sn (3)) were constructed. It was ascertained that the HgSe–PbI₂ diagonal in system 1 is stable. Transformations leading to crystallization of the metastable ternary compound formed in the system PbSe–PbI₂ and of metastable polytypes of lead iodide in systems 1 and 2 in the range of temperatures from 620 to 685 K were studied. New intermediate metastable phases in systems 1, 2 and 3 were prepared by melt quenching. Crystal lattice parameters of the phases crystallizing in the CdCl₂ structural type were determined.
Multivariate Generalized Multiscale Entropy Analysis
Directory of Open Access Journals (Sweden)
Anne Humeau-Heurtier
2016-11-01
Full Text Available Multiscale entropy (MSE) was introduced in the 2000s to quantify systems’ complexity. MSE relies on (i) a coarse-graining procedure to derive a set of time series representing the system dynamics on different time scales; (ii) the computation of the sample entropy for each coarse-grained time series. A refined composite MSE (rcMSE)—based on the same steps as MSE—also exists. Compared to MSE, rcMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy for short time series. The multivariate versions of MSE (MMSE) and rcMSE (MrcMSE) have also been introduced. In the coarse-graining step used in MSE, rcMSE, MMSE, and MrcMSE, the mean value is used to derive representations of the original data at different resolutions. A generalization of MSE was recently published, using the computation of different moments in the coarse-graining procedure. However, so far, this generalization only exists for univariate signals. We therefore herein propose an extension of this generalized MSE to multivariate data. The multivariate generalized algorithms of MMSE and MrcMSE presented herein (MGMSE and MGrcMSE, respectively) are first analyzed through the processing of synthetic signals. We reveal that MGrcMSE shows better performance than MGMSE for short multivariate data. We then study the performance of MGrcMSE on two sets of short multivariate electroencephalograms (EEG) available in the public domain. We report that MGrcMSE may show better performance than MrcMSE in distinguishing different types of multivariate EEG data. MGrcMSE could therefore supplement MMSE or MrcMSE in the processing of multivariate datasets.
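The two core steps of MSE named above, mean coarse-graining and sample entropy, can be sketched directly. This is a plain univariate MSE illustration (not the multivariate or generalized variants proposed in the paper), with the tolerance r fixed from the original series as in Costa's algorithm; white noise is used because its entropy is known to fall as the scale grows.

```python
import numpy as np

def coarse_grain(x, scale):
    """Mean coarse-graining used by MSE: averages over non-overlapping windows."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], float).reshape(-1, scale).mean(axis=1)

def sample_entropy(x, r, m=2):
    """SampEn(m, r): negative log ratio of (m+1)- to m-length template matches
    within tolerance r (Chebyshev distance, self-matches excluded)."""
    x = np.asarray(x, float)
    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return (d <= r).sum() - len(t)          # drop self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(2)
noise = rng.standard_normal(800)
r = 0.15 * noise.std()                          # tolerance fixed from the original series
e1 = sample_entropy(coarse_grain(noise, 1), r)  # scale 1
e5 = sample_entropy(coarse_grain(noise, 5), r)  # scale 5: variance shrinks, entropy drops
```

Because r stays fixed while coarse-graining shrinks the series' variance, matches become more frequent at larger scales and the entropy of white noise decreases, the signature MSE curve for uncorrelated noise.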
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to find out the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on the same individuals made repeatedly over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance components of the errors, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Alignment and girder position of MSE septa in the new LSS4 extraction channel of the SPS
Balhan, B; Rizzo, A; Weterings, W; CERN. Geneva. SPS and LHC Division
2002-01-01
For the extraction of the beam from the Super Proton Synchrotron (SPS) to ring 2 of the Large Hadron Collider (LHC) and the CERN Neutrino to Gran Sasso (CNGS) facility, a new fast-extraction system is being constructed in the long straight section LSS4 of the SPS. Besides extraction bumpers, enlarged aperture quadrupoles and extraction kicker magnets (MKE), six conventional DC septum magnets (MSE) are used. These magnets are mounted on a single rigid support girder, pre-aligned so as to follow the trajectory of the extracted beam and optimise the available aperture. The girder has been motorised in order to optimise the local SPS aperture during setting up, so as to avoid the risk of circulating beam impact on the septum coils. In this note, we briefly present the trajectory and apertures of the beam, we describe the calculations and methods that have been used to determine the magnet position on the girder, and finally we report on the details of the girder movement and alignment.
Runge-Kutta methods with minimum storage implementations
Ketcheson, David I.
2010-03-01
Solution of partial differential equations by the method of lines requires the integration of large numbers of ordinary differential equations (ODEs). In such computations, storage requirements are typically one of the main considerations, especially if a high order ODE solver is required. We investigate Runge-Kutta methods that require only two storage locations per ODE. Existing methods of this type require additional memory if an error estimate or the ability to restart a step is required. We present a new, more general class of methods that provide error estimates and/or the ability to restart a step while still employing the minimum possible number of memory registers. Examples of such methods are found to have good properties. © 2009 Elsevier Inc. All rights reserved.
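A concrete example of the two-register class discussed above is Williamson's classic 2N-storage third-order Runge-Kutta method, which keeps only the solution vector and a single auxiliary register. This sketch shows the base scheme only; it omits the error-estimate and restart extensions that are the paper's contribution.

```python
import numpy as np

def rk3_2n(f, u, t, dt):
    """One step of Williamson's two-register (2N-storage) third-order Runge-Kutta.

    Only the solution `u` and one auxiliary register `s` are stored, so the memory
    cost is two locations per ODE (coefficients from Williamson, 1980)."""
    a = [0.0, -5.0 / 9.0, -153.0 / 128.0]
    b = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
    c = [0.0, 1.0 / 3.0, 3.0 / 4.0]
    s = np.zeros_like(u)
    for i in range(3):
        s = a[i] * s + dt * f(t + c[i] * dt, u)  # overwrite the single register
        u = u + b[i] * s                          # update the solution in place
    return u

# Integrate u' = -u from u(0) = 1 to t = 1; the exact solution is exp(-1)
u = np.array([1.0])
n, dt = 100, 0.01
for k in range(n):
    u = rk3_2n(lambda t, y: -y, u, k * dt, dt)
```

On the linear test problem one step reproduces the third-order Taylor expansion of exp(-h) exactly, so the global error at t = 1 is tiny, while the integrator never allocates the three stage vectors a textbook RK3 would need.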
Minimum Wages and Employment: A Case Study of the Fast Food Industry in New Jersey and Pennsylvania
David Card; Alan B. Krueger
1993-01-01
On April 1, 1992 New Jersey's minimum wage increased from $4.25 to $5.05 per hour. To evaluate the impact of the law we surveyed 410 fast food restaurants in New Jersey and Pennsylvania before and after the rise in the minimum. Comparisons of the changes in wages, employment, and prices at stores in New Jersey relative to stores in Pennsylvania (where the minimum wage remained fixed at $4.25 per hour) yield simple estimates of the effect of the higher minimum wage. Our empirical findings chal...
Mohr, Rachel M; Tomberlin, Jeffery K
2015-07-01
Understanding the onset and duration of adult blow fly activity is critical to accurately estimating the period of insect activity or minimum postmortem interval (minPMI). Few, if any, reliable techniques have been developed and consequently validated for using adult fly activity to determine a minPMI. In this study, adult blow flies (Diptera: Calliphoridae) of Cochliomyia macellaria and Chrysomya rufifacies were collected from swine carcasses in rural central Texas, USA, during summer 2008 and Phormia regina and Calliphora vicina in the winter during 2009 and 2010. Carcass attendance patterns of blow flies were related to species, sex, and oocyte development. Summer-active flies were found to arrive 4-12 h after initial carcass exposure, with both C. macellaria and C. rufifacies arriving within 2 h of one another. Winter-active flies arrived within 48 h of one another. There was significant difference in degree of oocyte development on each of the first 3 days postmortem. These frequency differences allowed a minPMI to be calculated using a binomial analysis. When validated with seven tests using domestic and feral swine and human remains, the technique correctly estimated time of placement in six trials.
Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S
2014-09-30
To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45 p, and 50 p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45 p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45 p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health-saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45 p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40 p and 50 p per unit, is estimated to have an approximately 40-50 times
DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA
Huang, G; Nix, AR; Armour, SMD
2010-01-01
Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...
van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew
2015-04-01
Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.
Directory of Open Access Journals (Sweden)
Caroline A Curtis
Full Text Available Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that climatic limits inferred from distribution data are consistently broader than USDA PLANTS experts' knowledge and likely provide more robust estimates of climatic tolerance, especially for
Anesthesiologists' perceptions of minimum acceptable work habits of nurse anesthetists.
Logvinov, Ilana I; Dexter, Franklin; Hindman, Bradley J; Brull, Sorin J
2017-05-01
Work habits are non-technical skills that are an important part of job performance. Although non-technical skills are usually evaluated on a relative basis (i.e., "grading on a curve"), the validity of evaluation on an absolute basis (i.e., a "minimum passing score") needs to be determined. Survey and observational study. None. None. The theme of "work habits" was assessed using a modification of Dannefer et al.'s 6-item scale, with scores ranging from 1 (lowest performance) to 5 (highest performance). E-mail invitations were sent to all consultant and fellow anesthesiologists at Mayo Clinic in Florida, Arizona, and Minnesota. Because work habits expectations can be generational, the survey was designed for adjustment based on all invited (responding or non-responding) anesthesiologists' year of graduation from residency. The overall mean±standard deviation of the score for anesthesiologists' minimum expectations of nurse anesthetists' work habits was 3.64±0.66 (N=48). Minimum acceptable scores were correlated with the year of graduation from anesthesia residency (linear regression P=0.004). Adjusting for survey non-response using all N=207 anesthesiologists, the mean of the minimum acceptable work habits adjusted for year of graduation was 3.69 (standard error 0.02). The minimum expectations for nurse anesthetists' work habits were compared with observational data obtained from the University of Iowa. Among 8940 individual nurse anesthetist work habits scores, only 2.6% fell below this minimum; mean work habits scores were significantly greater than the Mayo estimate (3.69) of the minimum expectations (all P values significant). Thus, work habits of nurse anesthetists within departments should not be compared with an absolute minimum score (i.e., 3.69). Instead, work habits scores should be analyzed based on relative reporting among anesthetists. Copyright © 2017 Elsevier Inc. All rights reserved.
Blind Estimation of the Phase and Carrier Frequency Offsets for LDPC-Coded Systems
Directory of Open Access Journals (Sweden)
Houcke Sebastien
2010-01-01
Full Text Available Abstract We consider in this paper the problem of phase offset and Carrier Frequency Offset (CFO) estimation for Low-Density Parity-Check (LDPC) coded systems. We propose new blind estimation techniques based on the calculation and minimization of functions of the Log-Likelihood Ratios (LLR) of the syndrome elements obtained according to the parity check matrix of the error-correcting code. In the first part of this paper, we consider phase offset estimation for a Binary Phase Shift Keying (BPSK) modulation and propose a novel estimation technique. Simulation results show that the proposed method is very effective and outperforms many existing algorithms. Then, we modify the estimation criterion so that it can work for higher-order modulations. One interesting feature of the proposed algorithm when applied to high-order modulations is that the phase offset of the channel can be blindly estimated without any ambiguity. In the second part of the paper, we consider the problem of CFO estimation and propose estimation techniques that are based on the same concept as the ones presented for the phase offset estimation. The Mean Squared Error (MSE) and Bit Error Rate (BER) curves show the efficiency of the proposed estimation techniques.
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting a single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than it was for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
Energy Technology Data Exchange (ETDEWEB)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; Harmon-Smith, Miranda; Doud, Devin; Reddy, T. B. K.; Schulz, Frederik; Jarett, Jessica; Rivers, Adam R.; Eloe-Fadrosh, Emiley A.; Tringe, Susannah G.; Ivanova, Natalia N.; Copeland, Alex; Clum, Alicia; Becraft, Eric D.; Malmstrom, Rex R.; Birren, Bruce; Podar, Mircea; Bork, Peer; Weinstock, George M.; Garrity, George M.; Dodsworth, Jeremy A.; Yooseph, Shibu; Sutton, Granger; Glöckner, Frank O.; Gilbert, Jack A.; Nelson, William C.; Hallam, Steven J.; Jungbluth, Sean P.; Ettema, Thijs J. G.; Tighe, Scott; Konstantinidis, Konstantinos T.; Liu, Wen-Tso; Baker, Brett J.; Rattei, Thomas; Eisen, Jonathan A.; Hedlund, Brian; McMahon, Katherine D.; Fierer, Noah; Knight, Rob; Finn, Rob; Cochrane, Guy; Karsch-Mizrachi, Ilene; Tyson, Gene W.; Rinke, Christian; Kyrpides, Nikos C.; Schriml, Lynn; Garrity, George M.; Hugenholtz, Philip; Sutton, Granger; Yilmaz, Pelin; Meyer, Folker; Glöckner, Frank O.; Gilbert, Jack A.; Knight, Rob; Finn, Rob; Cochrane, Guy; Karsch-Mizrachi, Ilene; Lapidus, Alla; Meyer, Folker; Yilmaz, Pelin; Parks, Donovan H.; Eren, A. M.; Schriml, Lynn; Banfield, Jillian F.; Hugenholtz, Philip; Woyke, Tanja
2017-08-08
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS) standard. The new checklists, the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), specify the metadata to be reported, including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
Comparative analysis of path loss prediction models for urban …
African Journals Online (AJOL)
the acceptable minimum MSE value of 6 dB for good signal propagation. Keywords: macrocellular areas ... facilitate high-speed data communications in addition to voice calls. ... On the basis of the mobile radio environment, path loss prediction ...
Speech Intelligibility Prediction Based on Mutual Information
DEFF Research Database (Denmark)
Jensen, Jesper; Taal, Cees H.
2014-01-01
This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (MSE) that arises when estimating a clean critical-band amplitude using a minimum mean-square error (MMSE) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well (ρ > 0.95) in predicting …
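For jointly Gaussian amplitudes the link between mutual information and MMSE that this predictor exploits is exact: I(X;Y) = ½ ln(σx²/mmse). A small numerical check of that identity, with illustrative variances rather than the paper's speech model:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_x2, sigma_n2 = 4.0, 1.0     # clean-amplitude and noise variances (illustrative)
n = 200_000
x = rng.normal(0.0, np.sqrt(sigma_x2), n)        # "clean" critical-band amplitude
y = x + rng.normal(0.0, np.sqrt(sigma_n2), n)    # noisy observation

# In the Gaussian case the MMSE estimate of x from y is the Wiener gain times y.
w = sigma_x2 / (sigma_x2 + sigma_n2)
mse_emp = np.mean((x - w * y) ** 2)

# Closed forms: mmse, mutual information, and the identity I = 0.5*ln(sigma_x2/mmse).
mmse = sigma_x2 * sigma_n2 / (sigma_x2 + sigma_n2)
mi = 0.5 * np.log(1 + sigma_x2 / sigma_n2)
print(mse_emp, mmse, mi, 0.5 * np.log(sigma_x2 / mmse))
```

The empirical MSE matches the closed-form mmse, and the two mutual-information expressions agree, which is why a monotone function of the MSE can serve as an information-based predictor in this setting.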
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Dopant density from maximum-minimum capacitance ratio of implanted MOS structures
International Nuclear Information System (INIS)
Brews, J.R.
1982-01-01
For uniformly doped structures, the ratio of the maximum to the minimum high frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
Zhang, Zhen; Zhang, Qianwu; Chen, Jian; Li, Yingchun; Song, Yingxiong
2016-06-13
A low-complexity joint symbol synchronization and SFO estimation scheme for asynchronous optical IMDD OFDM systems based on only one training symbol is proposed. Numerical simulations and experimental demonstrations are also undertaken to evaluate the performance of the scheme. The experimental results show that robust and precise symbol synchronization and SFO estimation can be achieved simultaneously at received optical powers as low as -20 dBm in asynchronous OOFDM systems. SFO estimation accuracy in MSE can be lower than 1 × 10⁻¹¹ for SFOs ranging from -60 ppm to 60 ppm after 25 km SSMF transmission. Optimal system performance can be maintained as long as the cumulative number of frames employed for the calculation is less than 50 under the above-mentioned conditions. Meanwhile, the proposed joint scheme has low operational complexity compared with existing methods when symbol synchronization and SFO estimation are considered together. These results can serve as an important reference in practical system designs.
Employment effects of minimum wages
Neumark, David
2014-01-01
The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.
Directory of Open Access Journals (Sweden)
Jinliang Xu
2013-06-01
Full Text Available This paper investigates the filtering problem for multivariate continuous nonlinear non-Gaussian systems based on an improved minimum error entropy (MEE) criterion. The system is described by a set of nonlinear continuous equations with non-Gaussian system and measurement noises. The recently developed generalized density evolution equation is utilized to formulate the joint probability density function (PDF) of the estimation errors. Combining the entropy of the estimation error with the mean squared error, a novel performance index is constructed to ensure that the estimation error not only has small uncertainty but also approaches zero. Using the conjugate gradient method, the optimal filter gain matrix is then obtained by minimizing the improved minimum error entropy criterion. In addition, a condition is proposed to guarantee that the estimation error dynamics are exponentially bounded in the mean square sense. Finally, comparative simulation results are presented to show that the proposed MEE filter is superior to the nonlinear unscented Kalman filter (UKF).
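The core MEE idea, shaping the error distribution rather than just its second moment, can be sketched with the kernel-based information potential V = (1/N²) Σᵢⱼ Gσ(eᵢ − eⱼ): ascending its gradient concentrates the errors. The toy below identifies a 2-tap FIR system with this classic adaptive MEE update; the window length, kernel width, step size, and the system itself are invented for illustration, and this is not the paper's continuous-time filter.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.5, -0.3])        # unknown 2-tap system, invented for the demo
n, L, sigma, mu = 3000, 20, 1.0, 0.2  # samples, window, kernel width, step size

x = rng.normal(size=n)
X = np.column_stack([x, np.r_[0.0, x[:-1]]])   # tap-delay input vectors
d = X @ w_true                                 # noiseless desired signal

w = np.zeros(2)
errs = []
for k in range(L, n):
    e = d[k - L:k] - X[k - L:k] @ w            # errors over a sliding window
    dE = e[:, None] - e[None, :]               # pairwise error differences
    G = np.exp(-dE**2 / (2 * sigma**2))        # Gaussian kernel on the differences
    dX = X[k - L:k][:, None, :] - X[k - L:k][None, :, :]
    # Ascend the information potential V = mean(G): dV/dw is prop. to sum(G*dE*dX).
    grad = ((G * dE)[:, :, None] * dX).sum(axis=(0, 1))
    w += mu * grad / (L**2 * sigma**2)
    errs.append((d[k] - X[k] @ w) ** 2)

print(w)   # approaches w_true (MEE is blind to a DC offset; none is present here)
```

Because entropy is shift-invariant, practical MEE filters must fix the error mean separately; the zero-mean setup above sidesteps that detail.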
Centered Differential Waveform Inversion with Minimum Support Regularization
Kazei, Vladimir
2017-05-26
Time-lapse full-waveform inversion has two major challenges. The first is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion reduces the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that, with minimum support regularization, it provides better resolution of velocity changes than total variation and Tikhonov regularizations in time-lapse full-waveform inversion.
Hierarchical Bayes Small Area Estimation under a Unit Level Model with Applications in Agriculture
Directory of Open Access Journals (Sweden)
Nageena Nazir
2016-09-01
Full Text Available We study the Bayesian aspect of small area estimation using a unit level model. In this paper we propose and evaluate a new prior distribution for the ratio of variance components in the unit level model, rather than the usual uniform prior. To approximate the posterior moments of small area means, the Laplace approximation method is applied. This choice of prior avoids the extreme skewness usually present in the posterior distribution of the variance components, a property that leads to a more accurate Laplace approximation. We apply the proposed model to the analysis of horticultural data, and results from the model are compared with the frequentist approach and with the Bayesian model under the uniform prior in terms of average relative bias, average squared relative bias, and average absolute bias. The numerical results highlight the superiority of the proposed prior over the uniform prior. Thus Bayes estimators (with the new prior) of small area means have good frequentist properties, such as MSE and ARB, compared with traditional methods, viz., direct, synthetic, and composite estimators.
On Channel Estimation for OFDM/TDM Using MMSE-FDE in a Fast Fading Channel
Directory of Open Access Journals (Sweden)
Gacanin Haris
2009-01-01
Full Text Available MMSE-FDE can improve the transmission performance of OFDM combined with time division multiplexing (OFDM/TDM), but knowledge of the channel state information and the noise variance is required to compute the MMSE weight. In this paper, a performance evaluation of OFDM/TDM using MMSE-FDE with pilot-assisted channel estimation over a fast fading channel is presented. To improve the tracking ability against fast fading, a robust pilot-assisted channel estimation scheme is presented that uses time-domain filtering on a slot-by-slot basis and frequency-domain interpolation. We derive the mean square error (MSE) of the channel estimator and then discuss the tradeoff between improving the tracking ability against fading and reducing noise. The achievable bit error rate (BER) performance is evaluated by computer simulation and compared with conventional OFDM. It is shown that OFDM/TDM using MMSE-FDE achieves a lower BER and better tracking ability against fast fading than conventional OFDM.
Directory of Open Access Journals (Sweden)
E. Romero-Aguirre
2012-01-01
Full Text Available In this paper, a configurable superimposed training (ST)/data-dependent ST (DDST) transmitter and an architecture based on array processors (APs) for DDST channel estimation are presented. Both architectures, designed under a full-hardware paradigm, were described using Verilog HDL, targeted to a Xilinx Virtex-5, and compared with existing approaches. The synthesis results showed an FPGA slice consumption of 1% for the transmitter and 3% for the estimator, with 160 and 115 MHz operating frequencies, respectively. The signal-to-quantization-noise ratio (SQNR) performance of the transmitter is about 82 dB to support 4/16/64-QAM modulation. A Monte Carlo simulation demonstrates that the mean square error (MSE) of the channel estimator implemented in hardware is practically the same as that obtained with the floating-point golden model. The high performance and reduced hardware of the proposed architectures lead to the conclusion that the DDST concept can be applied in current communications standards.
Analysis of complex time series using refined composite multiscale entropy
International Nuclear Information System (INIS)
Wu, Shuen-De; Wu, Chiu-Wen; Lin, Shiou-Gwo; Lee, Kung-Yen; Peng, Chung-Kang
2014-01-01
Multiscale entropy (MSE) is an effective algorithm for measuring the complexity of a time series that has been applied in many fields successfully. However, MSE may yield an inaccurate estimation of entropy or induce undefined entropy because the coarse-graining procedure reduces the length of a time series considerably at large scales. Composite multiscale entropy (CMSE) was recently proposed to improve the accuracy of MSE, but it does not resolve undefined entropy. Here we propose a refined composite multiscale entropy (RCMSE) to improve CMSE. For short time series analyses, we demonstrate that RCMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy.
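The refinement is easy to state concretely: at scale τ, CMSE averages τ sample-entropy values, one per coarse-graining offset, whereas RCMSE pools the template-match counts across all τ coarse-grained series before taking a single logarithm, so the result is undefined only if every offset yields zero matches. A minimal sketch (the parameters m = 2 and r = 0.15·SD are conventional choices, not taken from this paper):

```python
import numpy as np

def match_counts(x, m, r):
    """Counts of template pairs of length m+1 and m within tolerance r (Chebyshev)."""
    nt = len(x) - m                       # number of templates (convention: N - m)
    tm = np.array([x[i:i + m] for i in range(nt)])
    tm1 = np.array([x[i:i + m + 1] for i in range(nt)])
    def pairs(t):
        c = 0
        for i in range(len(t) - 1):
            c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return c
    return pairs(tm1), pairs(tm)          # (A, B)

def rcmse(x, scale, m=2, r_factor=0.15):
    """Refined composite multiscale entropy at one scale: pool the match
    counts over all coarse-graining offsets, then take one logarithm."""
    r = r_factor * np.std(x)              # tolerance fixed from the original series
    A_tot = B_tot = 0
    for k in range(scale):
        end = k + (len(x) - k) // scale * scale
        cg = x[k:end].reshape(-1, scale).mean(axis=1)  # k-th coarse-grained series
        A, B = match_counts(cg, m, r)
        A_tot += A
        B_tot += B
    return -np.log(A_tot / B_tot)         # defined unless *every* offset fails

rng = np.random.default_rng(3)
wn = rng.normal(size=2000)
print([round(rcmse(wn, s), 2) for s in (1, 2, 3)])  # white noise: falls with scale
```

For white noise the entropy decreases with scale, the textbook MSE signature, while the pooled counts keep the estimate defined at scales where a single short coarse-grained series might produce no matches.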
Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
… of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of the UFEs under the constraint of integer harmonics. The MVDR filter is designed based on noise statistics, making it robust …
Fields, Gary S.; Kanbur, Ravi
2005-01-01
Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income-sharing between the employed and the unemployed. We find that there are situations…
2010-02-08
... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...
A Pareto-Improving Minimum Wage
Eliav Danziger; Leif Danziger
2014-01-01
This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...
Sunspots During the Maunder Minimum from Machina Coelestis by Hevelius
Carrasco, V. M. S.; Álvarez, J. Villalba; Vaquero, J. M.
2015-10-01
We revisited the sunspot observations published by Johannes Hevelius in his book Machina Coelestis (1679) corresponding to the period of 1653 - 1675 (just in the middle of the Maunder Minimum). We show detailed translations of the original Latin texts describing the sunspot records and provide the general context of these sunspot observations. From this source, we present an estimate of the annual values of the group sunspot number based only on the records that explicitly inform us of the presence or absence of sunspots. Although we obtain very low values of the group sunspot number, in accordance with a grand minimum of solar activity, these values are significantly higher in general than the values provided by Hoyt and Schatten ( Solar Phys. 179, 189, 1998) for the same period.
Relay self interference minimisation using tapped filter
Jazzar, Saleh
2013-05-01
In this paper we introduce a self interference (SI) estimation and minimisation technique for amplify-and-forward relays. Relays are used to help forward signals between a transmitter and a receiver, which increases signal coverage and reduces the required transmitted signal power. One problem facing relay communications is the signal leaked from the relay's output to its input. This causes an SI problem in which the new received signal at the relay's input is combined with the unwanted leaked signal from the relay's output. A solution is proposed in this paper to estimate and minimise this SI, based on a tapped filter at the destination. To obtain the optimum weights for this tapped filter, some channel parameters must be estimated first. This is performed blindly at the destination without the need for any training. This channel parameter estimation method is named blind self-interference channel estimation (BSICE). The next step in the proposed solution is to estimate the tapped filter's weights, which is performed by minimising the mean squared error (MSE) at the destination. This method is named the MSE-Optimum Weight (MSE-OW) method. Simulation results are provided to verify the performance of the BSICE and MSE-OW methods. © 2013 IEEE.
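Choosing tap weights to minimise the MSE at the destination is, in the stationary case, the classic Wiener solution w = R⁻¹p. The sketch below suppresses a hypothetical leakage channel with a short tapped filter; the channel taps, filter length, and noise level are invented for illustration, and this is not the paper's BSICE/MSE-OW procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
n, taps = 50_000, 4
s = rng.normal(size=n)                          # signal of interest
h = np.array([1.0, 0.45, 0.2])                  # hypothetical leakage channel
r = np.convolve(s, h)[:n] + 0.05 * rng.normal(size=n)   # received: echoes + noise

# Tap-delay matrix of the received signal (columns are delayed copies of r).
R_mat = np.column_stack([np.r_[np.zeros(k), r[:n - k]] for k in range(taps)])

# Wiener solution w = R^{-1} p minimises the MSE E[(s - w^T r_k)^2].
R = R_mat.T @ R_mat / n                         # autocorrelation estimate
p = R_mat.T @ s / n                             # cross-correlation estimate
w = np.linalg.solve(R, p)

mse = np.mean((s - R_mat @ w) ** 2)
print(w.round(3), mse)                          # residual well below the echo power
```

In an adaptive setting the same weights would be reached iteratively (e.g. by LMS), which is closer in spirit to tracking the leakage online.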
International Nuclear Information System (INIS)
Dam, H. van; Leege, P.F.A. de
1987-01-01
An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For ²³⁹Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimization of the output MSE in the presence of outliers results in a consistently close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times, and statistical measures of the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Prediction technique for minimum-heat-flux (MHF)- point condition of saturated pool boiling
International Nuclear Information System (INIS)
Nishio, Shigefumi
1987-01-01
The temperature-controlled hypothesis for the minimum-heat-flux (MHF)-point condition, in which the MHF-point temperature is regarded as the controlling factor and is expected to be independent of surface configuration and dimensions, is inductively investigated for saturated pool-boiling. In this paper such features of the MHF-point condition are experimentally proved first. Secondly, a correlation of the MHF-point temperature is developed for the effect of system pressure. Finally, a simple technique based on this correlation is presented to estimate the effects of surface configuration, dimensions and system pressure on the minimum heat flux. (author)
Small sample GEE estimation of regression parameters for longitudinal data.
Paul, Sudhir; Zhang, Xuemao
2014-09-28
Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other on a bias reduction. Simulations show that the performances of both bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of the correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used; however, GEEBc should be preferred, as it is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm
Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney
2014-01-01
Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as FlamMap's version five module, provides valuable fire behavior functions while enabling multi-core utilization for the...
HOTELLING'S T2 CONTROL CHARTS BASED ON ROBUST ESTIMATORS
Directory of Open Access Journals (Sweden)
SERGIO YÁÑEZ
2010-01-01
Full Text Available Under the presence of multivariate outliers in a Phase I analysis of a historical set of data, the T² control chart based on the usual sample mean vector and sample variance-covariance matrix performs poorly. Several alternative estimators have been proposed. Among them, estimators based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) are powerful in detecting a reasonable number of outliers. In this paper we propose a T² control chart using the biweight S estimators for the location and dispersion parameters when monitoring multivariate individual observations. Simulation studies show that this method outperforms the T² control chart based on MVE estimators for a small number of observations.
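The chart statistic itself is simple: for an individual observation x, T² = (x − m)ᵀ S⁻¹ (x − m). The sketch below computes it with the classical mean and covariance on invented Phase I data containing one gross outlier; a robust chart of the kind compared in the paper would substitute MVE, MCD, or biweight S estimates for m and S to avoid masking when several outliers are present.

```python
import numpy as np

rng = np.random.default_rng(5)

# Phase I "historical" data: 50 bivariate in-control points plus one gross outlier.
X = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=50)
X = np.vstack([X, [6.0, -6.0]])

mean = X.mean(axis=0)
S = np.cov(X, rowvar=False)        # classical estimates; a robust chart would
Sinv = np.linalg.inv(S)            # use MVE/MCD or biweight S estimates here

# Hotelling T^2 for each individual observation.
d = X - mean
T2 = np.einsum('ij,jk,ik->i', d, Sinv, d)
print(T2.argmax(), T2.max().round(2))  # the planted outlier has the largest T^2
```

With a single outlier the classical chart still flags it; the robust estimators matter when multiple outliers inflate S enough to hide each other.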
The impact of the UK National Minimum Wage on mental health
Directory of Open Access Journals (Sweden)
Christoph Kronenberg
2017-12-01
Full Text Available Despite an emerging literature, there is still sparse and mixed evidence on the wider societal benefits of Minimum Wage policies, including their effects on mental health. Furthermore, causal evidence on the relationship between earnings and mental health is limited. We focus on low-wage earners, who are at higher risk of psychological distress, and exploit the quasi-experiment provided by the introduction of the UK National Minimum Wage (NMW to identify the causal impact of wage increases on mental health. We employ difference-in-differences models and find that the introduction of the UK NMW had no effect on mental health. Our estimates do not appear to support earlier findings which indicate that minimum wages affect mental health of low-wage earners. A series of robustness checks accounting for measurement error, as well as treatment and control group composition, confirm our main results. Overall, our findings suggest that policies aimed at improving the mental health of low-wage earners should either consider the non-wage characteristics of employment or potentially larger wage increases.
The impact of the UK National Minimum Wage on mental health.
Kronenberg, Christoph; Jacobs, Rowena; Zucchelli, Eugenio
2017-12-01
Improving boiler unit performance using an optimum robust minimum-order observer
International Nuclear Information System (INIS)
Moradi, Hamed; Bakhtiari-Nejad, Firooz
2011-01-01
Research highlights: multivariable model of a boiler unit with uncertainty; design of a robust minimum-order observer; development of an optimal functional code in the MATLAB environment; determination of the optimum region of observer-based controller poles; guarantee of robust performance in the presence of parametric uncertainties. Abstract: To achieve good performance of the utility boiler, dynamic variables such as drum pressure, steam temperature, and drum water level must be controlled. In this paper, a linear time-invariant (LTI) model of a boiler system is considered in which the input variables are the feed-water and fuel mass rates. Because some state variables of the boiler system are inaccessible, a minimum-order observer is designed based on Luenberger's model to obtain an estimate x̃ of the true state x. Low design cost and high accuracy of state estimation are the main advantages of the minimum-order observer in comparison with previously designed full-order observers. By applying the observer to the closed-loop system, a regulator system is designed. Using an optimal functional code developed in the MATLAB environment, desired observer poles are found such that suitable time-response specifications of the boiler system are achieved and the gain and phase margin values are adjusted to an acceptable range. However, the real dynamic model may be associated with parametric uncertainties. In that case, the optimum region of the poles of the observer-based controller is found such that robust performance of the boiler system against model uncertainties is guaranteed.
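The observer principle can be sketched in a few lines: drive a copy of the plant model with the same input and correct it with the output residual through a gain L chosen so that A − LC is stable. For brevity the sketch uses a full-order Luenberger observer on an invented two-state discrete plant; it is not the boiler model, and the paper's minimum-order design would estimate only the unmeasured states.

```python
import numpy as np

# Toy 2-state discrete plant (illustrative, not the boiler model):
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])     # only the first state is measured

# Observer gain L chosen so (A - L C) has eigenvalues well inside the unit circle.
L_gain = np.array([[0.8], [0.5]])
assert np.all(np.abs(np.linalg.eigvals(A - L_gain @ C)) < 1)

x = np.array([[1.0], [-1.0]])  # true state (unknown to the observer)
xh = np.zeros((2, 1))          # observer's state estimate
for k in range(100):
    u = np.array([[np.sin(0.1 * k)]])
    y = C @ x                                     # measured output
    xh = A @ xh + B @ u + L_gain @ (y - C @ xh)   # Luenberger update
    x = A @ x + B @ u                             # plant update

print(np.abs(x - xh).max())    # estimation error decays toward zero
```

The estimation error obeys e(k+1) = (A − LC) e(k), so placing the eigenvalues of A − LC sets the decay rate, which is exactly the pole-region search the paper automates under parametric uncertainty.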
An estimate of the number of tropical tree species
Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. 
L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.
2015-01-01
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279
Minimum bias and underlying event studies at CDF
International Nuclear Information System (INIS)
Moggi, Niccolo
2010-01-01
Soft, non-perturbative interactions are poorly understood theoretically, even though they account for a large part of the hadronic cross section at currently available energies. We review the CDF studies of minimum bias and the underlying event in p p̄ collisions at 2 TeV. After proposing an operative definition of 'underlying event', we present part of a systematic set of measurements carried out by the CDF Collaboration with the goal of providing data to test and improve the QCD models of hadron collisions. Different analysis strategies for the underlying event and possible event topologies are discussed. Part of the CDF minimum-bias results are also presented: in this sample, which represents the full inelastic cross section, we can simultaneously test our knowledge of all the components that combine to form hadronic interactions. Comparisons with Monte Carlo simulations are shown alongside the data. These measurements will also contribute to more precise estimates of the soft QCD background of high-p_T observables.
Digital baseline estimation method for multi-channel pulse height analyzing
International Nuclear Information System (INIS)
Xiao Wuyun; Wei Yixiang; Ai Xianyun
2005-01-01
The basic features of digital baseline estimation for multi-channel pulse height analysis are introduced. The weight function of the minimum-noise baseline filter is deduced using variational calculus. The frequency response of this filter is also deduced via the Fourier transform, and the influence of its parameters on the amplitude-frequency response characteristics is discussed. Using MATLAB, the noise voltage signal from the charge-sensitive preamplifier is simulated, and the processing effect of minimum-noise digital baseline estimation is verified. According to the results of this research, the digital baseline estimation method can estimate the baseline optimally and is well suited for digital multi-channel pulse height analysis. (authors)
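The paper derives an optimal (minimum-noise) weight function, which is not reproduced here. As a hedged illustration of the underlying idea, the sketch below uses the simplest special case, a uniform-weight average of pre-trigger samples, to estimate the baseline and subtract it from the pulse height; all signal parameters are invented:

```python
import random

def estimate_baseline(samples, n_pre):
    """Estimate the DC baseline as the mean of n_pre pre-trigger samples.

    A uniform-weight average is the simplest special case; the paper instead
    derives an optimal minimum-noise weight function via variational calculus."""
    pre = samples[:n_pre]
    return sum(pre) / len(pre)

random.seed(0)
baseline_true = 0.5
# Simulated preamplifier output: noisy flat baseline followed by a decaying pulse.
trace = [baseline_true + random.gauss(0.0, 0.01) for _ in range(200)]
trace += [baseline_true + 2.0 * (0.95 ** k) for k in range(100)]  # pulse tail

b = estimate_baseline(trace, 200)
pulse_height = max(trace) - b   # baseline-corrected amplitude
```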
Multivariate refined composite multiscale entropy analysis
International Nuclear Information System (INIS)
Humeau-Heurtier, Anne
2016-01-01
Multiscale entropy (MSE) has become a prevailing method to quantify the complexity of signals. MSE relies on sample entropy. However, MSE may yield imprecise complexity estimates at large scales, because sample entropy does not give precise estimates when short signals are processed. A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. Nevertheless, RCMSE applies to univariate signals only. The simultaneous analysis of multi-channel (multivariate) data often outperforms studies based on univariate signals. We therefore introduce an extension of RCMSE to multivariate data. Applications of multivariate RCMSE to simulated processes reveal its better performance over the standard multivariate MSE. - Highlights: • Multiscale entropy quantifies data complexity but may be inaccurate at large scales. • A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. • Nevertheless, RCMSE is adapted to univariate time series only. • We herein introduce an extension of RCMSE to multivariate data. • It shows better performance than the standard multivariate multiscale entropy.
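For reference, the univariate RCMSE that the paper extends coarse-grains the series at every offset of a given scale, pools the template-match counts across offsets, and only then takes the logarithm, which is what stabilizes the estimate at large scales. A rough numpy sketch under those standard definitions; the parameters m = 2 and r = 0.15·SD are common conventions, not values taken from this paper:

```python
import numpy as np

def _match_counts(x, m, r):
    """Template match counts: B for length m, A for length m+1 (Chebyshev distance < r)."""
    n = len(x)
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if np.max(np.abs(x[i:i + m] - x[j:j + m])) < r:
                B += 1
                if abs(x[i + m] - x[j + m]) < r:
                    A += 1
    return A, B

def rcmse(x, scale, m=2, r_factor=0.15):
    """Refined composite MSE at one scale: coarse-grain at every offset k = 0..scale-1,
    pool match counts across offsets, then take -ln(A/B) once at the end."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()              # tolerance fixed from the original series
    A_tot = B_tot = 0
    for k in range(scale):
        n_win = (len(x) - k) // scale
        cg = x[k:k + n_win * scale].reshape(n_win, scale).mean(axis=1)
        A, B = _match_counts(cg, m, r)
        A_tot += A
        B_tot += B
    return -np.log(A_tot / B_tot)

rng = np.random.default_rng(1)
noise = rng.standard_normal(500)
e1, e3 = rcmse(noise, 1), rcmse(noise, 3)   # white-noise entropy decreases with scale
```

The multivariate extension replaces the scalar templates with multichannel ones; the composite pooling step is unchanged.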
Patriot Advanced Capability-3 Missile Segment Enhancement (PAC-3 MSE)
2015-12-01
[Extraction residue: acronym glossary and cost-variance table from the PAC-3 MSE Selected Acquisition Report.] Patriot Conduct of Fire Trainer; PoP - Proof of Principle; T - Threshold; TADIL-J - Tactical Digital Information Link-Joint; TADSS - Training Aids, Devices... Current SAR Baseline to Current Estimate changes (columns Econ, Qty, Sch, Eng, Est, Oth, Spt), Production Estimate: 5.272 + (0.311, 0.411, 0.398, 0.000, 0.126, 0.000, -0.158) = total change 1.088, giving 6.360; Development Estimate Changes, APUC Production Estimate: 4.957 + (0.287, 0.286, 0.398, 0.000, -0.244, 0.000, -0.158) = total change 0.569, giving 5.526.
Can households earning minimum wage in Nova Scotia afford a nutritious diet?
Williams, Patricia L; Johnson, Christine P; Kratzmann, Meredith L V; Johnson, C Shanthi Jacob; Anderson, Barbara J; Chenhall, Cathy
2006-01-01
To assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia. Food costing data were collected in 43 randomly selected grocery stores throughout NS in 2002 using the National Nutritious Food Basket (NNFB). To estimate the affordability of a nutritious diet for households earning minimum wage, average monthly costs for essential expenses were subtracted from overall income to see if enough money remained for the cost of the NNFB. This was calculated for three types of household: 1) two parents and two children; 2) lone parent and two children; and 3) single male. Calculations were also made for the proposed 2006 minimum wage increase with expenses adjusted using the Consumer Price Index (CPI). The monthly cost of the NNFB priced in 2002 for the three types of household was $572.90, $351.68, and $198.73, respectively. Put into the context of basic living, these data showed that Nova Scotians relying on minimum wage could not afford to purchase a nutritious diet and meet their basic needs, placing their health at risk. These basic expenses do not include other routine costs, such as personal hygiene products, household and laundry cleaners, prescriptions, and costs associated with physical activity, education or savings for unexpected expenses. People working at minimum wage in Nova Scotia have not had adequate income to meet basic needs, including a nutritious diet. The 2006 increase in minimum wage to $7.15/hr is inadequate to ensure that Nova Scotians working at minimum wage are able to meet these basic needs. Wage increases and supplements, along with supports for expenses such as childcare and transportation, are indicated to address this public health problem.
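The affordability test is a simple subtraction: monthly income minus essential non-food expenses, compared against the NNFB cost. A toy illustration; the NNFB cost and the $7.15/hr wage come from the abstract, while the hours and expense figures are invented for illustration and are not the study's numbers:

```python
def money_left_for_food(net_monthly_income, essential_expenses):
    """Income remaining for food after essential non-food expenses."""
    return net_monthly_income - sum(essential_expenses.values())

# NNFB monthly cost for a lone parent with two children (priced 2002, from the study).
nnfb_cost = 351.68
# Hypothetical illustrative budget; the study's actual expense figures are not reproduced.
income = 40 * 4.33 * 7.15   # full-time hours at the 2006 $7.15/hr minimum wage
expenses = {"rent": 650.0, "utilities": 150.0, "transport": 120.0, "childcare": 300.0}

remaining = money_left_for_food(income, expenses)
can_afford = remaining >= nnfb_cost
```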
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E
2010-06-01
In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y, where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian with a covariance matrix Λ that belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which are modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
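A useful consequence of model PA is that, for any fixed linear estimator K, the MSE is linear in the joint covariance Λ, so the worst case over the convex hull is attained at a vertex. The sketch below illustrates only that evaluation step for a nominal estimator; it does not solve the paper's full minimax SDP, and the vertex covariances are randomly generated placeholders:

```python
import numpy as np

def linear_mse(K, Lam, nx):
    """MSE of the linear estimate x_hat = K y under joint covariance Lam = [[Lxx,Lxy],[Lyx,Lyy]]."""
    Lxx, Lxy = Lam[:nx, :nx], Lam[:nx, nx:]
    Lyx, Lyy = Lam[nx:, :nx], Lam[nx:, nx:]
    return np.trace(Lxx - K @ Lyx - Lxy @ K.T + K @ Lyy @ K.T)

def worst_case_mse(K, vertices, nx):
    """MSE is linear in Lam, so its max over the convex hull is attained at a vertex."""
    return max(linear_mse(K, V, nx) for V in vertices)

rng = np.random.default_rng(0)
nx, ny = 2, 2
vertices = []
for _ in range(3):                          # random SPD joint covariances as hull vertices
    M = rng.standard_normal((nx + ny, nx + ny))
    vertices.append(M @ M.T + np.eye(nx + ny))

Lam_avg = sum(vertices) / len(vertices)     # nominal model: average of the vertices
K = Lam_avg[:nx, nx:] @ np.linalg.inv(Lam_avg[nx:, nx:])   # MMSE gain for the nominal model
wc = worst_case_mse(K, vertices, nx)
```

The minimax estimator of the paper additionally optimizes K against this worst case.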
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig
2017-10-18
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
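The ℓ2-regularized least squares problem that BPR ultimately reduces to has the familiar closed form x = (AᵀA + λI)⁻¹Aᵀy. As a hedged sketch of why the choice of λ matters, the code below sweeps λ on synthetic data with a poor singular-value structure; BPR itself derives λ from a perturbation-norm constraint, which is not reproduced here, and the truth-based sweep is possible only because the data are synthetic:

```python
import numpy as np

def ridge(A, y, lam):
    """l2-regularized least squares: x = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(2)
n, p = 40, 10
A = rng.standard_normal((n, p))
A[:, -1] = A[:, 0] + 1e-3 * rng.standard_normal(n)   # near-collinear: poor singular values
x_true = rng.standard_normal(p)
y = A @ x_true + 0.1 * rng.standard_normal(n)

# Sweep lam and keep the best estimation MSE (ground truth known for synthetic data only).
lams = np.logspace(-6, 2, 50)
errors = [np.mean((ridge(A, y, lam) - x_true) ** 2) for lam in lams]
best_lam = lams[int(np.argmin(errors))]
```

A well-chosen λ suppresses the noise amplification along the near-degenerate direction at the cost of a small bias.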
Directory of Open Access Journals (Sweden)
Fei Jin
2013-05-01
Full Text Available This paper studies the generalized spatial two stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case that the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE) that account for the trade-off between bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal K selection can then be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure for choosing K.
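For orientation, the 2SLS estimator at the core of GS2SLS projects the endogenous regressors onto the instrument space before the second-stage regression: β̂ = (XᵀP_Z X)⁻¹XᵀP_Z y with P_Z = Z(ZᵀZ)⁻¹Zᵀ. A minimal sketch on simulated data with a single endogenous regressor and no spatial terms, so this is far simpler than the paper's GS2SLS:

```python
import numpy as np

def two_sls(y, X, Z):
    """Two-stage least squares: beta = (X' Pz X)^{-1} X' Pz y, Pz = Z (Z'Z)^{-1} Z'."""
    Pz_X = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # first stage: project X on instruments
    return np.linalg.solve(Pz_X.T @ X, Pz_X.T @ y)

rng = np.random.default_rng(3)
n, K = 2000, 4
Z = rng.standard_normal((n, K))                    # valid instruments
u = rng.standard_normal(n)                         # structural error
# Endogenous regressor: correlated with u, but predicted by the instruments.
X = (Z @ np.array([1.0, 0.5, 0.2, 0.1]) + 0.8 * u + rng.standard_normal(n)).reshape(-1, 1)
beta_true = 2.0
y = beta_true * X[:, 0] + u

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)[0]    # inconsistent: X is endogenous
beta_2sls = two_sls(y, X, Z)[0]                    # consistent
```

The paper's bias-variance trade-off arises when K grows: more instruments shrink the variance of β̂ but inflate its finite-sample bias.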
Optimal training sequences for indoor wireless optical communications
International Nuclear Information System (INIS)
Wang, Jun-Bo; Jiao, Yuan; Song, Xiaoyu; Chen, Ming
2012-01-01
Since indoor wireless optical communication (WOC) systems offer several potential advantages over their radio frequency counterparts, there has been growing interest in them. Owing to the complicated optical propagation environment, multipath propagation occurs. To eliminate its effect, careful attention must be paid to channel estimation in indoor WOC systems. This paper investigates optimal training sequences (TSs) for estimating the channel impulse response in indoor WOC systems. Based on the Cramer-Rao bound (CRB), an explicit search criterion is derived. Optimum TSs are obtained and tabulated by computer search for different channel responses and TS lengths. Channel estimation errors are also investigated in terms of mean square error (MSE) performance. Simulation results show that the MSE of the channel estimator at the receiver can be reduced significantly by using the optimized TS set. Moreover, for a fixed channel order, the longer the TS, the better the achievable MSE performance. (paper)
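Given a transmitted TS, the channel impulse response can be estimated by least squares on the convolution (Toeplitz) model y = S h + n; the optimal-TS question is then which sequence minimizes the resulting estimation MSE/CRB. A hedged sketch using a random unipolar sequence (matching intensity modulation), not one of the paper's optimized TSs:

```python
import numpy as np

def conv_matrix(s, L):
    """Toeplitz matrix S such that S @ h gives the first len(s) samples of s * h."""
    n = len(s)
    S = np.zeros((n, L))
    for i in range(n):
        for j in range(L):
            if i - j >= 0:
                S[i, j] = s[i - j]
    return S

rng = np.random.default_rng(4)
L = 3                                   # assumed channel memory (order + 1)
h_true = np.array([1.0, 0.5, -0.2])     # illustrative channel taps
s = rng.choice([0.0, 1.0], size=64)     # random unipolar (intensity) training sequence
S = conv_matrix(s, L)
y = S @ h_true + 0.05 * rng.standard_normal(len(s))

h_hat = np.linalg.solve(S.T @ S, S.T @ y)   # least-squares channel estimate
mse = np.mean((h_hat - h_true) ** 2)
```

The estimation error covariance is σ²(SᵀS)⁻¹, which is why the structure of the TS (through SᵀS) governs the achievable MSE.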
Minimum critical crack depths in pressure vessels guidelines for nondestructive testing
International Nuclear Information System (INIS)
Crossley, M.R.; Townley, C.H.A.
1983-09-01
Estimates of the minimum critical depths which can be expected in high quality vessels designed to certain British and American Code rules are given. A simple means of allowing for fatigue crack growth in service is included. The data which are presented can be used to decide what sensitivity and what reporting levels should be employed during an ultrasonic inspection of a pressure vessel. It is emphasised that the minimum crack depths are those which would be relevant to a vessel in which the material is stressed to its maximum permitted value during operation. Stresses may, in practice, be significantly less than this. Less restrictive inspection standards may be established, if it were considered worthwhile to carry out a detailed stress analysis of the particular vessel under examination. (author)
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
Energy Technology Data Exchange (ETDEWEB)
Chen, Kho Chia; Kane, Ibrahim Lawal; Rahman, Haliza Abd [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310, Johor Bahru (Malaysia); Bahar, Arifah [UTM Centre for Industrial and Applied Mathematics (UTM-CIAM), Universiti Teknologi Malaysia, 81310, Johor Bahru and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310, Johor Bahru (Malaysia); Ting, Chee-Ming [Center for Biomedical Engineering, Universiti Teknologi Malaysia, 81310, Johor Bahru (Malaysia)
2015-02-03
In recent years, modeling of long memory properties, or fractionally integrated processes, in stochastic volatility has been applied to financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers structural breaks in the data in order to identify true long memory time series. Unlike the usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into one. This makes likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least squares estimator (LSE) and the quadratic generalized variations (QGV) method, respectively. Finally, the empirical distribution of the unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the model performs fairly well on the index prices of FTSE Bursa Malaysia KLCI.
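As a flavor of the drift-estimation step, the sketch below applies a least-squares drift estimator to a simulated ordinary (non-fractional) Ornstein-Uhlenbeck path; the paper's fractional case and the QGV volatility estimator are not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def ou_drift_lse(x, dt):
    """Least-squares drift estimate for dX = -theta X dt + sigma dW:
    regress increments on the state, theta_hat = -sum(x dX) / (sum(x^2) dt)."""
    dx = np.diff(x)
    xk = x[:-1]
    return -np.sum(xk * dx) / (np.sum(xk ** 2) * dt)

rng = np.random.default_rng(5)
theta, sigma, dt, n = 1.5, 0.3, 0.01, 20000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):                  # Euler-Maruyama simulation of the OU path
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

theta_hat = ou_drift_lse(x, dt)
```

In the fractional case the same regression idea applies, but the driving noise is fractional Brownian motion and the estimator's asymptotics change accordingly.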
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its
Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody
2018-04-01
To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n = 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995); a decrease that accelerated over time (P-interaction=0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction=0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.
Wang, Zhenzhong; Geng, Jianliang; Dai, Yi; Xiao, Wei; Yao, Xinsheng
2015-01-01
The broad applications and mechanism explorations of traditional Chinese medicine prescriptions (TCMPs) require a clear understanding of TCMP chemical constituents. In the present study, we describe an efficient and universally applicable analytical approach based on ultra-performance liquid chromatography coupled to electrospray ionization tandem quadrupole time-of-flight mass spectrometry (UPLC-ESI-Q/TOF-MS) with the MSE (E denotes collision energy) data acquisition mode, which allowed the rapid separation and reliable determination of TCMP chemical constituents. By monitoring diagnostic ions in the high energy function of MSE, target peaks of analogous compounds in TCMPs could be rapidly screened and identified. “Re-Du-Ning” injection (RDN), a therapeutic traditional Chinese medicine injection (TCMI) that has been widely used to reduce fever caused by viral infections in clinical practice, was studied as an example. In total, 90 compounds, including five new iridoids and one new sesquiterpene, were identified or tentatively characterized by accurate mass measurements within 5 ppm error. This analysis was accompanied by MS fragmentation and reference standard comparison analyses. Furthermore, the herbal sources of these compounds were unambiguously confirmed by comparing the extracted ion chromatograms (EICs) of RDN and ingredient herbal extracts. Our work provides a certain foundation for further studies of RDN. Moreover, the analytical approach developed herein has proven to be generally applicable for profiling the chemical constituents in TCMPs and other complicated mixtures. PMID:25875968
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
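The core idea, minimum distance estimation, chooses the parameter whose model CDF is closest to the empirical CDF. A toy sketch for a normal location parameter with a Cramer-von Mises-type distance and a grid search; the paper's algorithm and its random-effects setting are considerably more general:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cvm_distance(data, mu):
    """Cramer-von Mises-type distance between the empirical CDF and N(mu, 1)."""
    xs = sorted(data)
    n = len(xs)
    return sum((norm_cdf(x - mu) - (i + 0.5) / n) ** 2 for i, x in enumerate(xs)) / n

def min_distance_location(data, grid):
    """Minimum distance estimate: the mu whose model CDF is closest to the empirical CDF."""
    return min(grid, key=lambda mu: cvm_distance(data, mu))

random.seed(6)
data = [random.gauss(2.0, 1.0) for _ in range(400)]
grid = [i / 100 for i in range(100, 301)]   # candidate mu values in [1, 3]
mu_hat = min_distance_location(data, grid)
```

Minimum distance estimators of this type are attractive here because they stay well defined without a parametric likelihood for the common distribution.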
Feedback brake distribution control for minimum pitch
Tavernini, Davide; Velenis, Efstathios; Longo, Stefano
2017-06-01
The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
Estimating Climate Trends: Application to United States Plant Hardiness Zones
Directory of Open Access Journals (Sweden)
Nir Y. Krakauer
2012-01-01
Full Text Available The United States Department of Agriculture classifies plant hardiness zones based on mean annual minimum temperatures over some past period (currently 1976–2005). Since temperatures are changing, these values may benefit from updating. I outline a multistep methodology involving imputation of missing station values, geostatistical interpolation, and time series smoothing to update a climate variable’s expected value compared to a climatology period and apply it to estimating annual minimum temperature change over the coterminous United States. I show using hindcast experiments that trend estimation gives more accurate predictions of minimum temperatures 1-2 years in advance compared to the previous 30 years’ mean alone. I find that annual minimum temperature increased roughly 2.5 times faster than mean temperature (~2.0 K versus ~0.8 K since 1970), and is already an average of 1.2 ± 0.5 K (regionally up to ~2 K) above the 1976–2005 mean, so that much of the country belongs to warmer hardiness zones compared to the current map. The methods developed may also be applied to estimate changes in other climate variables and geographic regions.
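The hindcast comparison can be pictured with a toy version: predict next year's annual minimum temperature from a fitted linear trend versus from the trailing 30-year mean. A sketch on a synthetic warming series; the 0.05 K/yr rate and the noise level are invented, not the paper's estimates:

```python
import numpy as np

def predict_next(values, years):
    """One-year-ahead predictions: linear trend fit vs the trailing 30-year mean."""
    slope, intercept = np.polyfit(years, values, 1)
    trend_pred = slope * (years[-1] + 1) + intercept
    climatology_pred = np.mean(values[-30:])
    return trend_pred, climatology_pred

rng = np.random.default_rng(7)
years = np.arange(1976, 2006)
# Synthetic annual-minimum series warming at ~0.05 K/yr with interannual noise.
tmin = -10.0 + 0.05 * (years - years[0]) + 0.5 * rng.standard_normal(len(years))

trend_pred, clim_pred = predict_next(tmin, years)
```

Under a warming trend, the climatology forecast lags behind by roughly half the trend accumulated over the averaging window, which is the bias the paper's trend-based update removes.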
Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment.
Brown, Charles; And Others
1983-01-01
The study finds that a 10 percent increase in the federal minimum wage (or the coverage rate) would reduce teenage (16-19) employment by about one percent, which is at the lower end of the range of estimates from previous studies. (Author/SSH)
Minimum DNBR Prediction Using Artificial Intelligence
Energy Technology Data Exchange (ETDEWEB)
Kim, Dong Su; Kim, Ju Hyun; Na, Man Gyun [Chosun University, Gwangju (Korea, Republic of)
2011-05-15
The minimum DNBR (MDNBR), important for preventing boiling crisis and fuel clad melting, is a key safety factor that should be consistently monitored. Artificial intelligence methods have been extensively and successfully applied to nonlinear function approximation, such as the problem in question of predicting DNBR values. In this paper, a support vector regression (SVR) model and a fuzzy neural network (FNN) model are developed to predict the MDNBR using a number of measured signals from the reactor coolant system. The two models are trained using a training data set and verified against a test data set that does not include the training data. The proposed MDNBR estimation algorithms were verified using nuclear and thermal data acquired from many numerical simulations of the Yonggwang Nuclear Power Plant Unit 3 (YGN-3)
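As a stand-in for the SVR/FNN regressors, whose exact configurations are not given here, the sketch below fits a nonlinear signal-to-target map with plain numpy kernel ridge regression on made-up data; it illustrates the function-approximation task, not the actual MDNBR model or its plant signals:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    """Gaussian RBF kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(X, y, Xq, gamma=1.0, lam=1e-3):
    """Kernel ridge regression: a simple numpy stand-in for SVR-style regressors."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf_kernel(Xq, X, gamma) @ alpha

rng = np.random.default_rng(8)
# Hypothetical stand-in data: two "plant signals" mapped to a synthetic MDNBR-like target.
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.3 + 0.3 * np.sin(2 * X[:, 0]) - 0.2 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(200)

Xq = np.array([[0.2, -0.1]])
pred = fit_predict(X, y, Xq)[0]
truth = 1.3 + 0.3 * np.sin(0.4) - 0.2 * 0.01   # noiseless target at the query point
```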
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
Youth minimum wages and youth employment
Marimpi, Maria; Koning, Pierre
2018-01-01
This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median
Discretization of space and time: determining the values of minimum length and minimum time
Roatta , Luca
2017-01-01
Assuming that space and time can only have discrete values, we obtain the expression of the minimum length and the minimum time interval. These values are found to be exactly coincident with the Planck's length and the Planck's time but for the presence of h instead of ħ .
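The coincidence noted above can be written out explicitly with the standard Planck-unit definitions; substituting h = 2πħ rescales both values by √(2π) ≈ 2.51:

```latex
l_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.616\times10^{-35}\ \mathrm{m},
\qquad
t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.391\times10^{-44}\ \mathrm{s}.
```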
Minimum wage development in the Russian Federation
Bolsheva, Anna
2012-01-01
The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...
Improving boiler unit performance using an optimum robust minimum-order observer
Energy Technology Data Exchange (ETDEWEB)
Moradi, Hamed; Bakhtiari-Nejad, Firooz [Energy and Control Centre of Excellence, Department of Mechanical Engineering, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)
2011-03-15
To achieve a good performance of the utility boiler, dynamic variables such as drum pressure, steam temperature and drum water level must be controlled. In this paper, a linear time-invariant (LTI) model of a boiler system is considered in which the input variables are the feed-water and fuel mass rates. Due to the inaccessibility of some state variables of the boiler system, a minimum-order observer is designed based on Luenberger's model to obtain an estimate x̂ of the true state x. Low design cost and high accuracy of state estimation are the main advantages of the minimum-order observer, in comparison with previously designed full-order observers. By applying the observer to the closed-loop system, a regulator system is designed. Using an optimization routine developed in the MATLAB environment, desired observer poles are found such that suitable time response specifications of the boiler system are achieved and the gain and phase margin values are adjusted within an acceptable range. However, the real dynamic model may be associated with parametric uncertainties. In that case, the optimum region for the poles of the observer-based controller is found such that robust performance of the boiler system against model uncertainties is guaranteed. (author)
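For a concrete picture of the minimum-order (reduced-order) observer idea, the sketch below estimates the single inaccessible state of a made-up two-state plant with measured output y = x1; the numerical plant, gain, and input are illustrative, not the boiler model:

```python
import numpy as np

# Two-state plant: x1 is measured (y = x1), x2 is inaccessible and reconstructed
# by a minimum-order Luenberger observer of dimension one.
a11, a12, a21, a22 = -1.0, 1.0, 0.2, -0.8
b1, b2 = 0.0, 1.0
L_gain = 4.2                      # places the observer pole at a22 - L*a12 = -5

dt, T = 1e-3, 4.0
x = np.array([0.0, 2.0])          # true state; x2 initially unknown to the observer
z = 0.0                           # observer state, with x2_hat = z + L*y
errs = []
for k in range(int(T / dt)):
    t = k * dt
    u = np.sin(t)
    y = x[0]
    x2_hat = z + L_gain * y
    errs.append(abs(x[1] - x2_hat))
    # Euler updates of plant and observer; the error obeys e' = (a22 - L*a12) e.
    dx = np.array([a11 * x[0] + a12 * x[1] + b1 * u,
                   a21 * x[0] + a22 * x[1] + b2 * u])
    dz = ((a22 - L_gain * a12) * z
          + ((a22 - L_gain * a12) * L_gain + a21 - L_gain * a11) * y
          + (b2 - L_gain * b1) * u)
    x = x + dt * dx
    z = z + dt * dz
```

The observer only integrates a scalar, which is the "low design cost" advantage over a full-order observer of the same plant.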
Estimating right ventricular stroke work and the pulsatile work fraction in pulmonary hypertension.
Chemla, Denis; Castelain, Vincent; Zhu, Kaixian; Papelier, Yves; Creuzé, Nicolas; Hoette, Susana; Parent, Florence; Simonneau, Gérald; Humbert, Marc; Herve, Philippe
2013-05-01
The mean pulmonary artery pressure (mPAP) replaces the mean systolic ejection pressure (msePAP) in the classic formula for right ventricular stroke work, RVSW = (mPAP - RAP) × stroke volume, where RAP is mean right atrial pressure. Only the steady work is thus taken into account, not the pulsatile work, whereas the pulmonary circulation is highly pulsatile. Our retrospective, high-fidelity pressure study tested the hypothesis that msePAP is proportional to mPAP, and looked at the implications for RVSW. Eleven patients with severe, precapillary pulmonary hypertension (PH) (six patients with idiopathic pulmonary arterial hypertension and five with chronic thromboembolic PH; mPAP = 57 ± 10 mm Hg) were studied at rest and during mild to moderate exercise. Eight non-PH control subjects were also studied at rest (mPAP = 16 ± 2 mm Hg). The msePAP was averaged from end diastole to the dicrotic notch. In the full data set (53 pressure-flow points), mPAP ranged from 14 to 99.5 mm Hg, cardiac output from 2.38 to 11.1 L/min, and heart rate from 53 to 163 beats/min. There was a linear relationship between msePAP and mPAP (r² = 0.99). The msePAP matched 1.25 mPAP (bias, -0.5 ± 2.6 mm Hg). Results were similar in the resting non-PH group and in the resting and exercising PH groups. This implies that the classic formula markedly underestimates RVSW and that the pulsatile work may be a variable 20% to 55% fraction of RVSW, depending on RAP and mPAP. At rest, RVSW in patients with PH was twice as high as that of the non-PH group, but the pulsatile work fraction was similar between the two groups (26 ± 4% vs 24 ± 1%) because of the counterbalancing effects of high RAP (11 ± 5 mm Hg vs 4 ± 2 mm Hg), which increases the fraction, and high mPAP, which decreases the fraction. Our study favored the use of an improved formula that takes into account the variable pulsatile work fraction: RVSW = (1.25 mPAP - RAP) × stroke volume. Increased RAP and increased mPAP have opposite effects on the pulsatile work fraction.
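The correction amounts to replacing mPAP by 1.25 mPAP in the stroke-work formula. A tiny worked example; the mPAP and RAP values are taken near the study's PH group means, while the stroke volume is an assumed value not reported here:

```python
def rvsw_classic(mpap, rap, sv):
    """Classic formula: steady work only, (mPAP - RAP) * SV."""
    return (mpap - rap) * sv

def rvsw_improved(mpap, rap, sv):
    """Improved formula from the study, using msePAP ~ 1.25 * mPAP."""
    return (1.25 * mpap - rap) * sv

# Illustrative values (mm Hg, mm Hg, mL); SV is an assumption for the example.
mpap, rap, sv = 57.0, 11.0, 60.0
w_classic = rvsw_classic(mpap, rap, sv)
w_improved = rvsw_improved(mpap, rap, sv)
pulsatile_fraction = (w_improved - w_classic) / w_improved
```

With these numbers the pulsatile fraction comes out near the lower end of the 20% to 55% range quoted in the abstract.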
Minimum Energy Decentralized Estimation in a Wireless Sensor Network with Correlated Sensor Noises
Directory of Open Access Journals (Sweden)
Krasnopeev Alexey
2005-01-01
Full Text Available Consider the problem of estimating an unknown parameter by a sensor network with a fusion center (FC). Sensor observations are corrupted by additive noises with an arbitrary spatial correlation. Due to bandwidth and energy limitations, each sensor is only able to transmit a finite number of bits to the FC, while the latter must combine the received bits to estimate the unknown parameter. We require the decentralized estimator to have a mean-squared error (MSE) that is within a constant factor of that of the best linear unbiased estimator (BLUE). We minimize the total sensor transmitted energy by selecting sensor quantization levels, using knowledge of the noise covariance matrix, while meeting the target MSE requirement. Computer simulations show that our designs can achieve substantial energy savings compared to the uniform quantization strategy, whereby each sensor generates the same number of bits irrespective of the quality of its observation and the condition of its channel to the FC.
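The BLUE benchmark that the quantization design is measured against weights each sensor through the inverse noise covariance: θ̂ = (1ᵀΣ⁻¹y)/(1ᵀΣ⁻¹1). A hedged sketch comparing it with a plain average under correlated, unequal-variance noise; all numbers are illustrative:

```python
import numpy as np

def blue_estimate(y, Sigma):
    """BLUE of a common scalar theta from y = theta*1 + n, n ~ (0, Sigma):
    theta_hat = (1' Sigma^-1 y) / (1' Sigma^-1 1)."""
    ones = np.ones(len(y))
    w = np.linalg.solve(Sigma, ones)
    return (w @ y) / (w @ ones)

# Sensors with unequal noise variances and spatial correlation (illustrative numbers).
n, theta = 8, 3.0
v = np.linspace(0.1, 1.5, n)                                   # per-sensor noise variances
R = np.array([[0.5 ** abs(i - j) for j in range(n)] for i in range(n)])
Sigma = np.sqrt(np.outer(v, v)) * R
Lchol = np.linalg.cholesky(Sigma)

rng = np.random.default_rng(9)
err_blue = err_mean = 0.0
trials = 2000
for _ in range(trials):                                        # Monte Carlo MSE comparison
    y = theta + Lchol @ rng.standard_normal(n)
    err_blue += (blue_estimate(y, Sigma) - theta) ** 2
    err_mean += (np.mean(y) - theta) ** 2
err_blue /= trials
err_mean /= trials
```

The gap between the two errors is what makes covariance-aware bit allocation worthwhile: noisy, strongly correlated sensors contribute little to the BLUE and can be quantized coarsely.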
Energy Technology Data Exchange (ETDEWEB)
Dutton, Spencer M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisk, William J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-01-01
For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% as the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than the retail building model, resulting in significantly reduced demand for heating. School heating energy use was correspondingly less sensitive to changes in the minimum VR; the modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor-generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations to VRs illustrates the importance of avoiding particularly low VRs.
Minimum emittance of three-bend achromats
International Nuclear Information System (INIS)
Li Xiaoyu; Xu Gang
2012-01-01
Calculating the minimum emittance of three-bend achromats (TBAs) with mathematical software can ignore the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. The relationship between the lengths and the bending radii of the three dipoles in a TBA when the lattice achieves its minimum emittance is then derived, and so is the corresponding minimum scaling factor. The procedure of analysis and the results can be widely applied to achromat lattices, because the calculation is not restricted by the actual lattice. (authors)
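For reference, the standard single-cell and double-bend bounds that such scaling factors are usually quoted against can be written as (a sketch; $C_q$ is the quantum constant, $\gamma$ the Lorentz factor, $\theta$ the bending angle per dipole):

```latex
% Theoretical minimum emittance (TME) of a single cell, and the
% double-bend achromat (DBA) bound, both scaling as \theta^3:
\varepsilon_{\mathrm{TME}} = \frac{C_q\,\gamma^2\,\theta^3}{12\sqrt{15}},
\qquad
\varepsilon_{\mathrm{DBA}} = \frac{C_q\,\gamma^2\,\theta^3}{4\sqrt{15}},
\qquad
C_q \approx 3.83\times10^{-13}\,\mathrm{m}.
% The TBA scaling factors obtained in the paper multiply the
% C_q \gamma^2 \theta^3 dependence in the same way.
```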
Energy Technology Data Exchange (ETDEWEB)
Konijn, J; Malmskog, S
1962-06-15
The half-lives of different isotopes, activated by neutrons in the reactor R2-0 by means of a pneumatic rabbit, have been measured with a pulse height analyzer working in its multiscale mode of operation. A NaI(Tl)-scintillation spectrometer was used as detector. Least squares analyses, computed on the Mercury computer, were performed for each measurement. The following weighted mean values of the half-lives are obtained: {sup 6}He: 0.862 {+-} 0.017 sec.; {sup 16}N: 7.31 {+-} 0.04 sec.; {sup 19}O: 29.1 {+-} 0.3 sec.; {sup 20}F: 11.56 {+-} 0.05 sec.; {sup 28}Al: 2.31 {+-} 0.01 min.; {sup 77m}Se: 18.83 {+-} 0.04 sec.; and {sup 110}Ag: 24.42 {+-} 0.14 sec.
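A least-squares half-life fit of the kind described (log-linear, with Poisson-motivated weights) might look like the sketch below; the function name and the synthetic noiseless decay curve are assumptions for illustration, not the original Mercury-computer analysis:

```python
import numpy as np

def fit_half_life(t, counts):
    # Log-linear weighted least squares: log N(t) = log N0 - (ln 2 / T) t,
    # with weights proportional to counts (Poisson counting statistics).
    y = np.log(counts)
    W = np.diag(np.asarray(counts, dtype=float))
    A = np.vstack([np.ones_like(t), t]).T
    # Solve the weighted normal equations (A' W A) p = A' W y.
    intercept, slope = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return -np.log(2) / slope

# Synthetic noiseless decay curve with the 16N half-life quoted above.
t = np.linspace(0.0, 30.0, 16)
counts = 1e5 * np.exp(-np.log(2) / 7.31 * t)
```

On real multiscaler data the weights matter because early (high-count) channels carry smaller relative errors than late ones.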
30 CFR 57.19021 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0. (c) Tail...
30 CFR 56.19021 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0-0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0-0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...
Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification
Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan
2018-01-01
This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.
30 CFR 77.1431 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...
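The rope-strength formulas quoted in these regulations can be expressed directly (an illustrative helper; the function name is assumed):

```python
def minimum_rope_strength(static_load, length_ft, friction_drum=False):
    # Minimum breaking strength per the 30 CFR formulas quoted above.
    # Drum ropes:          Static Load * (7.0 - 0.001 L),  floor 4.0 * Static Load.
    # Friction drum ropes: Static Load * (7.0 - 0.0005 L), floor 5.0 * Static Load.
    if friction_drum:
        factor = max(7.0 - 0.0005 * length_ft, 5.0)
    else:
        factor = max(7.0 - 0.001 * length_ft, 4.0)
    return static_load * factor
```

The `max(...)` floor reproduces the regulations' "3,000 feet or greater" and "4,000 feet or greater" clauses, since the linear factor reaches exactly 4.0 and 5.0 at those lengths.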
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.
Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman
2013-10-21
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation
International Nuclear Information System (INIS)
Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman
2013-01-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (∼15–20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate K i and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final K i parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion
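The Patlak ordinary least squares step referred to above can be sketched as follows (a simplified voxel-level illustration on synthetic curves; Ki = 0.03 and V = 0.5 are assumed test values, not data from the study):

```python
import numpy as np

def patlak_ols(t, cp, ct):
    # OLS Patlak fit: with x(t) = (int_0^t Cp dτ) / Cp(t) and
    # y(t) = Ct(t) / Cp(t), the linear model is y = Ki * x + V.
    integ = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    x, y = integ / cp, ct / cp
    A = np.vstack([x, np.ones_like(x)]).T
    ki, v = np.linalg.lstsq(A, y, rcond=None)[0]
    return ki, v

# Synthetic input/tissue curves consistent with Ki = 0.03, V = 0.5.
t = np.linspace(0.1, 60.0, 120)
cp = np.exp(-0.05 * t) + 0.2
integ = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = 0.03 * integ + 0.5 * cp
ki, v = patlak_ols(t, cp, ct)
```

The hybrid framework proposed in the paper replaces this plain OLS slope with a correlation-driven regression, but the Patlak coordinates x and y are the same.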
A Phosphate Minimum in the Oxygen Minimum Zone (OMZ) off Peru
Paulmier, A.; Giraud, M.; Sudre, J.; Jonca, J.; Leon, V.; Moron, O.; Dewitte, B.; Lavik, G.; Grasse, P.; Frank, M.; Stramma, L.; Garcon, V.
2016-02-01
The Oxygen Minimum Zone (OMZ) off Peru is known to be associated with the advection of Equatorial SubSurface Waters (ESSW), rich in nutrients and poor in oxygen, through the Peru-Chile UnderCurrent (PCUC), but this circulation remains to be refined within the OMZ. During the Pelágico cruise in November-December 2010, measurements of phosphate revealed the presence of a phosphate minimum (Pmin) at various hydrographic stations, which could not be explained so far and could be associated with a specific water mass. This Pmin, localized at a relatively constant layer, shows a mean vertical phosphate decrease of 0.6 µM, though it is highly variable, between 0.1 and 2.2 µM. On average, these Pmin are associated with a predominant mixing of SubTropical Under- and Surface Waters (STUW and STSW: 20 and 40%, respectively) within ESSW (25%), complemented evenly by overlying (ESW, TSW: 8%) and underlying waters (AAIW, SPDW: 7%). The hypotheses and mechanisms leading to the Pmin formation in the OMZ are further explored and discussed, considering the physical regional contribution associated with various circulation pathways ventilating the OMZ and the local biogeochemical contribution, including potential diazotrophic activity.
Two-step estimation for inhomogeneous spatial point processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao
This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function, and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions, and we exemplify how the results may be applied in ecological studies of rain forests.
Energy expenditure, economic growth, and the minimum EROI of society
International Nuclear Information System (INIS)
Fizaine, Florian; Court, Victor
2016-01-01
We estimate energy expenditure for the US and world economies from 1850 to 2012. Periods of high energy expenditure relative to GDP (from 1850 to 1945), or spikes (1973–74 and 1978–79), are associated with low economic growth rates, and periods of low or falling energy expenditure are associated with high and rising economic growth rates (e.g. 1945–1973). Over the period 1960–2010, for which we have continuous year-to-year data for control variables (capital formation, population, and unemployment rate), we estimate that, statistically, in order to enjoy positive growth, the US economy cannot afford to spend more than 11% of its GDP on energy. Given the current energy intensity of the US economy, this translates into a minimum societal EROI of approximately 11:1 (or a maximum tolerable average price of energy of twice the current level). Granger tests consistently reveal a one-way causality running from the level of energy expenditure (as a fraction of GDP) to economic growth in the US between 1960 and 2010. A coherent economic policy should be founded on improving net energy efficiency. This would yield a “double dividend”: increased societal EROI (through decreased energy intensity of capital investment), and decreased sensitivity to energy price volatility. - Highlights: •We estimate energy expenditures as a fraction of GDP for the US, the world (1850–2012), and the UK (1300–2008). •Statistically speaking, the US economy cannot afford to allocate more than 11% of its GDP to energy expenditures in order to have a positive growth rate. •This corresponds to a maximum tolerable average price of energy of twice the current level. •In the same way, US growth is only possible if its primary energy system has at least a minimum EROI of approximately 11:1.
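The arithmetic linking the expenditure share to a minimum EROI can be written compactly (a first-order sketch, assuming the energy sector's monetary costs scale with the energy it invests):

```latex
% Energy expenditure share of GDP: s = (P_E \cdot E) / \mathrm{GDP}.
% If positive growth empirically requires s \le s_{\max} \approx 0.11, then
\mathrm{EROI}_{\min} \;\approx\; \frac{1}{s_{\max}}
  \;=\; \frac{1}{0.11} \;\approx\; 11\!:\!1.
```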
Near-Threshold Computing and Minimum Supply Voltage of Single-Rail MCML Circuits
Directory of Open Access Journals (Sweden)
Ruiping Cao
2014-01-01
Full Text Available In high-speed applications, MOS current mode logic (MCML) is a good alternative. Scaling down the supply voltage of MCML circuits can achieve a low power-delay product (PDP). However, almost all current MCML circuits are realized with a dual-rail scheme, where the series NMOS configuration limits the minimum supply voltage. In this paper, single-rail MCML (SRMCML) circuits are described, which avoid series-connected devices, since their logic evaluation block can be realized using only MOS devices in parallel. The relationship between the minimum supply voltage of SRMCML circuits and the model parameters of MOS transistors is derived, so that the minimum supply voltage can be estimated before circuit design. An MCML dynamic flip-flop based on SRMCML is also proposed. An optimization algorithm for near-threshold sequential circuits is presented, and a near-threshold SRMCML mod-10 counter based on this algorithm is verified. Scaling down the supply voltage of SRMCML circuits is also investigated. The power dissipation, delay, and power-delay products of these circuits are evaluated. The results show that near-threshold SRMCML circuits can obtain low delay and a small power-delay product.
Potential minimum cost of electricity of superconducting coil tokamak power reactors
International Nuclear Information System (INIS)
Reid, R.L.; Peng, Y-K. M.
1989-01-01
The potential minimum cost of electricity (COE) for superconducting tokamak power reactors is estimated by increasing the physics (confinement, beta limit, bootstrap current fraction) and technology (neutral beam energy, toroidal field (TF) coil allowable stresses, divertor heat flux, superconducting coil critical field, critical temperature, and quench temperature rise) constraints far beyond those assumed for ITER, until the point of diminishing returns is reached. A version of the TETRA systems code, calibrated with the ITER design and modified for power reactors, is used for this analysis, limiting this study to reactors with the same basic device configuration and costing algorithms as ITER. The minimum COE is reduced from >200 to about 80 mill/kWh when the allowable design constraints are raised to 2 times those of ITER. At 4 times the ITER allowables, a minimum COE of about 60 mill/kWh is obtained. The corresponding tokamak has a major radius of approximately 4 m, a plasma current close to 10 MA, an aspect ratio of 4, a confinement H-factor ≤ 3, a beta limit of approximately 2 times the first stability regime, a divertor heat flux of about 20 MW/m², a B_max ≤ 18 T, and a TF coil average current density about 3 times that of ITER. The design constraints that bound the minimum COE are the allowable stresses in the TF coil, the neutral beam energy, and the 99% bootstrap current (essentially free current drive). 14 refs., 4 figs., 2 tabs
Singular value decomposition based feature extraction technique for physiological signal analysis.
Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C
2012-06-01
Multiscale entropy (MSE) is a popular technique for calculating and describing the complexity of a physiological signal, and many studies use it to detect changes in physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract features of physiological signals, and a support vector machine (SVM) is used to classify the different physiological states. On a test data set from the PhysioNet website, using SVD to extract features of the physiological signal attained a classification accuracy of 89.157%, higher than that obtained using MSE values (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could serve as a reference for doctors in the diagnosis of congestive heart failure (CHF).
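An SVD-based feature extraction of the general kind described might be sketched as below (the sliding-window embedding is an assumed construction for illustration; the paper's exact preprocessing is not specified in the abstract):

```python
import numpy as np

def svd_features(signal, window, k=5):
    # Build a trajectory matrix of sliding windows, then keep the top-k
    # singular values as features summarizing the signal's structure.
    X = np.lib.stride_tricks.sliding_window_view(signal, window)
    return np.linalg.svd(X, compute_uv=False)[:k]

# A noisy sinusoid: the top two singular values capture the oscillation,
# the remainder mostly reflect the noise floor.
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 0.01 * np.arange(2000))
feat = svd_features(clean + 0.1 * rng.standard_normal(2000), window=50)
```

Because the leading singular values concentrate the signal's structured energy, such features are less sensitive to broadband noise than entropy-based measures, which is the motivation given in the abstract.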
Two-step estimation for inhomogeneous spatial point processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao
2009-01-01
The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function, and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.
12 CFR 564.4 - Minimum appraisal standards.
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum appraisal standards. 564.4 Section 564.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY APPRAISALS § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...
The minimum wage in the Czech enterprises
Eva Lajtkepová
2010-01-01
Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...
MORIKAWA Masayuki
2013-01-01
This paper, using prefecture level panel data, empirically analyzes 1) the recent evolution of price-adjusted regional minimum wages and 2) the effects of minimum wages on firm profitability. As a result of rapid increases in minimum wages in the metropolitan areas since 2007, the regional disparity of nominal minimum wages has been widening. However, the disparity of price-adjusted minimum wages has been shrinking. According to the analysis of the effects of minimum wages on profitability us...
A Note on the W-S Lower Bound of the MEE Estimation
Directory of Open Access Journals (Sweden)
Badong Chen
2014-02-01
Full Text Available The minimum error entropy (MEE) estimation is concerned with the estimation of a certain random variable (the unknown variable) based on another random variable (the observation), so that the entropy of the estimation error is minimized. This estimation method may outperform the well-known minimum mean square error (MMSE) estimation, especially in non-Gaussian situations. There is an important performance bound on MEE estimation, namely the W-S lower bound, which is computed as the conditional entropy of the unknown variable given the observation. Though it has been known in the literature for a considerable time, up to now there has been little study of this performance bound. In this paper, we reexamine the W-S lower bound. Some basic properties of the W-S lower bound are presented, and the characterization of the Gaussian distribution using the W-S lower bound is investigated.
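The bound discussed here can be stated compactly (a sketch using differential entropy $h$; $g$ is any measurable estimator):

```latex
% W--S lower bound on the error entropy of any estimator g:
% for the estimation error e = X - g(Y),
h\bigl(X - g(Y)\bigr) \;\ge\; h(X \mid Y),
% i.e. no estimator can make the (differential) entropy of its error
% smaller than the conditional entropy of the unknown given the observation.
```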
41 CFR 50-201.1101 - Minimum wages.
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wages. 50-201... Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 201-GENERAL REGULATIONS § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...
Information and crystal structure estimation
International Nuclear Information System (INIS)
Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.
1984-01-01
The conceptual foundations of a general information-theoretic approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure of information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)
Minimum Wage Laws and the Distribution of Employment.
Lang, Kevin
The desirability of raising the minimum wage has long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment: who gets the minimum wage job. An examination of employment in eating and drinking establishments…
29 CFR 505.3 - Prevailing minimum compensation.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Prevailing minimum compensation. 505.3 Section 505.3 Labor... HUMANITIES § 505.3 Prevailing minimum compensation. (a)(1) In the absence of an alternative determination...)(2) of this section, the prevailing minimum compensation required to be paid under the Act to the...
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set…
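For comparison, the averaged-periodogram baseline (Welch's method) mentioned above can be implemented in a few lines (a minimal sketch with a Hann window and 50% overlap; the signal and parameters are illustrative, not ultrasound data):

```python
import numpy as np

def welch_psd(x, seg_len, fs=1.0):
    # Averaged periodogram (Welch's method): Hann window, 50% overlap,
    # one-sided density scaling Pxx = |FFT|^2 / (fs * sum(win^2)).
    step = seg_len // 2
    win = np.hanning(seg_len)
    scale = fs * (win ** 2).sum()
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 for s in segs], axis=0) / scale
    return np.fft.rfftfreq(seg_len, 1.0 / fs), psd

# A 120 Hz tone sampled at 1 kHz: the PSD peak falls near 120 Hz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
freqs, psd = welch_psd(np.sin(2 * np.pi * 120.0 * t), seg_len=256, fs=fs)
```

The trade-off the paper targets is visible here: shortening `seg_len` improves temporal resolution but coarsens the frequency grid, which is what the adaptive BPC and BAPES methods aim to overcome.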
PERBANDINGAN ESTIMASI KEMAMPUAN LATEN ANTARA METODE MAKSIMUM LIKELIHOOD DAN METODE BAYES
Directory of Open Access Journals (Sweden)
Heri Retnawati
2015-10-01
Full Text Available This study aimed to compare the accuracy of estimation of latent ability (latent trait) in the logistic model between the joint maximum likelihood (ML) method and the Bayes method. The study used the Monte Carlo simulation method, with students' responses to the national junior secondary school (SMP) mathematics examination as the data model; the simulation variables were test length and number of examinees. Data were generated using SAS/IML with 40 replications, and each data set was estimated with ML and Bayes. The estimates were then compared with the true abilities by computing the mean squared error (MSE) and the correlation between the true latent abilities and the estimates; the method with the smaller MSE is considered the better estimation method. The results show that when estimating latent ability with 15, 20, 25, or 30 items and 500 or 1,000 examinees, the MSE is not yet stable, but with 1,500 examinees the ML and Bayes methods yield almost equally accurate ability estimates. With 15 or 20 items and 500, 1,000, or 1,500 examinees, the MSE is not yet stable, and when the estimation involves 25 or 30 items, whether with 500, 1,000, or 1,500 examinees, more accurate results are obtained with the ML method. Keywords: ability estimation, maximum likelihood method, Bayes method. [English title: THE COMPARISON OF ESTIMATION OF LATENT TRAITS USING MAXIMUM LIKELIHOOD AND BAYES METHODS]
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
29 CFR 4.159 - General minimum wage.
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...
About an adaptively weighted Kaplan-Meier estimate.
Plante, Jean-François
2009-09-01
The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the finite-sample performance of the weighted Kaplan-Meier estimate exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
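The product-limit estimate underlying both the usual and the weighted versions can be sketched as follows (an illustrative implementation; the adaptive weighting itself is not shown):

```python
import numpy as np

def kaplan_meier(times, events):
    # Product-limit (Kaplan-Meier) estimate; events: 1 = death observed,
    # 0 = right-censored. Returns (time, survival) pairs at death times.
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk      # survival drops by the hazard at t
        surv.append((t, s))
    return surv

# Five subjects, one censored at t = 2.
km = kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 1])
```

The step function this produces is what the adaptively weighted version smooths by borrowing strength from the other m - 1 populations.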
Directory of Open Access Journals (Sweden)
Xinying Xu
2018-06-01
Full Text Available In this paper, a novel data-driven single-neuron predictive control strategy is proposed for non-Gaussian networked control systems with metrology delays, in the information-theory framework. First, survival information potential (SIP), instead of minimum entropy, is used to formulate the performance index characterizing the randomness of the considered systems; it is calculated by an oversampling method. The minimum values are then computed by optimizing the SIP-based performance index. Finally, the proposed strategy, the minimum entropy method, and the mean square error (MSE) criterion are applied to a networked motor control system, and the results demonstrate the effectiveness of the proposed strategy.
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue; however, this comes at the cost of greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, subject to a constraint on false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Accuracy of prognosis estimates by four palliative care teams: a prospective cohort study
Directory of Open Access Journals (Sweden)
Costantini Massimo
2002-03-01
Full Text Available Abstract Background Prognosis estimates are used to access services, but are often inaccurate. This study aimed to determine the accuracy of giving a prognosis range. Methods and measurements A prospective cohort study in four multi-professional palliative care teams in England collected data on 275 consecutive cancer referrals who died. Prognosis estimates (minimum - maximum) at referral and patient characteristics were recorded by staff, and later compared with actual survival. Results Minimum survival estimates ranged Conclusions Offering a prognosis range has higher accuracy (about double that of traditional estimates) but is still very often inaccurate, except very close to death. Where possible, clinicians should discuss scenarios with patients, rather than giving a prognosis range.
Energy Technology Data Exchange (ETDEWEB)
Addai, Emmanuel Kwasi, E-mail: emmanueladdai41@yahoo.com; Gabel, Dieter; Krause, Ulrich
2016-04-15
Highlights: • Ignition sensitivity of a highly flammable dust decreases upon addition of inert dust. • Minimum ignition temperature of a highly flammable dust increases as the inert concentration increases. • Minimum ignition energy of a highly flammable dust increases as the inert concentration increases. • The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. - Abstract: The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation). This is achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experimental investigation was carried out with two laboratory-scale apparatuses: the Hartmann apparatus and the Godbert-Greenwald furnace for the minimum ignition energy and the minimum ignition temperature tests, respectively. This was achieved by mixing various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) and six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature until a threshold is reached at which no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
Minimum depth of investigation for grounded-wire TEM due to self-transients
Zhou, Nannan; Xue, Guoqiang
2018-05-01
The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying of the subsurface. However, it is commonly observed that the TEM signal is contaminated by the self-transient process that occurs during the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal is not applicable for reliable data processing and interpretation. Therefore, to achieve a more comprehensive understanding of the TEM method, it is necessary to study the self-transient process and, moreover, to develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behavior of the equivalent circuit of the TEM method and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is applied to establish the relationship between the minimum depth of investigation and various properties, including the resistivity of the earth, the offset, and the source length. This provides a guide for the design of survey parameters when grounded-wire TEM is applied to shallow detection. Finally, the approach is verified through application to a coal field in China.
Teaching the Minimum Wage in Econ 101 in Light of the New Economics of the Minimum Wage.
Krueger, Alan B.
2001-01-01
Argues that the recent controversy over the effect of the minimum wage on employment offers an opportunity for teaching introductory economics. Examines eight textbooks to determine topic coverage but finds little consensus. Describes how minimum wage effects should be taught. (RLH)
30 CFR 75.1431 - Minimum rope strength.
2010-07-01
..., including rotation resistant). For rope lengths less than 3,000 feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet...
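The drum-rope branch of the rule quoted above can be computed directly. A small sketch (the function name is illustrative, not from the regulation; the friction-drum case for lengths of 4,000 feet or greater is truncated in the excerpt and so is omitted here):

```python
def minimum_drum_rope_value(static_load, length_ft):
    """Minimum breaking-strength value for drum ropes per 30 CFR 75.1431(a).

    length_ft < 3000:  Static Load x (7.0 - 0.001 * L)
    length_ft >= 3000: Static Load x 4.0
    """
    factor = 7.0 - 0.001 * length_ft if length_ft < 3000 else 4.0
    return static_load * factor
```

For a 2,000-foot rope carrying a 1,000-unit static load, the factor is 7.0 - 2.0 = 5.0, so the minimum value is 5,000.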
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
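The CRB logic described above can be illustrated on a toy problem rather than the paper's bio-optical model: for estimating a constant from N Gaussian-noise samples, the bound sigma^2/N is attained by the sample mean. A minimal sketch (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, theta = 0.5, 100, 2.0

# Fisher information for y_i = theta + N(0, sigma^2) is n / sigma^2,
# so the CRB (minimum variance of any unbiased estimator) is:
crb = sigma ** 2 / n

# the sample mean attains the bound: check empirically over many trials
trials = 5000
estimates = rng.normal(theta, sigma, size=(trials, n)).mean(axis=1)
emp_var = float(estimates.var())
```

The empirical variance of the estimator comes out close to the bound, mirroring how the paper checks CRBs against experimental estimation variances without performing an inversion.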
Directory of Open Access Journals (Sweden)
Darko Medved
2015-01-01
Full Text Available With the introduction of Solvency II, a consistent market approach to the valuation of insurance assets and liabilities is required. For the best estimate of life annuity provisions, one should estimate the longevity risk of the insured population in Slovenia. In this paper, the current minimum standard in Slovenia for calculating pension annuities is tested using the Lee-Carter model. In particular, the mortality of the Slovenian population is projected using the best fit from the stochastic mortality projection method. The projected mortality statistics are then corrected for the selection effect and compared with the current minimum standard.
The impact of a federal cigarette minimum pack price policy on cigarette use in the USA.
Doogan, Nathan J; Wewers, Mary Ellen; Berman, Micah
2018-03-01
Increasing cigarette prices reduces cigarette use. The US Food and Drug Administration has the authority to regulate the sale and promotion, and therefore the price, of tobacco products. To examine the potential effect of federal minimum price regulation on the sales of cigarettes in the USA, we used yearly state-level data from the Tax Burden on Tobacco and other sources to model per capita cigarette sales as a function of price. We used the fitted model to compare status quo sales with counterfactual scenarios in which a federal minimum price was set. The minimum price scenarios ranged from $0 to $12. The estimated price effect in our model was comparable with that found in the literature. Our counterfactual analyses suggested that the impact of a minimum price requirement could range from a minimal effect at the $4 level to a reduction of 5.7 billion packs sold per year and 10 million smokers at the $10 level. A federal minimum price policy has the potential to greatly benefit tobacco control and public health by uniformly increasing the price of cigarettes and by eliminating many price-reducing strategies currently available to both sellers and consumers.
Long-Term Capital Goods Importation and Minimum Wage Relationship in Turkey: Bounds Testing Approach
Directory of Open Access Journals (Sweden)
Tastan Serkan
2015-04-01
Full Text Available In order to examine the long-term relationship between capital goods importation and the minimum wage, the autoregressive distributed lag (ARDL) bounds testing approach to cointegration is used in this study. According to the bounds test results, a cointegration relation exists between capital goods importation and the minimum wage. Therefore an ARDL(4,0) model is estimated in order to determine the long- and short-term relations between the variables. According to the empirical analysis, there is a positive and significant relationship between capital goods importation and the minimum wage in Turkey in the long term. A 1% increase in the minimum wage leads to a 0.8% increase in capital goods importation in the long term. The result is similar for the short-term coefficients: the relationship observed in the long term is preserved in the short term, though at a lower level. In terms of the error correction model, it can be concluded that the error correction mechanism works, as the error correction term is negative and significant. Short-term deviations are resolved through the error correction mechanism in the long term. Accordingly, approximately 75% of any deviation from equilibrium arising in the previous six-month period is resolved in the current six-month period. This means that the return to long-term equilibrium progresses rapidly.
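The claim that about 75% of a disequilibrium is resolved each six-month period follows directly from the error correction coefficient: the remaining gap shrinks by a factor of (1 - alpha) per period. A small sketch (the numbers are illustrative):

```python
def remaining_deviation(initial_gap, alpha, periods):
    """Fraction of a disequilibrium gap left after `periods` adjustment
    periods, given error-correction speed `alpha` (e.g. 0.75)."""
    return initial_gap * (1.0 - alpha) ** periods
```

With alpha = 0.75, only 25% of a shock survives one six-month period and about 6% survives two, which is why the text describes the return to equilibrium as rapid.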
Optimal estimation of the optomechanical coupling strength
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
30 CFR 281.30 - Minimum royalty.
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 281.30 Section 281.30 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF Financial Considerations § 281.30 Minimum royalty...
Forecasting Demand for Paprika (Capsicum annuum) at PT Bimandiri Agro Sedaya, Lembang
Directory of Open Access Journals (Sweden)
Puji Rahmawati Nurcahyani
2016-11-01
Full Text Available PT Bimandiri Agro Sedaya is a non-manufacturing (services) company operating in the field of trade as a supplier of fresh vegetables to retail. In December 2013, fulfillment of demand for red, yellow and green paprika was 70.09%, 70.24% and 73.95% respectively, so an accurate demand-forecasting method is needed to estimate paprika demand early. The data are the demand for red, yellow and green paprika commodities from September to December 2013. Analysis of the data pattern by the least squares method and the autocorrelation function shows that the data have a stationary pattern, so the moving average, single exponential smoothing and ARIMA methods were used. The results show that the ARIMA method has the lowest MSE for all paprika types: ARIMA(1,1,2) with an MSE of 434.7 for red paprika, ARIMA(2,1,3) with an MSE of 164.4 for yellow paprika, and ARIMA(1,0,1) with an MSE of 321.9 for green paprika.
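Selecting a forecasting method by in-sample MSE, as described above, can be sketched with single exponential smoothing on synthetic data (ARIMA fitting is omitted; the demand series here is illustrative, not the paprika data):

```python
import numpy as np

def ses_forecasts(y, alpha):
    """One-step-ahead single exponential smoothing forecasts."""
    y = np.asarray(y, dtype=float)
    f = np.empty_like(y)
    f[0] = y[0]                                   # initialize with the first observation
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def mse(y, f):
    y, f = np.asarray(y, float), np.asarray(f, float)
    return float(np.mean((y - f) ** 2))

# pick the smoothing constant with the lowest in-sample MSE
demand = [70, 74, 71, 77, 73, 75, 72, 78]
best_alpha = min((a / 10 for a in range(1, 10)),
                 key=lambda a: mse(demand, ses_forecasts(demand, a)))
```

The same MSE comparison can be run across candidate methods (moving average, smoothing, ARIMA) to pick the one with the lowest error, as the study does.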
Kesselmeier, Miriam; Lorenzo Bermejo, Justo
2017-11-01
Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at an unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber function and extended the R package 'robustbase' with the re-descending Hampel function to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and the investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power, but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants.
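The Huber down-weighting used above replaces the unit weight of standard maximum likelihood with a weight that shrinks for large residuals. A minimal sketch of the weight function alone (c = 1.345 is the conventional tuning constant; the full robust regression machinery is not reproduced here):

```python
def huber_weight(r, c=1.345):
    """Weight applied to a residual r: 1 inside [-c, c], c/|r| outside,
    so outliers contribute with bounded influence."""
    r = abs(r)
    return 1.0 if r <= c else c / r
```

A subject with a residual of 2.69 (twice the cutoff) gets weight 0.5, while well-fitting subjects keep full weight, which is how outlier patients such as unusually young cases are constrained.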
State cigarette minimum price laws - United States, 2009.
2010-04-09
Cigarette price increases reduce the demand for cigarettes and thereby reduce smoking prevalence, cigarette consumption, and youth initiation of smoking. Excise tax increases are the most effective government intervention to increase the price of cigarettes, but cigarette manufacturers use trade discounts, coupons, and other promotions to counteract the effects of these tax increases and appeal to price-sensitive smokers. State cigarette minimum price laws, initiated by states in the 1940s and 1950s to protect tobacco retailers from predatory business practices, typically require a minimum percentage markup to be added to the wholesale and/or retail price. If a statute prohibits trade discounts from the minimum price calculation, these laws have the potential to counteract discounting by cigarette manufacturers. To assess the status of cigarette minimum price laws in the United States, CDC surveyed state statutes and identified those states with minimum price laws in effect as of December 31, 2009. This report summarizes the results of that survey, which determined that 25 states had minimum price laws for cigarettes (median wholesale markup: 4.00%; median retail markup: 8.00%), and seven of those states also expressly prohibited the use of trade discounts in the minimum retail price calculation. Minimum price laws can help prevent trade discounting from eroding the positive effects of state excise tax increases and higher cigarette prices on public health.
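The markup mechanics of a minimum price law can be sketched with the median markups reported above (4% wholesale, 8% retail; the function name and defaults are illustrative):

```python
def minimum_retail_price(wholesale_cost, wholesale_markup=0.04, retail_markup=0.08):
    """Minimum shelf price under a percentage-markup MPL: the wholesale
    markup is applied first, then the retail markup on top of it."""
    wholesale_price = wholesale_cost * (1 + wholesale_markup)
    return wholesale_price * (1 + retail_markup)
```

On a $100 wholesale cost, the compounded markups yield a $112.32 minimum retail price; if trade discounts may be subtracted before the markup is applied, the effective floor drops, which is why some states expressly prohibit them.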
The application of backpropagation neural network method to estimate the sediment loads
Directory of Open Access Journals (Sweden)
Ari Gunawan Taufik
2017-01-01
Full Text Available Nearly all formulations of conventional sediment load estimation methods were developed from laboratory or field data. This approach is generally limited to local conditions, so it is only suitable for a particular river typology. Previous studies show that sediment load tends to be non-linear with respect to the hydraulic parameters and the accompanying sediment parameters. The dominant parameter is turbulence, in which the flow-velocity vector components in the x, y and z directions are affected by the 3D morphology of the vertical and horizontal cross sections of the water body. This study addresses the non-linear relationship between the hydraulic and sediment parameter data and the sediment load data by applying the artificial neural network (ANN) method. The method used is the backpropagation neural network (BPNN) scheme, which projects the sediment load from the hydraulic and sediment parameters used in conventional sediment load estimation. The results show that the BPNN model performs reasonably well against the conventional calculation, indicated by the stability of the correlation coefficient (R) and the mean square error (MSE).
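A minimal backpropagation network of the kind described can be sketched in NumPy, showing the forward pass, the MSE gradient, and the weight updates. The data here are synthetic stand-ins, not the river measurements, and the architecture (one tanh hidden layer) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))      # stand-in for hydraulic/sediment inputs
y = X[:, :1] ** 2 + 0.5 * X[:, 1:]             # synthetic non-linear target

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                   # hidden layer activations
    return h, h @ W2 + b2                      # linear output layer

_, pred0 = forward(X)
mse_before = float(((pred0 - y) ** 2).mean())

lr = 0.2
for _ in range(3000):
    h, pred = forward(X)
    err = pred - y                             # dLoss/dpred for MSE loss
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1             # gradient-descent updates
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse_after = float(((pred - y) ** 2).mean())
```

Training drives the MSE well below its initial value, which is the same criterion (together with R) that the study uses to judge the BPNN against the conventional calculation.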
Reassessing Wind Potential Estimates for India: Economic and Policy Implications
Energy Technology Data Exchange (ETDEWEB)
Phadke, Amol; Bharvirkar, Ranjit; Khangura, Jagmeet
2011-09-15
We assess developable on-shore wind potential in India at three different hub-heights and under two sensitivity scenarios – one with no farmland included, the other with all farmland included. Under the “no farmland included” case, the total wind potential in India ranges from 748 GW at 80m hub-height to 976 GW at 120m hub-height. Under the “all farmland included” case, the potential with a minimum capacity factor of 20 percent ranges from 984 GW to 1,549 GW. High quality wind energy sites, at 80m hub-height with a minimum capacity factor of 25 percent, have a potential between 253 GW (no farmland included) and 306 GW (all farmland included). Our estimates are more than 15 times the current official estimate of wind energy potential in India (estimated at 50m hub height) and are about one tenth of the official estimate of the wind energy potential in the US.
International Nuclear Information System (INIS)
El-Shanshoury, Gh.I.
2015-01-01
Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, which can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests, involving Kolmogorov-Smirnov and Anderson-Darling, are used for checking the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated as performance indicators to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicated that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable, accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators showed the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October.
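The Method of Moments fit for the Gumbel case above has a closed form, and the design value for a given return period is the standard Gumbel quantile. A small sketch (the sample moments are illustrative, not Dabaa data):

```python
import math

EULER_GAMMA = 0.5772156649

def gumbel_mom(sample_mean, sample_std):
    """Gumbel (maxima) location/scale estimated by the Method of Moments."""
    beta = sample_std * math.sqrt(6.0) / math.pi   # scale parameter
    mu = sample_mean - EULER_GAMMA * beta          # location parameter
    return mu, beta

def return_level(mu, beta, T):
    """Design value exceeded on average once every T periods
    (the Gumbel quantile at probability 1 - 1/T)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

For annual maxima with mean 30 and standard deviation 2, the 100-year design value is larger than the 10-year one, as the return-period formulation requires.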
9 CFR 147.51 - Authorized laboratory minimum requirements.
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Authorized laboratory minimum requirements. 147.51 Section 147.51 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE... Authorized Laboratories and Approved Tests § 147.51 Authorized laboratory minimum requirements. These minimum...
A Unified Approach to Restoration, Deinterlacing and Resolution Enhancement in Decoding MPEG-2 Video
DEFF Research Database (Denmark)
Forchhammer, Søren; Martins, Bo
2002-01-01
to the desired format. The processing involves an estimated quality of individual pixels based on MPEG image type and local quantization value. The mean-squared error (MSE) is reduced, compared to the directly decoded sequence, and annoying ringing artifacts, including mosquito noise, are effectively suppressed....... The superresolution pictures obtained by the algorithm are of much higher visual quality and have lower MSE than superresolution pictures obtained by simple spatial interpolation....
Minimum Price Guarantees In a Consumer Search Model
M.C.W. Janssen (Maarten); A. Parakhonyak (Alexei)
2009-01-01
This paper is the first to examine the effect of minimum price guarantees in a sequential search model. Minimum price guarantees are not advertised and only known to consumers when they come to the shop. We show that in such an environment, minimum price guarantees increase the value of
Vertical and horizontal extension of the oxygen minimum zone in the eastern South Pacific Ocean
Fuenzalida, Rosalino; Schneider, Wolfgang; Garcés-Vargas, José; Bravo, Luis; Lange, Carina
2009-07-01
Recent hydrographic measurements within the eastern South Pacific (1999-2001) were combined with vertically high-resolution data from the World Ocean Circulation Experiment, high-resolution profiles and bottle casts from the World Ocean Database 2001, and the World Ocean Atlas 2001 in order to evaluate the vertical and horizontal extension of the oxygen minimum zone (oxygen minimum zone to be 9.82±3.60×10⁶ km² and 2.18±0.66×10⁶ km³, respectively. The oxygen minimum zone is thickest (>600 m) off Peru between 5 and 13°S and to about 1000 km offshore. Its upper boundary is shallowest (zone in some places. Offshore, the thickness and meridional extent of the oxygen minimum zone decrease until it finally vanishes at 140°W between 2° and 8°S. Moving southward along the coast of South America, the zonal extension of the oxygen minimum zone gradually diminishes from 3000 km (15°S) to 1200 km (20°S) and then to 25 km (30°S); only a thin band is detected at ~37°S off Concepción, Chile. Simultaneously, the oxygen minimum zone's maximum thickness decreases from 300 m (20°S) to less than 50 m (south of 30°S). The spatial distribution of Ekman suction velocity and oxygen minimum zone thickness correlate well, especially in the core. Off Chile, the eastern South Pacific Intermediate Water mass introduces increased vertical stability into the upper water column, complicating ventilation of the oxygen minimum zone from above. In addition, oxygen-enriched Antarctic Intermediate Water clashes with the oxygen minimum zone at around 30°S, causing a pronounced sub-surface oxygen front. The new estimates of vertical and horizontal oxygen minimum zone distribution in the eastern South Pacific complement the global quantification of naturally hypoxic continental margins by Helly and Levin [2004. Global distribution of naturally occurring marine hypoxia on continental margins. Deep-Sea Research I 51, 1159-1168] and provide new baseline data useful for studies on the
Wage inequality, minimum wage effects and spillovers
Stewart, Mark B.
2011-01-01
This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...
Free Magnetic Energy in Solar Active Regions above the Minimum-Energy Relaxed State
Regnier, S.; Priest, E. R.
2008-01-01
To understand the physics of solar flares, including the local reorganization of the magnetic field and the acceleration of energetic particles, we have first to estimate the free magnetic energy available for such phenomena, which can be converted into kinetic and thermal energy. The free magnetic energy is the excess energy of a magnetic configuration compared to the minimum-energy state, which is a linear force-free field if the magnetic helicity of the configuration is conserved. We inves...
Minimum radwaste system to support commercial operation-what equipment can be deferred
International Nuclear Information System (INIS)
Marshall, R.W.; Tafazzoli, M.M.
1984-01-01
Because of cash flow problems being experienced by utilities as nuclear power stations approach completion, areas of the plant for which the completion of the construction effort could be deferred past commercial operation should be reviewed. The radwaste treatment systems are prime candidates for such a deferral because of the availability, either temporary or permanent, of alternative treatment methods for the waste streams expected to be produced. In order to identify the radwaste equipment, components and associated hardware in the radwaste building whose installation could be deferred past commercial operation, a study was performed by Impell Corporation to evaluate the existing radwaste treatment system and determine the minimum system necessary to support commercial operation of a typical BWR. The study identified the minimum installed radwaste treatment system which, in combination with portable temporary equipment, would accommodate the waste types and quantities likely to be produced in the first few years of operation. In addition, the minimum installed system had to be licensable, and excessive radiation exposures should not be incurred during construction of the deferred portions of the system after commercial operation. From this study, it was concluded that a significant quantity of radwaste processing equipment and the associated piping, valves and instrumentation could be deferred. The estimated savings, in construction man-hours (excluding field distributables) alone, were over 102,000 man-hours.
Minimum Covers of Fixed Cardinality in Weighted Graphs.
White, Lee J.
Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…
Robust and bias-corrected estimation of the coefficient of tail dependence
DEFF Research Database (Denmark)
Dutang, C.; Goegebeur, Y.; Guillou, A.
2014-01-01
We introduce a robust and asymptotically unbiased estimator for the coefficient of tail dependence in multivariate extreme value statistics. The estimator is obtained by fitting a second order model to the data by means of the minimum density power divergence criterion. The asymptotic properties ...
The complexity of computing the MCD-estimator
DEFF Research Database (Denmark)
Bernholt, T.; Fischer, Paul
2004-01-01
In modern statistics the robust estimation of parameters is a central problem, i.e., estimation that is unaffected, or only slightly affected, by outliers in the data. The minimum covariance determinant (MCD) estimator (J. Amer. Statist. Assoc. 79 (1984) 871) is probably one of the most important robust...... estimators of location and scatter. The complexity of computing the MCD, however, was unknown and generally thought to be exponential even if the dimensionality of the data is fixed. Here we present a polynomial time algorithm for MCD for fixed dimension of the data. In contrast, we show that computing...... the MCD-estimator is NP-hard if the dimension varies. (C) 2004 Elsevier B.V. All rights reserved....
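The MCD objective is easy to state concretely: among all size-h subsets of the data, find the one whose sample covariance has minimum determinant. The brute-force sketch below (pure NumPy, feasible only for toy sample sizes; the data and names are illustrative, not from the paper) makes the exponential cost of the naive approach tangible:

```python
import numpy as np
from itertools import combinations

def mcd_brute_force(X, h):
    """Exhaustive MCD: among all h-point subsets of X, pick the one whose
    sample covariance has the smallest determinant (toy sizes only)."""
    best_det, best_subset = np.inf, None
    for idx in combinations(range(len(X)), h):
        det = np.linalg.det(np.cov(X[list(idx)].T))
        if det < best_det:
            best_det, best_subset = det, idx
    location = X[list(best_subset)].mean(axis=0)  # robust location estimate
    return location, best_det, best_subset

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(12, 2))
X[0] = [10.0, 10.0]                    # one gross outlier
location, det_min, subset = mcd_brute_force(X, h=9)
print(0 not in subset)                 # the outlier is left out of the subset
```

Even at n = 12, h = 9 the search already visits 220 subsets; the paper's contribution is precisely to avoid this combinatorial explosion for fixed dimension.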
Impact of cigarette minimum price laws on the retail price of cigarettes in the USA.
Tynan, Michael A; Ribisl, Kurt M; Loomis, Brett R
2013-05-01
Cigarette price increases prevent youth initiation, reduce cigarette consumption and increase the number of smokers who quit. Cigarette minimum price laws (MPLs), which typically require cigarette wholesalers and retailers to charge a minimum percentage mark-up for cigarette sales, have been identified as an intervention that can potentially increase cigarette prices. 24 states and the District of Columbia have cigarette MPLs. Using data extracted from SCANTRACK retail scanner data from the Nielsen company, average cigarette prices were calculated for designated market areas in states with and without MPLs in three retail channels: grocery stores, drug stores and convenience stores. Regression models were estimated using the average cigarette pack price in each designated market area and calendar quarter in 2009 as the outcome variable. The average difference in cigarette pack prices is 46 cents in the grocery channel, 29 cents in the drug channel and 13 cents in the convenience channel, with prices being lower in states with MPLs for all three channels. The finding that MPLs do not raise cigarette prices could be the result of a lack of compliance and enforcement by the state or could be attributed to the minimum state mark-up being lower than the free-market mark-up for cigarettes. Rather than require a minimum mark-up, which can be nullified by promotional incentives and discounts, states and countries could strengthen MPLs by setting a simple 'floor price' that is the true minimum price for all cigarettes or could prohibit discounts to consumers and retailers.
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.; Cañette, Isabel
2011-01-01
to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article
Sparse EEG/MEG source estimation via a group lasso.
Directory of Open Access Journals (Sweden)
Michael Lim
Full Text Available Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches.
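The ℓ2 minimum-norm baseline that the group lasso is compared against has a closed form. The sketch below is a minimal Tikhonov-regularized version with a random placeholder leadfield (the dimensions and regularization value are assumptions, not the paper's setup):

```python
import numpy as np

def minimum_norm_inverse(L, y, lam=1e-6):
    """Tikhonov-regularized l2 minimum-norm estimate:
    x_hat = L^T (L L^T + lam I)^{-1} y, with L the sensors-by-sources leadfield."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, y)

rng = np.random.default_rng(1)
L = rng.normal(size=(32, 200))   # hypothetical leadfield: 32 sensors, 200 sources
x_true = np.zeros(200)
x_true[17] = 1.0                 # a single active source
y = L @ x_true                   # noiseless sensor data
x_hat = minimum_norm_inverse(L, y)
print(np.linalg.norm(x_hat) <= np.linalg.norm(x_true))  # minimum-norm property
```

Because the problem is underdetermined (200 sources, 32 sensors), the recovered source pattern is smeared over many locations; that diffuseness is exactly what the sparse group-lasso prior is meant to counteract.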
The minimum wage in the Czech enterprises
Directory of Open Access Journals (Sweden)
Eva Lajtkepová
2010-01-01
Full Text Available Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys of the acceptance of the statutory minimum wage by Czech enterprises. The first survey uses data collected by questionnaire in 83 small and medium-sized enterprises in the South Moravia Region in 2005, the second uses data from 116 enterprises in the entire Czech Republic (in 2007). The data have been processed by means of the standard methods of descriptive statistics and of the appropriate statistical analyses (Spearman's rank correlation coefficient, Kendall coefficient, χ² test of independence, Kruskal-Wallis test, and others).
How unprecedented a solar minimum was it?
Russell, C T; Jian, L K; Luhmann, J G
2013-05-01
The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.
SphinX MEASUREMENTS OF THE 2009 SOLAR MINIMUM X-RAY EMISSION
Sylwester, J.; Kowalinski, M.; Gburek, S.; Siarkowski, M.; Kuzin, S.; Farnik, F.; Reale, F.; Phillips, K. J. H.; Bakala, J.; Gryciuk, M.; Podgorski, P.; Sylwester, B.
2012-01-01
The SphinX X-ray spectrophotometer on the CORONAS-PHOTON spacecraft measured soft X-ray emission in the 1-15 keV energy range during the deep solar minimum of 2009 with a sensitivity much greater than GOES. Several intervals are identified when the X-ray flux was exceptionally low, and the flux and solar X-ray luminosity are estimated. Spectral fits to the emission at these times give temperatures of 1.7-1.9 MK and emission measures between 4 x 10^47 cm^-3 and 1.1 x 10^48 cm^-3. Comparing Sph...
Searching for top, Higgs, and supersymmetry: the minimum invariant mass technique
International Nuclear Information System (INIS)
Berger, E.L.
1984-01-01
Supersymmetric particles, Higgs mesons, the top quark and other heavy objects are expected to decay frequently into three- or more-body final states in which at least one particle, such as a neutrino or photino, is non-interacting. A method is described for obtaining an excellent estimate of both the mass and the longitudinal momentum of the parent state. The probable longitudinal momenta of the non-interacting particle and of the parent, and the minimum invariant mass of the parent, are derived from a minimization procedure. The distributions in these variables are shown to peak sharply at their true values.
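The minimization described above can be sketched numerically: scan the unmeasured longitudinal momentum of the non-interacting particle and keep the smallest invariant mass of the reconstructed parent. The kinematics, scan grid, and massless-particle choice below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def min_parent_mass(p_vis, met_xy, m_invisible=0.0):
    """Scan the unmeasured longitudinal momentum of the invisible particle and
    return the minimum invariant mass of the visible+invisible system.
    p_vis = (E, px, py, pz) of the visible system; met_xy = missing pT."""
    E_v, px_v, py_v, pz_v = p_vis
    mex, mey = met_xy
    best = np.inf
    for pz_i in np.linspace(-500.0, 500.0, 20001):
        E_i = np.sqrt(m_invisible**2 + mex**2 + mey**2 + pz_i**2)
        E, px, py, pz = E_v + E_i, px_v + mex, py_v + mey, pz_v + pz_i
        m2 = E**2 - px**2 - py**2 - pz**2
        best = min(best, np.sqrt(max(m2, 0.0)))
    return best

# A massless visible system recoiling against a massless invisible particle:
p_vis = (50.0, 30.0, 0.0, 40.0)
met = (-30.0, 0.0)
print(round(min_parent_mass(p_vis, met), 2))  # → 60.0
```

The minimum lands where the invisible particle's rapidity matches the visible system's, reproducing the familiar transverse-mass bound; for a true parent of mass 100 in this configuration the scan returns 60, a lower bound rather than the parent mass itself.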
Minimum-Cost Reachability for Priced Timed Automata
DEFF Research Database (Denmark)
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine...... the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques...
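On a finite weighted graph (rather than the infinite state space of priced timed automata), minimum-cost reachability reduces to uniform-cost search. The toy below conveys the flavor of the problem only; it is not the priced-zone branch-and-bound algorithm of the paper:

```python
import heapq

def min_cost_reachability(edges, start, target):
    """Uniform-cost (Dijkstra-style) search on a finite priced transition
    graph: returns the minimum total price from start to target, or None
    if the target is unreachable."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        cost, state = heapq.heappop(pq)
        if state == target:
            return cost
        if cost > dist.get(state, float('inf')):
            continue                      # stale queue entry
        for nxt, price in edges.get(state, []):
            new_cost = cost + price
            if new_cost < dist.get(nxt, float('inf')):
                dist[nxt] = new_cost
                heapq.heappush(pq, (new_cost, nxt))
    return None

edges = {'init': [('a', 2), ('b', 5)], 'a': [('goal', 4)], 'b': [('goal', 1)]}
print(min_cost_reachability(edges, 'init', 'goal'))  # → 6
```

The hard part the paper addresses is that in a priced *timed* automaton the "states" are uncountable (location, clock valuation) pairs, so the search must run over symbolic priced zones instead of concrete states.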
Minimum Q Electrically Small Antennas
DEFF Research Database (Denmark)
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions...... for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q....
Stochastic variational approach to minimum uncertainty states
Energy Technology Data Exchange (ETDEWEB)
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
The Application of Artificial Neural Networks to Ore Reserve Estimation at Choghart Iron Ore Deposit
Directory of Open Access Journals (Sweden)
Seyyed Ali Nezamolhosseini
2017-01-01
Full Text Available Geo-statistical methods for reserve estimation are difficult to use when stationary conditions are not satisfied. Artificial Neural Networks (ANNs) provide an alternative to geo-statistical techniques while considerably reducing the processing time required for development and application. In this paper, ANNs were applied to the Choghart iron ore deposit in Yazd province of Iran. Initially, an optimum Multi Layer Perceptron (MLP) was constructed to estimate the Fe grade within the orebody using the whole ore data of the deposit. Sensitivity analysis was applied to the number of hidden layers and neurons, different types of activation functions and learning rules. The optimal architecture for iron grade estimation was 3-20-10-1. In order to improve the network performance, the deposit was divided into four homogenous zones. Subsequently, all sensitivity analyses were carried out on each zone. Finally, a different optimum network was trained and Fe was estimated separately for each zone. Comparison of the correlation coefficient (R) and least mean squared error (MSE) showed that the ANNs performed on four homogenous zones were far better than the nets applied to the overall ore body. Therefore, these optimized neural networks were used to estimate the distribution of iron grades and the iron resource in the Choghart deposit. As a result of applying ANNs, the tonnage of ore for the Choghart deposit is estimated at approximately 135.8 million tonnes with an average Fe grade of 56.14 percent. Results of reserve estimation using ANNs showed good agreement with the geo-statistical methods applied to this ore body in another work.
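The reported 3-20-10-1 architecture maps three inputs through hidden layers of 20 and 10 neurons to a single Fe-grade output. Below is a minimal forward-pass sketch; the random placeholder weights, tanh activations, and input names are assumptions, and the trained network and its learning rules are not reproduced:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of an MLP with tanh hidden activations and a linear
    output layer."""
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = a @ W + b
        a = np.tanh(z) if i < len(weights) - 1 else z  # linear output layer
    return a

rng = np.random.default_rng(42)
sizes = [3, 20, 10, 1]                # the reported 3-20-10-1 architecture
weights = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
x = np.array([[0.3, -0.1, 0.7]])      # one normalized sample (placeholder inputs)
print(mlp_forward(x, weights, biases).shape)  # → (1, 1)
```

Splitting the deposit into four homogeneous zones, as the abstract describes, amounts to training four such networks, one per zone, so each net only has to learn a locally stationary grade pattern.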
Minimum entropy production principle
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel
2013-01-01
Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle
Combining within and between instrument information to estimate precision
International Nuclear Information System (INIS)
Jost, J.W.; Devary, J.L.; Ward, J.E.
1980-01-01
When two instruments, both having replicated measurements, are used to measure the same set of items, between instrument information may be used to augment the within instrument precision estimate. A method is presented which combines the within and between instrument information to obtain an unbiased and minimum variance estimate of instrument precision. The method does not assume the instruments have equal precision
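A sketch of the within/between decomposition such an estimator builds on follows; the simulation setup, error standard deviations, and variable names are illustrative, and the paper's exact minimum-variance combination weights are not reproduced:

```python
import numpy as np

# Two instruments measure the same 2000 items with 5 replicates each; the
# instrument error standard deviations (1.0 and 2.0) are made-up values.
rng = np.random.default_rng(11)
true_values = rng.normal(50.0, 5.0, size=2000)
a = true_values[:, None] + rng.normal(0.0, 1.0, (2000, 5))  # instrument A
b = true_values[:, None] + rng.normal(0.0, 2.0, (2000, 5))  # instrument B

within_a = a.var(axis=1, ddof=1).mean()   # within-instrument replicate scatter
within_b = b.var(axis=1, ddof=1).mean()
# The difference of item means carries between-instrument information: its
# variance estimates (sigma_a^2 + sigma_b^2) / n_replicates.
between = (a.mean(axis=1) - b.mean(axis=1)).var(ddof=1)
print(within_a < within_b)                # A is the more precise instrument
```

The between-instrument term is an independent, unbiased read on the same error variances, which is why folding it into the within-instrument estimate can only reduce the variance of the combined precision estimate.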
Improved Mobility Performance in LTE Co-Channel HetNets Through Speed Differentiated Enhancements
DEFF Research Database (Denmark)
Barbera, Simone; Michaelsen, Per Henrik; Säily, Mikko
2012-01-01
This paper analyzes the mobility performance of LTE (Long Term Evolution) co-channel heterogeneous networks (HetNet) with macro and pico cells. Improved methods for differentiating offload and mobility robustness as a function of the UE (User Equipment) mobility are proposed. The suggested solution comprises two key elements, namely enhanced UE MSE (Mobility State Estimation), as well as optimized methods such that high speed users are primarily kept at the macro layer, while the offload to pico cells for low speed users is maximized. The proposed methods are designed as UE autonomous solutions, requiring minimum assistance and signaling from the network. Extensive system level simulations are used to quantify the benefits. Results confirm that the proposed solutions offer improvements in several mobility key performance indicators such as radio link failure, number of handovers, offload to pico......
Chalmers, Jenny; Carragher, Natacha; Davoren, Sondra; O'Brien, Paula
2013-11-01
A burgeoning body of empirical evidence demonstrates that increases in the price of alcohol can reduce per capita alcohol consumption and harmful drinking. Taxes on alcohol can be raised to increase prices, but this strategy can be undermined if the industry absorbs the tax increase and cross-subsidises the price of one alcoholic beverage with other products. Such loss-leading strategies are not possible with minimum pricing. We argue that a minimum (or floor) price for alcohol should be used as a complement to alcohol taxation. Several jurisdictions have already introduced minimum pricing (e.g., Canada, Ukraine) and others are currently investigating pathways to introduce a floor price (e.g., Scotland). Tasked by the Australian government to examine the public interest case for a minimum price, Australia's peak preventative health agency recommended against setting one at the present time. The agency was concerned that there was insufficient Australian specific modelling evidence to make robust estimates of the net benefits. Nonetheless, its initial judgement was that it would be difficult for a minimum price to produce benefits for Australia at the national level. Whilst modelling evidence is certainly warranted to support the introduction of the policy, the development and uptake of policy is influenced by more than just empirical evidence. This article considers three potential impediments to minimum pricing: public opinion and misunderstandings or misgivings about the operation of a minimum price; the strength of alcohol industry objections and measures to undercut the minimum price through discounts and promotions; and legal obstacles including competition and trade law. The analysis of these factors is situated in an Australian context, but has salience internationally. Copyright © 2013 Elsevier B.V. All rights reserved.
Sensor Placement for Modal Parameter Subset Estimation
DEFF Research Database (Denmark)
Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars
2016-01-01
The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency resp...
Kerwin, Diana R.; Zhang, Yinghua; Kotchen, Jane Morley; Espeland, Mark A.; Van Horn, Linda; McTigue, Kathleen M.; Robinson, Jennifer G.; Powell, Lynda; Kooperberg, Charles; Coker, Laura H.; Hoffmann, Raymond
2010-01-01
OBJECTIVES To determine if body weight (BMI) is independently associated with cognitive function in postmenopausal women and the relationship between body fat distribution as estimated by waist-hip-ratio (WHR) and cognitive function. DESIGN Cross-sectional data analysis. SETTING Baseline data from the Women's Health Initiative (WHI) hormone trials. PARTICIPANTS 8745 postmenopausal women aged 65–79 years who were free of clinical evidence of dementia and had completed baseline evaluation in the Women's Health Initiative (WHI) hormone trials. MEASUREMENTS Participants completed a Modified Mini-Mental State Examination (3MSE), health and lifestyle questionnaires, and standardized measurements of height, weight, body circumferences and blood pressure. Statistical analyses assessed associations between 3MSE scores, BMI and WHR after controlling for known confounders. RESULTS With the exception of smoking and exercise, vascular disease risk factors, including hypertension, waist measurement, heart disease and diabetes, were significantly associated with 3MSE score and were included as co-variables in subsequent analyses. BMI was inversely related to 3MSE scores: for every 1-unit increase in BMI, the 3MSE score decreased by 0.988 points (p=.0001) after adjusting for age, education and vascular disease risk factors. BMI had the most pronounced association with poorer cognitive functioning scores among women with smaller waist measurements. Among women with the highest WHR, cognitive scores increased with BMI. CONCLUSION Increasing BMI is associated with poorer cognitive function in women with smaller WHR. Higher WHR, estimating central fat mass, is associated with higher cognitive function in this cross-sectional study. Further research is needed to clarify the mechanism for this association. PMID:20646100
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values for depth levels whose layer thickness increases geometrically with depth, which biased the mean-resistivity values towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design.
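The quoted factor can be stated compactly; the total-bend relation below is inferred from a layout with two outer and M-2 inner dipoles and should be read as a sketch, not the paper's derivation:

```latex
% \theta_o = outer-dipole bend angle, \theta_i = inner-dipole bend angle.
% The stated minimum-emittance condition for a TBA/MBA cell:
\[
  \theta_i \;=\; 3^{1/3}\,\theta_o ,
  \qquad
  \theta_{\mathrm{tot}} \;=\; 2\,\theta_o \;+\; (M-2)\,3^{1/3}\,\theta_o ,
\]
% where M is the number of bends per achromat (M = 3 for a TBA) and
% \theta_{\mathrm{tot}} is the total bending angle of the cell.
```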
Minimum emittance in TBA and MBA lattices
International Nuclear Information System (INIS)
Xu Gang; Peng Yuemei
2015-01-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design. (authors)
41 CFR 50-202.2 - Minimum wage in all industries.
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wage in all... Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 202-MINIMUM WAGE DETERMINATIONS Groups of Industries § 50-202.2 Minimum wage in all industries. In all industries, the minimum wage applicable to...
Characterization of a qubit Hamiltonian using adaptive measurements in a fixed basis
International Nuclear Information System (INIS)
Sergeevich, Alexandr; Bartlett, Stephen D.; Chandran, Anushya; Combes, Joshua; Wiseman, Howard M.
2011-01-01
We investigate schemes for Hamiltonian parameter estimation of a two-level system using repeated measurements in a fixed basis. The simplest (Fourier based) schemes yield an estimate with a mean-square error (MSE) that decreases at best as a power law ∼N^(-2) in the number of measurements N. By contrast, we present numerical simulations indicating that an adaptive Bayesian algorithm, where the time between measurements can be adjusted based on prior measurement results, yields an MSE which appears to scale close to exp(-0.3N). That is, measurements in a single fixed basis are sufficient to achieve exponential scaling in N.
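A quick arithmetic comparison of the two scalings shows why the exponential behavior matters; the target MSE below is an arbitrary illustration:

```python
import numpy as np

# Contrast the two reported error scalings: Fourier-based MSE ~ N^-2 versus
# the adaptive Bayesian MSE ~ exp(-0.3 N).  Measurements needed to reach a
# target MSE of 1e-6 under each model:
target = 1e-6
n_fourier = int(round(target ** -0.5))            # solve N^-2 = target
n_adaptive = int(np.ceil(-np.log(target) / 0.3))  # solve exp(-0.3 N) = target
print(n_fourier, n_adaptive)  # → 1000 47
```

Roughly twenty times fewer measurements for the same accuracy at this target, and the gap widens as the target tightens.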
SIMPLE AND STRONGLY CONSISTENT ESTIMATOR OF STABLE DISTRIBUTIONS
Directory of Open Access Journals (Sweden)
Cira E. Guevara Otiniano
2016-06-01
Full Text Available Stable distributions are extensively used to analyze the returns of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal; thus, it can be used to initialize computationally intensive procedures such as maximum likelihood. Using random samples of size n, we tested the efficacy of the estimator by the Monte Carlo method. We also include applications to three data sets.
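For the symmetric α = 1 stable (Cauchy) case, a simple strongly consistent scale estimator can be built from quantiles, since the interquartile range of a Cauchy(0, γ) distribution equals 2γ. This is an illustration in the spirit of the abstract, not the paper's estimator:

```python
import numpy as np

def cauchy_scale_iqr(x):
    """Scale estimate for a symmetric alpha=1 stable (Cauchy) sample:
    the quartiles of Cauchy(0, gamma) sit at -gamma and +gamma, so the
    interquartile range divided by two is strongly consistent for gamma."""
    q25, q75 = np.quantile(x, [0.25, 0.75])
    return (q75 - q25) / 2.0

rng = np.random.default_rng(7)
sample = 2.5 * rng.standard_cauchy(100_000)   # true scale gamma = 2.5
gamma_hat = cauchy_scale_iqr(sample)
print(abs(gamma_hat - 2.5) < 0.1)
```

Quantile-based estimators like this cost a single sort, which is the kind of near-free starting value a maximum-likelihood search benefits from.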
29 CFR 525.13 - Renewal of special minimum wage certificates.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Renewal of special minimum wage certificates. 525.13... minimum wage certificates. (a) Applications may be filed for renewal of special minimum wage certificates.... (c) Workers with disabilities may not continue to be paid special minimum wages after notice that an...
An Empirical Analysis of the Relationship between Minimum Wage ...
African Journals Online (AJOL)
An Empirical Analysis of the Relationship between Minimum Wage, Investment and Economic Growth in Ghana. ... In addition, the ratio of public investment to tax revenue must increase as minimum wage increases since such complementary changes are more likely to lead to economic growth. Keywords: minimum wage ...
12 CFR 3.6 - Minimum capital ratios.
2010-01-01
... should have well-diversified risks, including no undue interest rate risk exposure; excellent control... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Minimum capital ratios. 3.6 Section 3.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE...
12 CFR 615.5330 - Minimum surplus ratios.
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Minimum surplus ratios. 615.5330 Section 615.5330 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Surplus and Collateral Requirements § 615.5330 Minimum...
Evaluation of the minimum iodine concentration for contrast-enhanced subtraction mammography
International Nuclear Information System (INIS)
Baldelli, P; Bravin, A; Maggio, C Di; Gennaro, G; Sarnelli, A; Taibi, A; Gambaccini, M
2006-01-01
Early manifestation of breast cancer is often very subtle and is displayed in a complex and variable pattern of normal anatomy that may obscure the disease. The use of dual-energy techniques, which can remove the structural noise, and contrast media, which enhance the region surrounding the tumour, could help to improve the detectability of the lesions. The aim of this work is to investigate the use of an iodine-based contrast medium in mammography with two different double exposure techniques: K-edge subtraction mammography and temporal subtraction mammography. Both techniques have been investigated by using an ideal source, i.e. monochromatic beams produced at a synchrotron radiation facility, and a clinical digital mammography system. A dedicated three-component phantom containing cavities filled with different iodine concentrations has been developed and used for measurements. For each technique, information about the minimum iodine concentration, which provides a significant enhancement of the detectability of the pathology while minimizing the risk due to high dose and high concentration of contrast medium, has been obtained. In particular, for cavities of 5 and 8 mm in diameter filled with iodine solutions, the minimum concentration needed to obtain a contrast-to-noise ratio of 5 with a mean glandular dose of 2 mGy has been calculated. The minimum concentrations estimated with monochromatic beams and K-edge subtraction mammography are 0.9 mg ml^-1 and 1.34 mg ml^-1 for the biggest and smallest details, respectively, while for temporal subtraction mammography they are 0.84 mg ml^-1 and 1.31 mg ml^-1. With the conventional clinical system the minimum concentrations for K-edge subtraction mammography are 4.13 mg ml^-1 (8 mm diameter) and 5.75 mg ml^-1 (5 mm diameter), while for temporal subtraction mammography they are 1.01 mg ml^-1 (8 mm diameter) and 1.57 mg ml^-1 (5 mm diameter).
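The contrast-to-noise criterion used above is straightforward to compute from region-of-interest statistics; the pixel values below are synthetic placeholders, not phantom data:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between an iodine-filled cavity ROI and the
    surrounding background region: |mean difference| over background noise."""
    contrast = abs(signal_roi.mean() - background_roi.mean())
    return contrast / background_roi.std(ddof=1)

rng = np.random.default_rng(3)
background = rng.normal(100.0, 2.0, 10_000)   # synthetic background pixels
cavity = rng.normal(112.0, 2.0, 10_000)       # synthetic enhanced-cavity pixels
print(cnr(cavity, background) > 5.0)          # meets the CNR = 5 threshold
```

The minimum-concentration figures in the abstract are exactly the iodine loadings at which this ratio first reaches 5 at the fixed 2 mGy mean glandular dose.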
5 CFR 551.601 - Minimum age standards.
2010-01-01
... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year... subject to its child labor provisions, with certain exceptions not applicable here. (b) 18-year minimum... occupation found and declared by the Secretary of Labor to be particularly hazardous for the employment of...
12 CFR 932.8 - Minimum liquidity requirements.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum liquidity requirements. 932.8 Section... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.8 Minimum liquidity requirements. In addition to meeting the deposit liquidity requirements contained in § 965.3 of this chapter, each Bank...
The use of Leptodyctium riparium (Hedw.) Warnst in the estimation of minimum postmortem interval.
Lancia, Massimo; Conforti, Federica; Aleffi, Michele; Caccianiga, Marco; Bacci, Mauro; Rossi, Riccardo
2013-01-01
The estimation of the postmortem interval (PMI) is still one of the most challenging issues in forensic investigations, especially in cases in which advanced transformative phenomena have taken place. The dating of skeletal remains is even more difficult and sometimes only a rough determination of the PMI is possible. Recent studies suggest that plant analysis can provide a reliable estimation for dating skeletal remains when traditional techniques are not applicable. Forensic Botany is a relatively recent discipline that includes many subdisciplines such as Palynology, Anatomy, Dendrochronology, Limnology, Systematics, Ecology, and Molecular Biology. In a recent study, Cardoso et al. (Int J Legal Med 2010;124:451) used botanical evidence for the first time to establish the PMI of human skeletal remains found in a forested area of northern Portugal from the growth rate of mosses and shrub roots. The present paper deals with a case in which the study of the growth rate of the bryophyte Leptodyctium riparium (Hedw.) Warnst. was used to estimate the PMI of some human skeletal remains that were found in a wooded area near Perugia, in Central Italy. © 2012 American Academy of Forensic Sciences.
[Hospitals failing minimum volumes in 2004: reasons and consequences].
Geraedts, M; Kühnen, C; Cruppé, W de; Blum, K; Ohmann, C
2008-02-01
In 2004 Germany introduced annual minimum volumes nationwide on five surgical procedures: kidney, liver, stem cell transplantation, complex oesophageal, and pancreatic interventions. Hospitals that fail to reach the minimum volumes are no longer allowed to perform the respective procedures unless they raise one of eight legally accepted exceptions. The goal of our study was to investigate how many hospitals fell short of the minimum volumes in 2004, whether and how this was justified, and whether hospitals that failed the requirements experienced any consequences. We analysed data on meeting the minimum volume requirements in 2004 that all German hospitals were obliged to publish as part of their biannual structured quality reports. We performed telephone interviews: a) with all hospitals not achieving the minimum volumes for complex oesophageal, and pancreatic interventions, and b) with the national umbrella organisations of all German sickness funds. In 2004, one quarter of all German acute care hospitals (N=485) performed 23,128 procedures where minimum volumes applied. 197 hospitals (41%) did not meet at least one of the minimum volumes. These hospitals performed N=715 procedures (3.1%) where the minimum volumes were not met. In 43% of these cases the hospitals raised legally accepted exceptions. In 33% of the cases the hospitals argued using reasons that were not legally acknowledged. 69% of those hospitals that failed to achieve the minimum volumes for complex oesophageal and pancreatic interventions did not experience any consequences from the sickness funds. However, one third of those hospitals reported that the sickness funds addressed the issue and partially announced consequences for the future. The sickness funds' umbrella organisations stated that there were only sparse activities related to the minimum volumes and that neither uniform registrations nor uniform proceedings in case of infringements of the standards had been agreed upon. In spite of the
Estimation of the simple correlation coefficient.
Shieh, Gwowen
2010-11-01
This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
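One of the prominent nearly unbiased formulas examined in such comparisons is the Olkin-Pratt approximation; the sketch below applies a commonly quoted form of it (whether this matches the article's exact formula is an assumption):

```python
def olkin_pratt_r(r, n):
    """Commonly quoted approximation to the Olkin-Pratt nearly unbiased
    correlation estimator: r * (1 + (1 - r^2) / (2 * (n - 3)))."""
    return r * (1.0 + (1.0 - r * r) / (2.0 * (n - 3)))

# The upward correction is largest for small samples and moderate r:
print(round(olkin_pratt_r(0.5, 20), 4))   # → 0.511
```

The correction term vanishes as n grows, which is why r itself remains a serviceable effect-size index for all but small samples.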
24 CFR 891.145 - Owner deposit (Minimum Capital Investment).
2010-04-01
... General Program Requirements § 891.145 Owner deposit (Minimum Capital Investment). As a Minimum Capital... Investment shall be one-half of one percent (0.5%) of the HUD-approved capital advance, not to exceed $25,000. ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Owner deposit (Minimum Capital...
Directory of Open Access Journals (Sweden)
Christelle Garnier
2008-05-01
Full Text Available We address the problem of phase noise (PHN and carrier frequency offset (CFO mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE and the intercarrier interference (ICI which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and multicarrier signal in time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER and mean square error (MSE to illustrate the efficiency and the robustness of the proposed algorithm.
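The tracking problem can be illustrated with a toy bootstrap particle filter that follows a random-walk phase from noisy observations of a known unit symbol. This is a minimal sketch under assumed noise levels, not the paper's Rao-Blackwellized filter with CFO and cyclic-prefix modelling:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 500                  # time steps, particles
sig_w, sig_v = 0.05, 0.1         # assumed phase-walk and observation noise levels

theta = np.cumsum(rng.normal(0, sig_w, T))            # true phase trajectory
noise = (rng.normal(0, sig_v, T) + 1j * rng.normal(0, sig_v, T)) / np.sqrt(2)
y = np.exp(1j * theta) + noise                        # unit symbol rotated by the phase

parts = rng.normal(0, 0.1, N)    # particle cloud over the phase
est = np.empty(T)
for t in range(T):
    parts = parts + rng.normal(0, sig_w, N)           # propagate the random walk
    logw = -np.abs(y[t] - np.exp(1j * parts)) ** 2 / sig_v**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.angle(np.sum(w * np.exp(1j * parts))) # weighted circular mean
    parts = parts[rng.choice(N, N, p=w)]              # multinomial resampling

err = np.angle(np.exp(1j * (est - theta)))            # wrapped phase error
rmse = float(np.sqrt(np.mean(err**2)))
```

Rao-Blackwellization would additionally marginalize the linear signal states analytically, so that particles are only needed for the nonlinear phase variables.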
Electrical resistivity of mechanically stabilized earth wall backfill
Snapp, Michael; Tucker-Kulesza, Stacey; Koehn, Weston
2017-06-01
Mechanically stabilized earth (MSE) retaining walls utilized in transportation projects are typically backfilled with coarse aggregate. One of the current testing procedures to select backfill material for construction of MSE walls is the American Association of State Highway and Transportation Officials standard T 288: "Standard Method of Test for Determining Minimum Laboratory Soil Resistivity." T 288 is designed to test a soil sample's electrical resistivity, which correlates with its corrosive potential. The test is run on soil material passing the No. 10 sieve and is believed to be inappropriate for coarse aggregate. Therefore, researchers have proposed new methods to measure the electrical resistivity of coarse aggregate samples in the laboratory. There is a need to verify that the proposed methods yield results representative of the in situ conditions; however, no in situ measurement of the electrical resistivity of MSE wall backfill is established. Electrical resistivity tomography (ERT) provides a two-dimensional (2D) profile of the bulk resistivity of backfill material in situ. The objective of this study was to characterize bulk resistivity of in-place MSE wall backfill aggregate using ERT. Five MSE walls were tested via ERT to determine the bulk resistivity of the backfill. Three of the walls were reinforced with polymeric geogrid, one wall was reinforced with metallic strips, and one wall was a gravity retaining wall with no reinforcement. Variability of the measured resistivity distribution within the backfill may be a result of non-uniform particle sizes, thoroughness of compaction, and the presence of water. A quantitative post-processing algorithm was developed to calculate the mean bulk resistivity of in situ backfill. The study recommends that the ERT data be used to verify proposed testing methods for coarse aggregate that are designed to yield data representative of in situ conditions. A preliminary analysis suggests that ERT may be utilized
Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin
2008-01-01
http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation between the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation between the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models under the assumption of a free-air anomaly consisting ...
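The Nettleton criterion, choosing the density that minimizes the correlation between the Bouguer anomaly and topography, can be sketched on synthetic data. All numbers below are assumed for illustration; 0.04193 mGal per metre per g/cm^3 is the standard Bouguer slab constant:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.cumsum(rng.normal(0, 5, 200))            # synthetic topography profile (m)
rho_true = 2.3                                   # assumed "true" Bouguer density (g/cm^3)
# free-air anomaly dominated by the topographic (Bouguer slab) effect plus noise
fa = 0.04193 * rho_true * h + rng.normal(0, 0.3, 200)   # mGal

# scan trial densities; the Bouguer anomaly fa - 0.04193*d*h should be
# uncorrelated with topography when d matches the true density
densities = np.arange(1.8, 2.81, 0.01)
corrs = [abs(np.corrcoef(fa - 0.04193 * d * h, h)[0, 1]) for d in densities]
rho_est = float(densities[int(np.argmin(corrs))])
```

The Parasnis variant fits the same data as a straight line of the Bouguer anomaly against the Bouguer correction rather than scanning a correlation minimum.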
A path method for finding energy barriers and minimum energy paths in complex micromagnetic systems
International Nuclear Information System (INIS)
Dittrich, R.; Schrefl, T.; Suess, D.; Scholz, W.; Forster, H.; Fidler, J.
2002-01-01
Minimum energy paths and energy barriers are calculated for complex micromagnetic systems. The method is based on the nudged elastic band method and uses finite-element techniques to represent granular structures. The method was found to be robust and fast for both simple test problems as well as for large systems such as patterned granular media. The method is used to estimate the energy barriers in CoCr-based perpendicular recording media
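A minimal nudged-elastic-band sketch on a 2D double-well potential (not the paper's finite-element micromagnetic implementation) shows the core loop: interior images relax under the component of the potential gradient perpendicular to the path, plus a spring force along the path tangent:

```python
import numpy as np

def V(p):                        # double well: minima at (+-1, 0), saddle at (0, 0), barrier 1
    x, y = p
    return (x**2 - 1) ** 2 + y**2

def gradV(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

def neb(start, end, n_images=11, k=1.0, step=0.01, iters=2000):
    path = np.linspace(start, end, n_images)     # straight initial chain of images
    path[1:-1] += 0.1                            # nudge interior images off the axis
    for _ in range(iters):
        for i in range(1, n_images - 1):         # endpoints stay fixed at the minima
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)           # local tangent estimate
            g = gradV(path[i])
            g_perp = g - g.dot(tau) * tau        # relax only perpendicular to the path
            spring = k * (np.linalg.norm(path[i + 1] - path[i])
                          - np.linalg.norm(path[i] - path[i - 1])) * tau
            path[i] = path[i] + step * (spring - g_perp)
    return path

path = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
barrier = max(V(p) for p in path) - V(path[0])   # close to 1 for this potential
```

In the micromagnetic setting each "image" is a full magnetization configuration and the gradient comes from the finite-element energy functional, but the band update is the same.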
Minimum Wages and the Distribution of Family Incomes
Dube, Arindrajit
2017-01-01
Using the March Current Population Survey data from 1984 to 2013, I provide a comprehensive evaluation of how minimum wage policies influence the distribution of family incomes. I find robust evidence that higher minimum wages shift down the cumulative distribution of family incomes at the bottom, reducing the share of non-elderly individuals with incomes below 50, 75, 100, and 125 percent of the federal poverty threshold. The long run (3 or more years) minimum wage elasticity of the non-elde...
Kumar, Mohan; Gautam, Manish Kumar; Singh, Amit; Goel, Raj Kumar
2013-11-05
The present study evaluates the effects of extract of Musa sapientum fruit (MSE) on ulcer index, blood glucose level, the gastric mucosal cytokines TNF-α and IL-1β, and the growth factor TGF-α (affected in diabetes and chronic ulcer) in acetic acid (AA)-induced gastric ulcer (GU) in diabetic (DR) rats. MSE (100 mg/kg, oral), omeprazole (OMZ, 2.0 mg/kg, oral), insulin (INS, 4 U/kg, sc), or pentoxyphylline (PTX, 10 mg/kg, oral) were given once daily for 10 days in 14-day post-streptozotocin (60 mg/kg, intraperitoneal)-induced diabetic rats, while the normal/diabetic rats received CMC for the same period after induction of GU with AA. Ulcer index was calculated as the product of length and width (mm2/rat) of ulcers, while TNF-α, IL-1β and TGF-α were estimated in the gastric mucosal homogenate from the intact/ulcer region. Phytochemical screening and HPTLC analysis of MSE were performed following standard procedures. An increase in ulcer index, TNF-α and IL-1β was observed in normal (NR)-AA rats compared to NR-normal saline rats, and these were further increased in DR-AA rats, while treatments of DR-AA rats with MSE, OMZ, INS and PTX reversed them, more so with MSE and PTX. A significant increase in TGF-α was found in NR-AA rats, which did not increase further in DR-AA rats. MSE and PTX tended to increase TGF-α, while OMZ and INS showed little or no effect on it in AA-DR rats. Phytochemical screening of MSE showed the presence of saponins, flavonoids, glycosides, steroids and alkaloids, and HPTLC analysis indicated the presence of eight active compounds. MSE showed antidiabetic and better ulcer-healing effects compared with OMZ (antiulcer) or INS (antidiabetic) in diabetic rats and could be more effective in diabetes with concurrent gastric ulcer.
7 CFR 1610.5 - Minimum Bank loan.
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Minimum Bank loan. 1610.5 Section 1610.5 Agriculture Regulations of the Department of Agriculture (Continued) RURAL TELEPHONE BANK, DEPARTMENT OF AGRICULTURE LOAN POLICIES § 1610.5 Minimum Bank loan. A Bank loan will not be made unless the applicant qualifies for a Bank...
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
29 CFR 783.43 - Computation of seaman's minimum wage.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Computation of seaman's minimum wage. 783.43 Section 783.43...'s minimum wage. Section 6(b) requires, under paragraph (2) of the subsection, that an employee...'s minimum wage requirements by reason of the 1961 Amendments (see §§ 783.23 and 783.26). Although...
12 CFR 931.3 - Minimum investment in capital stock.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum investment in capital stock. 931.3... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.3 Minimum investment in capital stock. (a) A Bank shall require each member to maintain a minimum investment in the capital stock of the Bank, both...
Heavy Drinkers and the Potential Impact of Minimum Unit Pricing-No Single or Simple Effect?
Gill, J; Black, H; Rush, R; O'May, F; Chick, J
2017-11-01
To explore the potential impact of a minimum unit price (MUP: 50 pence per UK unit) on the alcohol consumption of ill Scottish heavy drinkers. Participants were 639 patients attending alcohol treatment services or admitted to hospital with an alcohol-related condition. From their reported expenditure on alcohol in their index week, and assuming this remained unchanged, we estimated the impact of a MUP (50 ppu) on future consumption. Around 15% purchased from both the more expensive on-sale outlets (hotels, pubs, bars) and from off-sales (shops and supermarkets); for them we estimated the change in consumption that might follow MUP if (i) they continued this proportion of on-sales purchasing or (ii) their reported expenditure was moved entirely to off-sale purchasing (to maintain consumption levels). Around 69% of drinkers purchased exclusively off-sale alcohol; for some of them, reported expenditure could support an increase in consumption. While a proportion of our harmed, heavy drinkers might be able to mitigate the impact of MUP by changing purchasing habits, the majority are predicted to reduce purchasing. This analysis, focusing specifically on harmed drinkers, adds a unique dimension to the evidence base informing current pricing policy. From drink purchasing data of heavy drinkers, we estimated the impact of legislating a £0.50 minimum unit price. Over two thirds of drinkers, representing all multiple deprivation quintiles, were predicted to decrease alcohol purchasing; the remainder, hypothetically, could maintain consumption. Our data address an important gap within the evidence base informing policy. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.
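The core arithmetic of such an estimate, under the study's assumption that expenditure stays fixed, can be sketched as follows. The figures are hypothetical, not the study's data:

```python
def predicted_units(weekly_spend_gbp, current_units, mup=0.50):
    """Units a drinker can buy under a minimum unit price, holding spend fixed.

    With expenditure unchanged, the spend buys at most spend/mup units;
    consumption is maintained only if the drinker's current average price
    per unit already meets or exceeds the MUP.
    """
    affordable = weekly_spend_gbp / mup
    return min(current_units, affordable)

# hypothetical drinker: 80 units/week bought for 25 pounds (~0.31 pounds/unit)
reduced = predicted_units(25.0, 80)    # spend now supports only 50 units
# hypothetical drinker already paying above 0.50/unit maintains consumption
maintained = predicted_units(60.0, 80)
```

For the mixed on/off-sale purchasers, the study's scenario (ii) would first move the on-sale portion of the spend to off-sale prices before applying the same cap.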
Minimum-Cost Reachability for Priced Timed Automata
DEFF Research Database (Denmark)
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine...... the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques...... and a new notion of priced regions. The latter allows symbolic representation and manipulation of reachable states together with the cost of reaching them....
Newell, Felicia D; Williams, Patricia L; Watt, Cynthia G
2014-05-09
This paper aims to assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia (NS) from 2002 to 2012 using an economic simulation that includes food costing and secondary data. The cost of the National Nutritious Food Basket (NNFB) was assessed with a stratified, random sample of grocery stores in NS during six time periods: 2002, 2004/2005, 2007, 2008, 2010 and 2012. The NNFB's cost was factored into affordability scenarios for three different household types relying on minimum wage earnings: a household of four; a lone mother with three children; and a lone man. Essential monthly living expenses were deducted from monthly net incomes using methods that were standardized from 2002 to 2012 to determine whether adequate funds remained to purchase a basic nutritious diet across the six time periods. A 79% increase to the minimum wage in NS has resulted in a decrease in the potential deficit faced by each household scenario in the period examined. However, the household of four and the lone mother with three children would still face monthly deficits ($44.89 and $496.77, respectively, in 2012) if they were to purchase a nutritiously sufficient diet. As a social determinant of health, risk of food insecurity is a critical public health issue for low wage earners. While it is essential to increase the minimum wage in the short term, adequately addressing income adequacy in NS and elsewhere requires a shift in thinking from a focus on minimum wage towards more comprehensive policies ensuring an adequate livable income for everyone.
Zero forcing parameters and minimum rank problems
Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.
2010-01-01
The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero
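The definition can be made concrete with a brute-force computation of Z(G) for small graphs: a colored vertex with exactly one uncolored neighbor forces that neighbor, and Z(G) is the smallest set whose forcing closure covers the whole graph. This is an illustrative sketch, not the paper's machinery:

```python
from itertools import combinations

def forcing_closure(adj, colored):
    """Repeatedly apply the color-change rule until nothing new is forced."""
    colored = set(colored)
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = [u for u in adj[v] if u not in colored]
            if len(uncolored) == 1:          # v forces its unique uncolored neighbor
                colored.add(uncolored[0])
                changed = True
    return colored

def zero_forcing_number(adj):
    """Smallest k such that some k-subset forces the whole vertex set."""
    n = len(adj)
    for k in range(1, n + 1):
        for S in combinations(adj, k):
            if len(forcing_closure(adj, S)) == n:
                return k
    return n

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}          # path on 4 vertices: Z = 1
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-cycle: Z = 2
```

The connection to minimum rank is that Z(G) upper-bounds the maximum nullity over all symmetric matrices whose off-diagonal pattern is described by G.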
Minimum bias measurement at 13 TeV
Orlando, Nicola; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) is a crucial ingredient for learning about the description of soft QCD processes and for simulating the environment at the LHC with many concurrent pp interactions (pile-up). We summarise the ATLAS minimum bias measurements with proton-proton collisions at 13 TeV centre-of-mass energy at the Large Hadron Collider.
Minimum airflow reset of single-duct VAV terminal boxes
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
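The ventilation floor referred to above comes from the ASHRAE 62.1 breathing-zone outdoor airflow formula, Vbz = Rp*Pz + Ra*Az. The sketch below uses typical office rates (Rp = 5 cfm/person, Ra = 0.06 cfm/ft^2) as assumptions; consult the standard's current tables for actual values:

```python
def breathing_zone_oa(Rp, Pz, Ra, Az):
    """Vbz = Rp*Pz + Ra*Az: people rate x occupants + area rate x floor area (cfm)."""
    return Rp * Pz + Ra * Az

# hypothetical 1000 ft^2 office zone with 10 occupants
vbz = breathing_zone_oa(Rp=5.0, Pz=10, Ra=0.06, Az=1000.0)   # 50 + 60 = 110 cfm
```

The terminal-box minimum airflow set point must then cover the larger of this ventilation requirement (divided by the zone air distribution effectiveness) and the airflow needed to meet the zone's heating load.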
Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha
2017-07-01
After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300mm minor axis and a minimum 5,000mm major axis for the isolation room itself, and a minimum 1,800mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 ㎡-or-larger standard for the floor area of isolation rooms will have to be reviewed and standards for the minimum width of isolation rooms will have to be established.
Minimum wall pressure coefficient of orifice plate energy dissipater
Directory of Open Access Journals (Sweden)
Wan-zheng Ai
2015-01-01
Full Text Available Orifice plate energy dissipaters have been successfully used in large-scale hydropower projects due to their simple structure, convenient construction procedure, and high energy dissipation ratio. The minimum wall pressure coefficient of an orifice plate can indirectly reflect its cavitation characteristics: the lower the minimum wall pressure coefficient is, the better the ability of the orifice plate to resist cavitation damage is. Thus, it is important to study the minimum wall pressure coefficient of the orifice plate. In this study, this coefficient and related parameters, such as the contraction ratio, defined as the ratio of the orifice plate diameter to the flood-discharging tunnel diameter; the relative thickness, defined as the ratio of the orifice plate thickness to the tunnel diameter; and the Reynolds number of the flow through the orifice plate, were theoretically analyzed, and their relationships were obtained through physical model experiments. It can be concluded that the minimum wall pressure coefficient is mainly dominated by the contraction ratio and relative thickness. The lower the contraction ratio and relative thickness are, the larger the minimum wall pressure coefficient is. The effects of the Reynolds number on the minimum wall pressure coefficient can be neglected when it is larger than 10^5. An empirical expression was presented to calculate the minimum wall pressure coefficient in this study.
76 FR 15368 - Minimum Security Devices and Procedures
2011-03-21
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures... concerning the following information collection. Title of Proposal: Minimum Security Devices and Procedures... security devices and procedures to discourage robberies, burglaries, and larcenies, and to assist in the...
International Nuclear Information System (INIS)
Tyson, Jon
2009-01-01
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
Pantoja, Silvio; Sepúlveda, Julio; González, Humberto E.
2004-01-01
We investigated the fate of sinking proteinaceous material in the oxygen minimum zone off northern Chile by deploying sediment traps at 30 m (base of the oxygenated layer) and 300 m (bottom of the O2-depleted layer) during a 3-day experiment. Most of the photosynthetically produced protein (82%) degraded in the top 30 m; an additional 15% decayed between 30 and 300 m, within the suboxic zone; and ca. 1% reached surface sediments at 1200 m depth. Sinking protein remained diagenetically labile in the top 300 m of the water column, as indicated by degradation indices and degradation rate constants of trap material, both characteristic of fresh material. We conclude that particulate protein degradation is not affected by the occurrence of the suboxic layer between 30 and 300 m in the water column. This conclusion is consistent with a model of degradation of particulate protein controlled by extracellular enzymatic hydrolysis and not dependent on O2 availability. Assuming that our fall results are representative of an annual cycle and the whole oxygen minimum zone, suboxic decay of sinking protein in the oxygen minimum zone could support a production of 2 Tg N2 yr^-1, consistent with independent estimates of denitrification rates in the area.
Default Bayesian Estimation of the Fundamental Frequency
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Joint fundamental frequency and model order estimation is an important problem in several applications. In this paper, a default estimation algorithm based on a minimum of prior information is presented. The algorithm is developed in a Bayesian framework, and it can be applied to both real....... Moreover, several approximations of the posterior distributions on the fundamental frequency and the model order are derived, and one of the state-of-the-art joint fundamental frequency and model order estimators is demonstrated to be a special case of one of these approximations. The performance
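The underlying harmonic model can be illustrated with a plain grid-search nonlinear least-squares pitch estimator. This is a sketch of the classical NLS core on synthetic data, not the paper's Bayesian default estimator or its model-order selection:

```python
import numpy as np

def f0_nls(x, fs, L, f_grid):
    """Grid-search NLS estimate of f0 under an L-harmonic sinusoidal model."""
    n = np.arange(len(x))
    best_f, best_p = f_grid[0], -np.inf
    for f0 in f_grid:
        cols = [np.cos(2 * np.pi * f0 * l * n / fs) for l in range(1, L + 1)]
        cols += [np.sin(2 * np.pi * f0 * l * n / fs) for l in range(1, L + 1)]
        Z = np.column_stack(cols)
        a, *_ = np.linalg.lstsq(Z, x, rcond=None)
        p = float(np.linalg.norm(Z @ a) ** 2)     # signal energy captured by the model
        if p > best_p:
            best_f, best_p = f0, p
    return best_f

fs = 8000.0
t = np.arange(1024) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t) \
    + 0.1 * rng.normal(size=1024)                 # assumed two-harmonic test signal
f_hat = f0_nls(x, fs, L=3, f_grid=np.arange(80.0, 130.5, 0.5))
```

A Bayesian treatment additionally places priors on the amplitudes, noise variance, and harmonic order L, and integrates them out rather than fixing L in advance.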
Computation-aware algorithm selection approach for interlaced-to-progressive conversion
Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang
2010-05-01
We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of the proposed CAD method is the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that uses a fast deinterlacing algorithm rather than only the CAD algorithm. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and CAD reduces the overall computational load. A reliable condition for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
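The simplest of the three methods, line averaging, can be sketched directly: missing field lines are rebuilt as the mean of their vertical neighbours, and the reconstruction MSE is the kind of quantity CASA uses to assign methods. This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def line_average_deinterlace(frame, keep_even=True):
    """Rebuild the missing field by averaging each line's vertical neighbours."""
    out = frame.astype(float).copy()
    start = 1 if keep_even else 0
    for r in range(start, frame.shape[0], 2):
        above = out[r - 1] if r - 1 >= 0 else out[r + 1]
        below = out[r + 1] if r + 1 < frame.shape[0] else out[r - 1]
        out[r] = 0.5 * (above + below)      # neighbours are kept (even) lines
    return out

# smooth synthetic frame (vertical ramp): LA reconstructs it almost exactly
frame = np.linspace(0.0, 1.0, 64)[:, None] * np.ones((1, 64))
rec = line_average_deinterlace(frame)
mse = float(np.mean((rec - frame) ** 2))
```

MELA extends this by averaging along the locally dominant edge direction instead of strictly vertically, and CAD replaces the fixed 0.5/0.5 weights with covariance-derived minimum-MSE interpolation coefficients.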
76 FR 30243 - Minimum Security Devices and Procedures
2011-05-24
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures.... Title of Proposal: Minimum Security Devices and Procedures. OMB Number: 1550-0062. Form Number: N/A... respect to the installation, maintenance, and operation of security devices and procedures to discourage...
Does increasing the minimum wage reduce poverty in developing countries?
Gindling, T. H.
2014-01-01
Do minimum wage policies reduce poverty in developing countries? It depends. Raising the minimum wage could increase or decrease poverty, depending on labor market characteristics. Minimum wages target formal sector workers—a minority of workers in most developing countries—many of whom do not live in poor households. Whether raising minimum wages reduces poverty depends not only on whether formal sector workers lose jobs as a result, but also on whether low-wage workers live in poor househol...
Directory of Open Access Journals (Sweden)
S. P. Arunachalam
2018-01-01
Full Text Available Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series of physiological data. The approach was tested for robustness with respect to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
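A compact version of classic multiscale entropy, sample entropy computed over coarse-grained copies of the series, can be sketched as below. This uses non-overlapping averaging and a simplified match count; the paper's improvement substitutes a nearest-neighbor moving-average kernel for the coarse-graining step:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with an absolute tolerance r (simplified pair count)."""
    x = np.asarray(x, float)
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)  # Chebyshev distance
            count += int(np.sum(d <= r))
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales, m=2):
    r = 0.15 * np.std(x)            # tolerance fixed from the original series
    out = []
    for s in scales:
        n = len(x) // s
        cg = np.asarray(x[: n * s], float).reshape(n, s).mean(axis=1)  # coarse-grain
        out.append(sample_entropy(cg, m, r))
    return out

rng = np.random.default_rng(7)
white = rng.normal(size=4000)
ent = multiscale_entropy(white, scales=[1, 2, 5])   # decreases with scale for white noise
```

A moving-average kernel (e.g. `np.convolve(x, np.ones(s)/s, 'valid')`) keeps more samples at each scale than non-overlapping averaging, which is what makes the technique better suited to short series.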
Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo
2010-09-15
Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology and pose problems in their investigation. Unsupervised hybrid two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
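The MC principle, replacing curvilinear distances by path lengths over the Euclidean MST, can be sketched in a few self-contained lines (Prim's algorithm plus a tree traversal; an illustration of the principle, not the authors' code):

```python
import numpy as np

def minimum_curvilinear_distances(X):
    """Pairwise distances measured along the Euclidean minimum spanning tree."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Prim's algorithm for the MST
    visited = np.zeros(n, bool)
    visited[0] = True
    best = D[0].copy()
    parent = np.zeros(n, int)
    adj = {i: [] for i in range(n)}
    for _ in range(n - 1):
        j = int(np.argmin(np.where(visited, np.inf, best)))
        visited[j] = True
        adj[j].append(parent[j])
        adj[parent[j]].append(j)
        closer = (~visited) & (D[j] < best)
        best[closer] = D[j][closer]
        parent[closer] = j
    # accumulate path lengths over the tree from every source vertex
    MC = np.zeros((n, n))
    for s in range(n):
        stack = [(s, -1, 0.0)]
        while stack:
            v, p, d = stack.pop()
            MC[s, v] = d
            for u in adj[v]:
                if u != p:
                    stack.append((u, v, d + D[v, u]))
    return MC

# three points on a bend: the MC distance between the endpoints follows the curve
pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
MC = minimum_curvilinear_distances(pts)   # MC[0, 2] = 2*sqrt(2) > Euclidean 2
```

Feeding such an MC distance matrix into classical multidimensional scaling gives MCE; feeding it into affinity propagation gives MCAP.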
International Nuclear Information System (INIS)
Shen Suhung; Leptoukh, Gregory G
2011-01-01
Surface air temperature (Ta) is a critical variable in the energy and water cycle of the Earth–atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of Ta from satellite remotely sensed land surface temperature (Ts) by using MODIS-Terra data over two Eurasia regions: northern China and the former USSR. High correlations are observed in both regions between station-measured Ta and MODIS Ts. The relationships between the maximum Ta and daytime Ts depend significantly on land cover types, but the minimum Ta and nighttime Ts have little dependence on the land cover types. The largest difference between maximum Ta and daytime Ts appears over the barren and sparsely vegetated area during summer. Using a linear regression method, the daily maximum Ta was estimated from 1 km resolution MODIS Ts under clear-sky conditions with coefficients calculated based on land cover types, while the minimum Ta was estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum Ta varies from 2.4 °C over closed shrublands to 3.2 °C over grasslands, and the MAE of the estimated minimum Ta is about 3.0 °C.
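The regression step described above can be sketched on synthetic station/satellite pairs; the slope, intercept, and noise level below are assumptions for illustration, not the study's coefficients:

```python
import numpy as np

rng = np.random.default_rng(11)
ts = rng.uniform(-10, 40, 300)                   # satellite LST samples (deg C, synthetic)
ta = 0.8 * ts + 2.0 + rng.normal(0, 1.5, 300)    # station air temperature (assumed relation)

# ordinary least squares fit Ta ~ slope*Ts + intercept
A = np.column_stack([ts, np.ones_like(ts)])
coef, *_ = np.linalg.lstsq(A, ta, rcond=None)
slope, intercept = coef
mae = float(np.mean(np.abs(A @ coef - ta)))       # analogous to the study's MAE metric
```

In the study one such fit is made per land cover type for the maximum Ta (daytime), while a single fit suffices for the minimum Ta (nighttime).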
Akibat Hukum Bagi Bank Bila Kewajiban Modal Inti Minimum Tidak Terpenuhi
Directory of Open Access Journals (Sweden)
Indira Retno Aryatie
2012-02-01
Full Text Available As an implementation of the Indonesian Banking Architecture policy, the government issued Bank Indonesia Regulation No. 9/16/PBI/2007 on Minimum Tier One Capital, which raises the minimum tier one capital of commercial banks to 100 billion rupiah. This article discusses the legal consequences a bank will face should it fail to meet this minimum capital requirement.
Risk control and the minimum significant risk
International Nuclear Information System (INIS)
Seiler, F.A.; Alvarez, J.L.
1996-01-01
Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented
Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm
Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.
2016-05-01
In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.
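The mechanism analyzed above can be sketched compactly: a power iteration in which the network-wide average required at each step is approximated by a finite number of averaging-consensus rounds, which is exactly the source of the finite-iteration bias the abstract discusses. The network, weight matrix and local covariances below are hypothetical, not the authors' setup.

```python
import numpy as np

def consensus_average(values, W, iters):
    """Approximate the network-wide average of per-node vectors by
    repeated mixing with a doubly stochastic weight matrix W."""
    x = values.copy()
    for _ in range(iters):
        x = W @ x
    return x

def decentralized_power_method(R_local, W, n_power=50, n_consensus=30):
    """Dominant eigenvector of (1/K) sum_k R_k when node k only knows
    R_local[k]: the averaging step of the power method is replaced by
    a finite number of consensus rounds."""
    K, n, _ = R_local.shape
    rng = np.random.default_rng(0)
    v = np.tile(rng.standard_normal(n), (K, 1))   # every node starts alike
    for _ in range(n_power):
        y = np.einsum('kij,kj->ki', R_local, v)    # local products R_k v_k
        y = consensus_average(y, W, n_consensus)   # approximate global average
        v = y / np.linalg.norm(y, axis=1, keepdims=True)
    return v

# Hypothetical 3-node path network (Metropolis weights) and local covariances.
rng = np.random.default_rng(1)
R_local = np.empty((3, 3, 3))
for k in range(3):
    M = 0.1 * rng.standard_normal((3, 3))
    R_local[k] = np.diag([4.0, 1.0, 0.2]) + M @ M.T
W = np.array([[2/3, 1/3, 0.0], [1/3, 1/3, 1/3], [0.0, 1/3, 2/3]])
V = decentralized_power_method(R_local, W)
u_true = np.linalg.eigh(R_local.mean(axis=0))[1][:, -1]  # centralized answer
```

With enough consensus rounds every node's copy aligns with the centralized eigenvector; truncating `n_consensus` reproduces the inconsistency the paper quantifies.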
42 CFR 84.197 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.197... Cartridge Respirators § 84.197 Respirator containers; minimum requirements. Respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...
42 CFR 84.174 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.174... Air-Purifying Particulate Respirators § 84.174 Respirator containers; minimum requirements. (a) Except..., durable container bearing markings which show the applicant's name, the type of respirator it contains...
Design of a minimum emittance nBA lattice
Lee, S. Y.
1998-04-01
An attempt to design a minimum emittance n-bend achromat (nBA) lattice has been made. One distinct feature is that dipoles of two different lengths were used. As a multiple-bend achromat, five-bend achromat lattices with six superperiods were designed. The obtained emittance is three times larger than the theoretical minimum. Tunes were chosen to avoid third-order resonances. To correct the first- and second-order chromaticities, eight families of sextupoles were placed. The obtained emittance of the five-bend achromat lattices is almost equal to the minimum emittance of a five-bend achromat lattice consisting of dipoles of equal length.
Heuristic introduction to estimation methods
International Nuclear Information System (INIS)
Feeley, J.J.; Griffith, J.M.
1982-08-01
The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Minimum resolvable power contrast model
Qian, Shuai; Wang, Xia; Zhou, Jingjing
2018-01-01
Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether used alone or jointly, they cannot intuitively describe the overall performance of the system. Therefore, an index reflecting comprehensive system performance is proposed: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model of MRP is given.
42 CFR 84.134 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.134... Respirators § 84.134 Respirator containers; minimum requirements. Supplied-air respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...
42 CFR 84.1134 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84... Combination Gas Masks § 84.1134 Respirator containers; minimum requirements. (a) Except as provided in paragraph (b) of this section each respirator shall be equipped with a substantial, durable container...
42 CFR 84.74 - Apparatus containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Apparatus containers; minimum requirements. 84.74...-Contained Breathing Apparatus § 84.74 Apparatus containers; minimum requirements. (a) Apparatus may be equipped with a substantial, durable container bearing markings which show the applicant's name, the type...
Estimates and sampling schemes for the instrumentation of accountability systems
International Nuclear Information System (INIS)
Jewell, W.S.; Kwiatkowski, J.W.
1976-10-01
The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered
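For the simplest special case described above (one quantity, repeated measurements, known prior mean and variance, no hierarchy), the minimum mean squared-error estimator is the standard precision-weighted average sketched below; this is a textbook form, not the report's full hierarchical estimator.

```python
def mmse_estimate(measurements, noise_var, prior_mean, prior_var):
    """Bayes MMSE estimate of a quantity x from n noisy measurements
    y_i = x + e_i with e_i ~ (0, noise_var), when x has known prior
    mean and variance: a precision-weighted compromise between the
    sample mean and the prior mean."""
    n = len(measurements)
    ybar = sum(measurements) / n
    w = prior_var / (prior_var + noise_var / n)  # weight on the data
    return w * ybar + (1.0 - w) * prior_mean
```

A very flat prior recovers the sample mean; very noisy measurements push the estimate back toward the prior mean.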
A minimum bit error-rate detector for amplify and forward relaying systems
Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim; Aissa, Sonia
2012-01-01
In this paper, a new detector is proposed for amplify-and-forward (AF) relaying systems that communicate with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.
A minimum bit error-rate detector for amplify and forward relaying systems
Ahmed, Qasim Zeeshan
2012-05-01
In this paper, a new detector is proposed for amplify-and-forward (AF) relaying systems that communicate with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.
Reducing tobacco use and access through strengthened minimum price laws.
McLaughlin, Ian; Pearson, Anne; Laird-Metke, Elisa; Ribisl, Kurt
2014-10-01
Higher prices reduce consumption and initiation of tobacco products. A minimum price law that establishes a high statutory minimum price and prohibits the industry's discounting tactics for tobacco products is a promising pricing strategy as an alternative to excise tax increases. Although some states have adopted minimum price laws on the basis of statutorily defined price "markups" over the invoice price, existing state laws have been largely ineffective at increasing the retail price. We analyzed 3 new variations of minimum price laws that hold great potential for raising tobacco prices and reducing consumption: (1) a flat rate minimum price law similar to a recent enactment in New York City, (2) an enhanced markup law, and (3) a law that incorporates both elements.
Estimation methods of eco-environmental water requirements: Case study
Institute of Scientific and Technical Information of China (English)
YANG Zhifeng; CUI Baoshan; LIU Jingling
2005-01-01
Supplying water to the ecological environment with certain quantity and quality is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the conception and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of the study on the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortage is also calculated. According to the interrelated programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The result indicates that the minimum and suitable eco-environmental water requirements fluctuate with the differences of function setting and the referential standard of water resources, and so does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10^10 m^3 to 1.02×10^11 m^3, the suitable water requirement ranges from 6.45×10^10 m^3 to 1.78×10^11 m^3, and the water shortage ranges from 9.1×10^9 m^3 to 2.16×10^10 m^3 under the minimum water requirement and from 3.07×10^10 m^3 to 7.53×10^10 m^3 under the suitable water requirement. According to the different values of the water shortage, the water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10^10-1.73×10^11 m^3, 5.99×10^10-2.09×10^11 m^3, and 7.44×10^10-2.52×10^11 m^3, respectively.
12 CFR 567.2 - Minimum regulatory capital requirement.
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum regulatory capital requirement. 567.2... Regulatory Capital Requirements § 567.2 Minimum regulatory capital requirement. (a) To meet its regulatory capital requirement a savings association must satisfy each of the following capital standards: (1) Risk...
29 CFR 525.24 - Advisory Committee on Special Minimum Wages.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Advisory Committee on Special Minimum Wages. 525.24 Section 525.24 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... Special Minimum Wages. The Advisory Committee on Special Minimum Wages, the members of which are appointed...
Statistical physics when the minimum temperature is not absolute zero
Chung, Won Sang; Hassanabadi, Hassan
2018-04-01
In this paper, a nonzero minimum temperature is considered, based on the third law of thermodynamics and the existence of a minimal momentum. From the assumption of a nonzero positive minimum temperature in nature, we deform the definitions of some thermodynamical quantities and investigate the nonzero-minimum-temperature correction to well-known thermodynamical problems.
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
Directory of Open Access Journals (Sweden)
López-Herrera Francisco
2014-01-01
Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may either be considered irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
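The minimum variance portfolio referenced above has a standard closed form under a full-investment constraint, w = Σ⁻¹1 / (1'Σ⁻¹1); the sketch below applies it to a hypothetical three-asset covariance matrix, not the NAFTA return data.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights under a full-investment
    constraint: w = inv(cov) @ 1 / (1' @ inv(cov) @ 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)   # avoids an explicit matrix inverse
    return x / x.sum()

# Hypothetical covariance of three market returns (illustrative only).
cov = np.array([[0.040, 0.010, 0.008],
                [0.010, 0.090, 0.012],
                [0.008, 0.012, 0.060]])
w = min_variance_weights(cov)
port_var = float(w @ cov @ w)
```

Since holding any single asset is a feasible portfolio, the resulting variance can never exceed the smallest diagonal entry of the covariance matrix.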
42 CFR 422.382 - Minimum net worth amount.
2010-10-01
... that CMS considers appropriate to reduce, control or eliminate start-up administrative costs. (b) After... section. (c) Calculation of the minimum net worth amount—(1) Cash requirement. (i) At the time of application, the organization must maintain at least $750,000 of the minimum net worth amount in cash or cash...
Minimum Competencies in Undergraduate Motor Development. Guidance Document
National Association for Sport and Physical Education, 2004
2004-01-01
The minimum competency guidelines in Motor Development described herein at the undergraduate level may be gained in one or more motor development course(s) or through other courses provided in an undergraduate curriculum. The minimum guidelines include: (1) Formulation of a developmental perspective; (2) Knowledge of changes in motor behavior…
Minimum Detectable Activity for Tomographic Gamma Scanning System
Energy Technology Data Exchange (ETDEWEB)
Venkataraman, Ram [Canberra Industries (AREVA BDNM), Meriden, CT (United States); Smith, Susan [Canberra Industries (AREVA BDNM), Meriden, CT (United States); Kirkpatrick, J. M. [Canberra Industries (AREVA BDNM), Meriden, CT (United States); Croft, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
For any radiation measurement system, it is useful to explore and establish the detection limits and a minimum detectable activity (MDA) for the radionuclides of interest, even if the system is to be used at far higher values. The MDA serves as an important figure of merit, and often a system is optimized and configured so that it can meet the MDA requirements of a measurement campaign. The non-destructive assay (NDA) systems based on gamma ray analysis are no exception, and well established conventions, such as the Currie method, exist for estimating the detection limits and the MDA. However, the Tomographic Gamma Scanning (TGS) technique poses some challenges for the estimation of detection limits and MDAs. The TGS combines high resolution gamma ray spectrometry (HRGS) with low spatial resolution image reconstruction techniques. In non-imaging gamma ray based NDA techniques, measured counts in a full energy peak can be used to estimate the activity of a radionuclide, independently of other counting trials. However, in the case of the TGS each “view” is a full spectral grab (each a counting trial), and each scan consists of 150 spectral grabs in the transmission and emission scans per vertical layer of the item. The set of views in a complete scan are then used to solve for the radionuclide activities on a voxel by voxel basis, over 16 layers of a 10x10 voxel grid. Thus, the raw count data are no longer independent trials, but rather constitute input to a matrix solution for the emission image values at the various locations inside the item volume used in the reconstruction. So, the validity of the methods used to estimate the MDA for an imaging technique such as the TGS warrants close scrutiny, because the pair-counting concept of Currie is not directly applicable. One can also raise questions as to whether the TGS, along with other image reconstruction techniques which heavily intertwine data, is a suitable method if one expects to measure samples whose activities
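For reference, the Currie convention mentioned above gives, for a paired-blank measurement with expected background B counts, a detection limit of L_D = k² + 2k√(2B) counts (≈ 2.71 + 4.65√B at the usual k = 1.645); the conversion to an MDA below uses placeholder efficiency, branching-ratio and live-time factors. Generalizing this to the TGS's intertwined voxel solution is exactly the open question the abstract raises.

```python
import math

def currie_mda(background_counts, efficiency, branching, live_time, k=1.645):
    """Currie detection limit (in counts) and the corresponding MDA for a
    simple paired-blank gamma measurement. efficiency, branching and
    live_time are hypothetical calibration factors, not TGS values."""
    L_D = k * k + 2.0 * k * math.sqrt(2.0 * background_counts)
    mda = L_D / (efficiency * branching * live_time)
    return L_D, mda

# Example: 100 expected background counts, 5% efficiency, 85% yield, 600 s.
L_D, mda = currie_mda(100.0, efficiency=0.05, branching=0.85, live_time=600.0)
```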
The Unusual Minimum of Cycle 23: Observations and Interpretation
Martens, Petrus C.; Nandy, D.; Munoz-Jaramillo, A.
2009-05-01
The current minimum of cycle 23 is unusual in its long duration, the very low level to which Total Solar Irradiance (TSI) has fallen, and the small flux of the open polar fields. The deep minimum of TSI seems to be related to an unprecedented dearth of polar faculae, and hence to the small amount of open flux. Based upon surface flux transport models it has been suggested that the causes of these phenomena may be an unusually vigorous meridional flow, or even a deviation from Joy's law resulting in smaller Joy angles than usual for emerging flux in cycle 23. There is also the possibility of a connection with the recently inferred emergence in polar regions of bipoles that systematically defy Hale's law. Much speculation has been going on as to the consequences of this exceptional minimum: are we entering another global minimum, is this the end of the 80 year period of exceptionally high solar activity, or is this just a statistical hiccup? Dynamo simulations are underway that may help answer this question. As an aside it must be mentioned that the current minimum of TSI puts an upper limit in the TSI input for global climate simulations during the Maunder minimum, and that a possible decrease in future solar activity will result in a very small but not insignificant reduction in the pace of global warming.
Minimum Wages and Skill Acquisition: Another Look at Schooling Effects.
Neumark, David; Wascher, William
2003-01-01
Examines the effects of minimum wage on schooling, seeking to reconcile some of the contradictory results in recent research using Current Population Survey data from the late 1970s through the 1980s. Findings point to negative effects of minimum wages on school enrollment, bolstering the findings of negative effects of minimum wages on enrollment…
14 CFR 91.155 - Basic VFR weather minimums.
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Basic VFR weather minimums. 91.155 Section...) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Flight Rules Visual Flight Rules § 91.155 Basic VFR weather minimums. (a) Except as provided in paragraph (b) of this section and...
Minimum Wages and Teen Employment: A Spatial Panel Approach
Charlene Kalenkoski; Donald Lacombe
2011-01-01
The authors employ spatial econometrics techniques and Annual Averages data from the U.S. Bureau of Labor Statistics for 1990-2004 to examine how changes in the minimum wage affect teen employment. Spatial econometrics techniques account for the fact that employment is correlated across states. Such correlation may exist if a change in the minimum wage in a state affects employment not only in its own state but also in other, neighboring states. The authors show that state minimum wages negat...
Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions
DEFF Research Database (Denmark)
Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper
2016-01-01
of the estimator is in speech enhancement algorithms, such as the Multi-channel Wiener Filter (MWF) and the Minimum Variance Distortionless Response (MVDR) beamformer. We evaluate these two algorithms in a speech dereverberation task and compare the performance obtained using the proposed and a competing PSD...... estimator. Instrumental performance measures indicate an advantage of the proposed estimator over the competing one. In a speech intelligibility test all algorithms significantly improved the word intelligibility score. While the results suggest a minor advantage of using the proposed PSD estimator...
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully.
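The core shrinkage step can be sketched as a precision-weighted compromise between the subject-level estimate and the group average; this is the generic empirical-Bayes form with made-up variance components, not the authors' full measurement-error model.

```python
def shrink_fc(subject_fc, group_mean, noise_var, signal_var):
    """Empirical-Bayes shrinkage of a subject-level connectivity estimate
    toward the group average: the weight on the group mean grows with
    measurement noise relative to true between-subject variability."""
    lam = noise_var / (noise_var + signal_var)
    return lam * group_mean + (1.0 - lam) * subject_fc

# Hypothetical values: subject estimate 0.6, group mean 0.3.
noiseless = shrink_fc(0.6, 0.3, noise_var=0.0, signal_var=0.02)
very_noisy = shrink_fc(0.6, 0.3, noise_var=1e9, signal_var=0.02)
```

With no measurement noise the subject estimate is kept; as noise dominates, the estimate collapses onto the group mean, which is why the resulting estimates are biased and require a reliability measure such as ICC_MSE.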
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantifications. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations.
Nowcasting daily minimum air and grass temperature
Savage, M. J.
2016-02-01
Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures hours earlier in the morning as model inputs, was investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. Nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, but inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts vs measured daily minimum air temperatures yielded root mean square errors (RMSEs) <1 °C for the 2-h ahead nowcasts. Model 2 (also exponential), for which a constant model coefficient (b = 2.2) was used, was usually slightly less accurate but still with RMSEs <1 °C. Use of model 3 (square root) yielded increased RMSEs for the 2-h ahead comparisons between nowcasted and measured daily minimum air temperature, increasing to 1.4 °C for some sites. For all sites for all models, the comparisons for the 4-h ahead air temperature nowcasts generally yielded increased RMSEs, <2.1 °C. Comparisons for all model nowcasts of the daily grass
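The inverted exponential model described above can be sketched with a two-sample closed form: assuming T(t) = Tmin + A·exp(-b·t/night_len) and the fixed coefficient b = 2.2 of the abstract's model 2, two early-morning readings determine Tmin. This two-point solve is an illustrative simplification, not the published fitting scheme.

```python
import math

def nowcast_tmin(t1, T1, t2, T2, b=2.2, night_len=10.0):
    """Nowcast the nighttime minimum from two samples (t in hours since
    cooling began), assuming T(t) = Tmin + A*exp(-b*t/night_len).
    Taking the ratio (T1 - Tmin)/(T2 - Tmin) eliminates A and yields a
    closed form for Tmin. b = 2.2 follows the abstract's model 2;
    night_len is a hypothetical night duration."""
    k = math.exp(b * (t2 - t1) / night_len)
    return (k * T2 - T1) / (k - 1.0)

# Synthetic night that obeys the model exactly: Tmin = 5 degC, amplitude 10 degC.
sample = lambda t: 5.0 + 10.0 * math.exp(-2.2 * t / 10.0)
estimate = nowcast_tmin(2.0, sample(2.0), 6.0, sample(6.0))
```

On data generated by the model itself the closed form recovers the minimum exactly; the RMSEs reported above reflect how far real nights depart from this idealized decay.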
Comparison of volatility function technique for risk-neutral densities estimation
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function technique by using interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches namely smoothing spline and fourth order polynomial in extracting the RND. The implied volatility of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of distribution. The difference between the first moment of distribution and the price of underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that the estimation of RND using a fourth order polynomial is more appropriate to be used compared to a smoothing spline in which the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
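The fourth-order-polynomial variant compared above can be sketched as: fit a quartic to the implied-vol smile, price calls on a dense strike grid, then recover the RND from the Breeden-Litzenberger relation q(K) = e^{rT} ∂²C/∂K² by finite differences. The smile data below are synthetic, not DJIA options.

```python
import math
import numpy as np

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def rnd_from_smile(strikes, ivs, S, T, r, n_grid=201):
    """Volatility-function technique: fit a fourth-order polynomial to the
    smile (centered for conditioning), price calls on a fine strike grid,
    then apply Breeden-Litzenberger q(K) = e^{rT} d2C/dK2 via central
    finite differences."""
    c = strikes.mean()
    coeffs = np.polyfit(strikes - c, ivs, 4)
    grid = np.linspace(strikes.min(), strikes.max(), n_grid)
    prices = np.array([bs_call(S, K, T, r, np.polyval(coeffs, K - c))
                       for K in grid])
    h = grid[1] - grid[0]
    q = math.exp(r * T) * (prices[2:] - 2.0 * prices[1:-1] + prices[:-2]) / h**2
    return grid[1:-1], q

# Synthetic one-month smile around S = 100 (illustrative, not market data).
strikes = np.arange(80.0, 121.0, 5.0)
ivs = 0.20 + 2e-6 * (strikes - 100.0) ** 2
K_mid, q = rnd_from_smile(strikes, ivs, S=100.0, T=1.0 / 12.0, r=0.01)
```

A well-behaved extracted density should be non-negative and integrate to nearly one over the strike range; its first moment is the forward-looking mean used in the forecast-accuracy analysis.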
Probabilistic estimates of US uranium supply
International Nuclear Information System (INIS)
Piepel, G.F.; Long, L.W.; McLaren, R.A.; Ford, C.E.
1981-02-01
This report develops and presents probabilistic estimates of total US uranium supply. The word supply is used in the broad sense that both uranium quantity and cost are of interest. Cost implies minimum acceptable selling price rather than market price. Specifically, four types of probability distributions are developed: (1) quantity of US uranium; (2) cost of US uranium; (3) quantity of US uranium available at or below a certain cost; and (4) cost of US uranium given a certain consumption. In this report, uranium refers to recoverable U3O8 (endowment adjusted for mining recovery and milling losses) occurring in both reserve and potential deposits meeting minimum size requirements with minimum grade above 0.01%. Cost includes operating and capital costs, taxes, profit, and cost of capital. The term full recovery cost is often used to better denote this meaning. This definition of cost is contrasted with forward costs, which exclude sunk costs, taxes, and return on investment. Consumption refers to uranium that has been used from the current time to any point in the future. Uranium quantity and consumption are expressed in short tons, while full recovery costs are stated in constant 1980 dollars per pound
Cannistraci, Carlo
2010-09-01
Motivation: Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. Methods: 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Results: Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. Conclusion: MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. © The Author(s) 2010. Published by Oxford University Press.
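The MC principle stated above (curvilinear distances approximated by path lengths over the Euclidean minimum spanning tree, with no tuning parameter) can be sketched in a few lines of plain Python; the tiny point set is illustrative only.

```python
import math
from collections import defaultdict, deque

def mst_distances(points):
    """Minimum-Curvilinearity-style distances: build the Euclidean MST
    with Prim's algorithm, then accumulate path lengths over the tree
    via BFS from each node. No tuning parameter is introduced."""
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    in_tree, best, parent = [False] * n, [math.inf] * n, [-1] * n
    best[0] = 0.0
    adj = defaultdict(list)
    for _ in range(n):                      # Prim's MST
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and d[u][v] < best[v]:
                best[v], parent[v] = d[u][v], u
    D = [[0.0] * n for _ in range(n)]
    for s in range(n):                      # tree-path lengths from each node
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    D[s][v] = D[s][u] + d[u][v]
                    queue.append(v)
    return D

# Three points on a bend: the tree path 0-1-2 exceeds the straight chord 0-2,
# which is how MC captures curvilinear (nonlinear) structure.
D = mst_distances([(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)])
```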
A note on minimum-variance theory and beyond
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling the firing patterns of single neurons.
A note on minimum-variance theory and beyond
International Nuclear Information System (INIS)
Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello
2004-01-01
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling the firing patterns of single neurons.
Operative needs in HIV+ populations: An estimation for sub-Saharan Africa.
Cherewick, Megan L; Cherewick, Steven D; Kushner, Adam L
2017-05-01
In 2015, it was estimated that approximately 36.7 million people were living with HIV globally and approximately 25.5 million of those people were living in sub-Saharan Africa. Limitations in the availability and access to adequate operative care require policy and planning to enhance operative capacity. Data estimating the total number of persons living with HIV by country, sex, and age group were obtained from the Joint United Nations Programme on HIV/AIDS (UNAIDS) in 2015. Using minimum proposed surgical rates per 100,000 for 4 defined sub-Saharan regions of Africa, country-specific and regional estimates were calculated. The total need and unmet need for operative procedures were estimated. A minimum of 1,539,138 operative procedures were needed in 2015 for the 25.5 million persons living with HIV in sub-Saharan Africa. In 2015, there was an unmet need of 908,513 operative cases in sub-Saharan Africa with the greatest unmet need in eastern sub-Saharan Africa (427,820) and western sub-Saharan Africa (325,026). Approximately 55.6% of the total need for operative cases is among adult women, 38.4% among adult men, and 6.0% among children under the age of 15. A minimum of 1.5 million operative procedures annually are required to meet the needs of persons living with HIV in sub-Saharan Africa. The unmet need for operative care is greatest in eastern and western sub-Saharan Africa and will require investments in personnel, infrastructure, facilities, supplies, and equipment. We highlight the need for global planning and investment in resources to meet targets of operative capacity. Copyright © 2016 Elsevier Inc. All rights reserved.
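The estimation logic described above is a rate-times-population calculation. A back-of-the-envelope sketch (the rate below is back-computed from the reported totals, not the paper's region-specific value):

```python
# Hedged sketch of the need estimate: HIV-positive population times a
# minimum operative rate per 100,000. The rate used here is a placeholder
# implied by the reported aggregate, not a figure from the paper.
def operative_need(persons_living_with_hiv, rate_per_100k):
    """Annual operative procedures implied by a per-100,000 rate."""
    return persons_living_with_hiv * rate_per_100k / 100_000

# ~25.5 M persons living with HIV at ~6,036 per 100,000 reproduces the
# ~1.54 M order of magnitude reported for 2015.
need = operative_need(25_500_000, 6_036)
```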
Shen, Suhung; Leptoukh, Gregory G.
2011-01-01
Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasia regions: northern China and fUSSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on the land cover types. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T(sub a) were estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) were estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.
Split-plot fractional designs: Is minimum aberration enough?
DEFF Research Database (Denmark)
Kulahci, Murat; Ramirez, Jose; Tobias, Randy
2006-01-01
Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional...... factorial design and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two...... for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs....
A step-up test procedure to find the minimum effective dose.
Wang, Weizhen; Peng, Jianan
2015-01-01
It is of great interest to find the minimum effective dose (MED) in dose-response studies. A sequence of decreasing null hypotheses to find the MED is formulated under the assumption of nondecreasing dose response means. A step-up multiple test procedure that controls the familywise error rate (FWER) is constructed based on the maximum likelihood estimators for the monotone normal means. When the MED is equal to one, the proposed test is uniformly more powerful than Hsu and Berger's test (1999). Also, a simulation study shows a substantial power improvement for the proposed test over four competitors. Three R-codes are provided in Supplemental Materials for this article. Go to the publisher's online edition of Journal of Biopharmaceutical Statistics to view the files.
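To make the MED search concrete, here is a simple fixed-sequence test in the spirit of the Hsu and Berger (1999) baseline that the paper improves upon; it is not the step-up procedure proposed in the abstract, and the function name and data are illustrative:

```python
# Illustrative MED search (fixed-sequence baseline, NOT the paper's
# step-up rule): doses are tested from highest to lowest against control;
# the MED is the lowest dose whose test, and all tests at higher doses,
# reject at level alpha. FWER control comes from the sequential stopping.
import numpy as np
from scipy import stats

def med_fixed_sequence(control, dose_groups, alpha=0.05):
    """dose_groups: samples ordered from lowest to highest dose.
    Returns the index of the MED, or None if no dose is effective."""
    med = None
    for i in range(len(dose_groups) - 1, -1, -1):       # high -> low
        _, p = stats.ttest_ind(dose_groups[i], control,
                               alternative="greater")
        if p < alpha:
            med = i          # keep walking down while rejections continue
        else:
            break            # first non-rejection stops the sequence
    return med

x = np.linspace(0.0, 1.0, 20)            # deterministic "samples" for demo
control = x
doses = [x + 0.0, x + 1.0, x + 1.5]      # dose effects: none, large, large
print(med_fixed_sequence(control, doses))  # -> 1 (second dose is the MED)
```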
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
Institute of Scientific and Technical Information of China (English)
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
International Nuclear Information System (INIS)
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed. (general)
Latitude and Power Characteristics of Solar Activity at the End of the Maunder Minimum
Ivanov, V. G.; Miletsky, E. V.
2017-12-01
Two important sources of information about sunspots in the Maunder minimum are the Spörer catalog (Spörer, 1889) and observations of the Paris observatory (Ribes and Nesme-Ribes, 1993), which cover in total the last quarter of the 17th and the first two decades of the 18th century. These data, in particular, contain information about sunspot latitudes. As we showed in (Ivanov et al., 2011; Ivanov and Miletsky, 2016), dispersions of sunspot latitude distributions are tightly related to sunspot indices, and we can estimate the level of solar activity in the past using a method which is not based on direct counting of sunspots and is weakly affected by loss of observational data. The latitude distributions of sunspots at the time of transition from the Maunder minimum to the regular regime of solar activity proved to be wide enough. This gives evidence in favor of, first, a not very low cycle no. -3 (1712-1723), with a maximum Wolf number W = 100 ± 50, and, second, nonzero activity in the maximum of cycle no. -4 (1700-1711), W = 60 ± 45. Therefore, the latitude distributions at the end of the Maunder minimum are in better agreement with the traditional Wolf numbers and the new revisited indices of activity SN and GN (Clette et al., 2014; Svalgaard and Schatten, 2016) than with the GSN (Hoyt and Schatten, 1998); the latter provides a much lower level of activity in this epoch.
Minimum Moduli in Von Neumann Algebras | Gopalraj | Quaestiones ...
African Journals Online (AJOL)
In this paper we answer a question raised in [12] in the affirmative, namely that the essential minimum modulus of an element in a von Neumann algebra, relative to any norm closed two-sided ideal, is equal to the minimum modulus of the element perturbed by an element from the ideal. As a corollary of this result, we ...
Subjective well-being and minimum wages: Evidence from U.S. states.
Kuroki, Masanori
2018-02-01
This paper investigates whether increases in minimum wages are associated with higher life satisfaction by using monthly-level state minimum wages and individual-level data from the 2005-2010 Behavioral Risk Factor Surveillance System. The magnitude I find suggests that a 10% increase in the minimum wage is associated with a 0.03-point increase in life satisfaction for workers without a high school diploma, on a 4-point scale. Contrary to popular belief that higher minimum wages hurt business owners, I find little evidence that higher minimum wages lead to the loss of well-being among self-employed people. Copyright © 2017 John Wiley & Sons, Ltd.
Energy Technology Data Exchange (ETDEWEB)
Kirchhof, K. [Universitaetsklinikum Dresden (Germany). Inst. und Poliklinik fuer Radiologische Diagnostik; Wohlgemuth, W.A.; Berlis, A. [Klinikum Augsburg (Germany). Klinik fuer Diagnostische Radiologie und Neuroradiologie
2010-12-15
Purpose: To estimate the minimum dose needed at follow-up cranial computed tomography (CCT) to reliably determine ventricular width in children with hydrocephalus. Materials and Methods: For the study, a phantom was created using the calvarium of an infant which was filled with gelatin and the shaped inner cones of two carrots serving as lateral ventricles. The phantom was scanned ten times with two multi-slice CTs (LightSpeed Ultra, GE, and Somatom Sensation, Siemens), using a tube current of 400, 350, 300, 250, 200, 150, and 100 mA, and a tube voltage of 140, 120, 100, and 80 kV. The width of both lateral ventricles was measured at 4 sites. The values derived from scans performed at 380 / 400 mA and 140 kV (LightSpeed/Somatom) served as a reference. Measurements scored 1 point if they did not differ by more than 0.5 mm from the reference values. Results: The radiation dose can be reduced from 61.0 mGy to 9.2 mGy (15.1 %) with LightSpeed and from 55.0 mGy to 8.0 mGy (14.6 %) with Somatom without impairing the reliability of ventricular width measurements. However, in the case of both scanners, certain combinations of tube voltage and current yielded less reliable measurements although the dose was higher and the pixel noise was lower. Conclusion: There is no single cut-off dose or setting for tube voltage and current which guarantees reliable ventricular width measurements with the least radiation exposure for both scanners. As a guideline, it is safe to use the standard protocols with a reduced tube voltage of 100 kV. (orig.)
Six months into Myanmar's minimum wage: Reflecting on progress ...
International Development Research Centre (IDRC) Digital Library (Canada)
2016-04-25
Apr 25, 2016 ... Participants examined recent results from an IDRC-funded enterprise survey, ... of a minimum wage, and how they have coped with the new situation.” ... Debate on the impact of minimum wages on employment continues ...
Do minimum wages reduce poverty? Evidence from Central America ...
International Development Research Centre (IDRC) Digital Library (Canada)
2010-12-16
Dec 16, 2010 ... Raising minimum wages has traditionally been considered a way to protect poor ... However, the effect of raising minimum wages remains an empirical question ...
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to perform sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
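The quoted optimum can be evaluated directly; it sits just above the information-theoretic lower bound log2(8!) ≈ 15.299 comparisons:

```python
# Evaluate the minimum average depth 620160/8! quoted above as a decimal
# number of comparisons, and compare it with the entropy lower bound.
from fractions import Fraction
from math import factorial, log2

avg_depth = Fraction(620160, factorial(8))   # exact rational value
lower_bound = log2(factorial(8))             # log2(40320) ~ 15.299
print(float(avg_depth))                      # -> 15.380952380952381
```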
Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals....... The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared...
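The LCMV filter referenced above has a standard closed form. A minimal sketch, assuming the noise covariance R and constraint matrix C have already been estimated (e.g. by IAA); this is the textbook expression, not the paper's pipeline, and the matrices below are synthetic:

```python
# Textbook linearly constrained minimum variance (LCMV) weights:
#   w = R^-1 C (C^H R^-1 C)^-1 f
# minimizing w^H R w subject to C^H w = f. Inputs here are synthetic.
import numpy as np

def lcmv_weights(R, C, f):
    """LCMV filter weights for covariance R, constraints C^H w = f."""
    RinvC = np.linalg.solve(R, C)                       # R^-1 C
    return RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
R = A @ A.T + 6.0 * np.eye(6)      # synthetic SPD noise covariance
C = rng.normal(size=(6, 2))        # two linear constraints
f = np.array([1.0, 0.0])           # distortionless in the first constraint
w = lcmv_weights(R, C, f)
```

By construction the constraints are met exactly (C^H w = f), while the output noise variance w^H R w is minimized.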
Coupling between minimum scattering antennas
DEFF Research Database (Denmark)
Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
29 CFR 510.23 - Agricultural activities eligible for minimum wage phase-in.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Agricultural activities eligible for minimum wage phase-in..., DEPARTMENT OF LABOR REGULATIONS IMPLEMENTATION OF THE MINIMUM WAGE PROVISIONS OF THE 1989 AMENDMENTS TO THE... eligible for minimum wage phase-in. Agriculture activities eligible for an extended phase-in of the minimum...
Is a Minimum Wage an Appropriate Instrument for Redistribution?
A.A.F. Gerritsen (Aart); B. Jacobs (Bas)
2016-01-01
We analyze the redistributional (dis)advantages of a minimum wage over income taxation in competitive labor markets, without imposing assumptions on the (in)efficiency of labor rationing. Compared to a distributionally equivalent tax change, a minimum-wage increase raises involuntary
A model for estimating the minimum number of offspring to sample in studies of reproductive success.
Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M
2011-01-01
Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ² goodness-of-fit test p value < 0.05) from the true reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
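Stage two of the model above can be sketched as follows: draw per-parent offspring counts from a negative binomial parameterized by mean and variance, then subsample offspring for parentage assignment. Parameter values are illustrative, not the fitted salmonid estimates:

```python
# Hedged sketch of the simulation stage: negative binomial reproductive
# success with given mean mu and variance v (v > mu), then subsampling of
# offspring as in a parentage study. Not the authors' code or parameters.
import numpy as np

def simulate_offspring(n_parents, mu, v, rng):
    """Per-parent offspring counts, NB with mean mu and variance v."""
    p = mu / v                    # NB success probability (requires v > mu)
    r = mu * p / (1.0 - p)        # NB dispersion parameter
    return rng.negative_binomial(r, p, size=n_parents)

rng = np.random.default_rng(3)
counts = simulate_offspring(200, mu=10.0, v=40.0, rng=rng)
# Assign parentage to a subsample of 100 offspring drawn from the cohort
offspring_parents = np.repeat(np.arange(200), counts)
sampled = rng.choice(offspring_parents, size=100)
```

Comparing the parent frequencies in `sampled` against the simulated `counts` (e.g. with a χ² goodness-of-fit test) reproduces the kind of sampling-adequacy check the abstract describes.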
The impact of the minimum wage on health.
Andreyeva, Elena; Ukert, Benjamin
2018-03-07
This study evaluates the effect of minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System, and employ a difference-in-differences strategy that utilizes time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases days with functional limitations while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and decrease in fruit and vegetable intake are driven by the older population, married, and whites. The improvement in self-reported health is especially strong among non-whites, females, and married.
Setting a minimum age for juvenile justice jurisdiction in California.
S Barnert, Elizabeth; S Abrams, Laura; Maxson, Cheryl; Gase, Lauren; Soung, Patricia; Carroll, Paul; Bath, Eraka
2017-03-13
Purpose Despite the existence of minimum age laws for juvenile justice jurisdiction in 18 US states, California has no explicit law that protects children (i.e. youth less than 12 years old) from being processed in the juvenile justice system. In the absence of a minimum age law, California lags behind other states and international practice and standards. The paper aims to discuss these issues. Design/methodology/approach In this policy brief, academics across the University of California campuses examine current evidence, theory, and policy related to the minimum age of juvenile justice jurisdiction. Findings Existing evidence suggests that children lack the cognitive maturity to comprehend or benefit from formal juvenile justice processing, and diverting children from the system altogether is likely to be more beneficial for the child and for public safety. Research limitations/implications Based on current evidence and theory, the authors argue that minimum age legislation that protects children from contact with the juvenile justice system and treats them as children in need of services and support, rather than as delinquents or criminals, is an important policy goal for California and for other national and international jurisdictions lacking a minimum age law. Originality/value California has no law specifying a minimum age for juvenile justice jurisdiction, meaning that young children of any age can be processed in the juvenile justice system. This policy brief provides a rationale for a minimum age law in California and other states and jurisdictions without one.
Constrained Optimization of MIMO Training Sequences
Directory of Open Access Journals (Sweden)
Coon Justin P
2007-01-01
Full Text Available Multiple-input multiple-output (MIMO) systems have shown a huge potential for increased spectral efficiency and throughput. With an increasing number of transmitting antennas comes the burden of providing training for channel estimation for coherent detection. In some special cases, training sequences that are optimal in the mean-squared error (MSE) sense have been designed. However, in many practical systems it is not feasible to analytically find optimal solutions and numerical techniques must be used. In this paper, two systems (unique word (UW) single carrier and OFDM with nulled subcarriers) are considered and a method of designing near-optimal training sequences using nonlinear optimization techniques is proposed. In particular, interior-point (IP) algorithms such as the barrier method are discussed. Although the two systems seem unrelated, the cost function, which is the MSE of the channel estimate, is shown to be effectively the same for each scenario. Also, additional constraints, such as peak-to-average power ratio (PAPR), are considered and shown to be easily included in the optimization process. Numerical examples illustrate the effectiveness of the designed training sequences, both in terms of MSE and bit-error rate (BER).
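The cost function in question is the least-squares channel estimation MSE: for y = S h + n with white noise of variance σ², the estimator covariance is σ²(S^H S)^-1, so the design minimizes tr((S^H S)^-1) under power constraints. A numerical sketch comparing two candidate sequences (this only evaluates the cost; it is not the paper's interior-point design):

```python
# Evaluate the training-design cost tr((S^H S)^-1) for a least-squares
# channel estimate, comparing orthogonal-column (DFT-based) training with
# random binary training of the same per-symbol power. Illustrative only.
import numpy as np

def estimation_mse(S, sigma2=1.0):
    """Channel-estimation MSE sigma^2 * tr((S^H S)^-1) for training S."""
    return sigma2 * np.trace(np.linalg.inv(S.conj().T @ S)).real

N, L = 32, 4                                    # training length, channel taps
n = np.arange(N)
S_orth = np.exp(2j * np.pi * np.outer(n, np.arange(L)) / N)  # orthogonal cols
rng = np.random.default_rng(4)
S_rand = rng.choice([-1.0, 1.0], size=(N, L))   # random binary training
print(estimation_mse(S_orth) <= estimation_mse(S_rand))      # orthogonal wins
```

With orthogonal columns S^H S = N I, so the MSE attains its constrained minimum L/N; this is the known closed-form optimum that the paper's numerical method approaches when extra constraints (nulled subcarriers, PAPR) rule out the analytic solution.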
International Nuclear Information System (INIS)
Bavio, José; Marrón, Beatriz
2014-01-01
Quality of service (QoS) for internet traffic management requires good traffic models and good estimation of shared network resources. A link of a network processes all traffic and is designed with a certain capacity C and buffer size B. A Generalized Markov Fluid model (GMFM), introduced by Marrón (2011), is assumed for the sources because it describes the traffic in a versatile way, allows estimation based on traffic traces, and also permits consistent effective bandwidth estimation. QoS, interpreted as buffer overflow probability, can be estimated for the GMFM through effective bandwidth estimation and by solving the optimization problem presented in Courcoubetis (2002), the so-called inf-sup formulas. In this work we implement a code to solve the inf-sup problem and other optimizations related to it, which allow us to do traffic engineering on links of data networks: calculating either the minimum capacity required when QoS and buffer size are given, or the minimum buffer size required when QoS and capacity are given
Eigenvector of gravity gradient tensor for estimating fault dips considering fault type
Kusumoto, Shigekazu
2017-12-01
The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
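The dip-estimation rule above reads angles off the eigenvectors of the (symmetric) gravity gradient tensor on a profile. A minimal 2-D sketch with a synthetic tensor (the function name and values are illustrative, not the Kurehayama Fault data):

```python
# Sketch of the technique described above: eigen-decompose a 2-D gravity
# gradient tensor [[g_xx, g_xz], [g_xz, g_zz]] and read the dips of the
# maximum- and minimum-eigenvalue eigenvectors from horizontal. Per the
# abstract, the maximum eigenvector tracks normal-fault dips and the
# minimum eigenvector tracks reverse-fault dips. Synthetic values only.
import numpy as np

def eigenvector_dips(gxx, gxz, gzz):
    """Return (dip of max-eigenvalue vector, dip of min-eigenvalue vector)
    in degrees from horizontal, in [0, 180)."""
    w, v = np.linalg.eigh(np.array([[gxx, gxz], [gxz, gzz]]))
    # eigh sorts eigenvalues ascending: column 0 -> minimum, 1 -> maximum
    dip_min = np.degrees(np.arctan2(v[1, 0], v[0, 0])) % 180.0
    dip_max = np.degrees(np.arctan2(v[1, 1], v[0, 1])) % 180.0
    return dip_max, dip_min

dip_max, dip_min = eigenvector_dips(gxx=10.0, gxz=5.0, gzz=-10.0)
```

Because the tensor is symmetric, the two eigenvectors are orthogonal, so the two candidate dips always differ by 90 degrees; the fault type decides which one to use.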
Accounting for access costs in validation of soil maps
Yang, Lin; Brus, Dick J.; Zhu, A.X.; Li, Xinming; Shi, Jingjing
2018-01-01
The quality of soil maps can best be estimated by collecting additional data at locations selected by probability sampling. These data can be used in design-based estimation of map quality measures such as the population mean of the squared prediction errors (MSE) for continuous soil maps and
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.
Alvares, Clayton Alcarde; Sentelhas, Paulo César; Stape, José Luiz
2017-09-01
Although Brazil is predominantly a tropical country, frosts are observed with relatively high frequency in the Center-Southern states of the country, affecting mainly agriculture, forestry, and human activities. Therefore, information about the frost climatology is of high importance for planning of these activities. Based on that, the aims of the present study were to develop monthly meteorological (F MET) and agronomic (F AGR) frost day models, based on minimum shelter air temperature (T MN), in order to characterize the temporal and spatial frost day variability in Center-Southern Brazil. Daily minimum air temperature data from 244 weather stations distributed across the study area were used, 195 for developing the models and 49 for validating them. Multivariate regression models were obtained to estimate the monthly T MN, since the frost day models were based on this variable. All T MN regression models were statistically significant (p < 0.05). The resulting monthly frost day maps for the Brazilian region are the first zoning of these variables for the country.
47 CFR 25.205 - Minimum angle of antenna elevation.
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Minimum angle of antenna elevation. 25.205 Section 25.205 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.205 Minimum angle of antenna elevation. (a) Earth station...
Van Dyke, Miriam E; Komro, Kelli A; Shah, Monica P; Livingston, Melvin D; Kramer, Michael R
2018-07-01
Despite substantial declines since the 1960's, heart disease remains the leading cause of death in the United States (US) and geographic disparities in heart disease mortality have grown. State-level socioeconomic factors might be important contributors to geographic differences in heart disease mortality. This study examined the association between state-level minimum wage increases above the federal minimum wage and heart disease death rates from 1980 to 2015 among 'working age' individuals aged 35-64 years in the US. Annual, inflation-adjusted state and federal minimum wage data were extracted from legal databases and annual state-level heart disease death rates were obtained from CDC Wonder. Although most minimum wage and health studies to date use conventional regression models, we employed marginal structural models to account for possible time-varying confounding. Quasi-experimental, marginal structural models accounting for state, year, and state × year fixed effects estimated the association between increases in the state-level minimum wage above the federal minimum wage and heart disease death rates. In models of 'working age' adults (35-64 years old), a $1 increase in the state-level minimum wage above the federal minimum wage was on average associated with ~6 fewer heart disease deaths per 100,000 (95% CI: -10.4, -1.99), or a state-level heart disease death rate that was 3.5% lower per year. In contrast, for older adults (65+ years old) a $1 increase was on average associated with a 1.1% lower state-level heart disease death rate per year (b = -28.9 per 100,000, 95% CI: -71.1, 13.3). State-level economic policies are important targets for population health research. Copyright © 2018 Elsevier Inc. All rights reserved.
An Experimental study on a Method of Computing Minimum flow rate
International Nuclear Information System (INIS)
Cho, Yeon Sik; Kim, Tae Hyun; Kim, Chang Hyun
2009-01-01
Many pump reliability problems in Nuclear Power Plants (NPPs) are attributed to operation of the pump at flow rates well below its best efficiency point (BEP). Generally, the manufacturer and the user try to avert such problems by specifying a minimum flow, below which the pump should not be operated. Pump minimum flow usually involves two considerations. The first is normally termed the 'thermal minimum flow', the flow required to prevent the fluid inside the pump from reaching saturation conditions. The other is often referred to as the 'mechanical minimum flow', the flow required to prevent mechanical damage. However, the criteria for specifying such a minimum flow are not clearly understood by all parties concerned, and the various factors and information needed to compute the minimum flow are not easily available because they are considered proprietary by the pump manufacturers. The objective of this study is to obtain experimental data for computing the minimum flow rate and to understand pump performance under low-flow operation. A test loop consisting of a pump of the type used in NPPs, a water tank, flow-rate instrumentation, and a piping system with flow-control devices was established for this study
International Nuclear Information System (INIS)
Sahan, Muhittin; Yakut, Emre
2016-01-01
In this study, an artificial neural network (ANN) model was used to estimate monthly average global solar radiation on a horizontal surface for 5 selected locations in the Mediterranean region over a period of 18 years (1993-2010). Meteorological and geographical data were taken from the Turkish State Meteorological Service. The ANN architecture is a feed-forward back-propagation model with one hidden layer containing 21 neurons with a hyperbolic tangent sigmoid transfer function, and one output layer using a linear transfer function (purelin). The training algorithm used in the ANN model was the Levenberg-Marquardt back-propagation algorithm (trainlm). Results obtained from the ANN model were compared with measured meteorological values using statistical methods. A correlation coefficient of 97.97 (~98%) was obtained with a root mean square error (RMSE) of 0.852 MJ/m², a mean square error (MSE) of 0.725 MJ/m², a mean absolute bias error (MABE) of 10.659 MJ/m², and a mean absolute percentage error (MAPE) of 4.8%. Results show good agreement between the estimated and measured values of global solar radiation. We suggest that the developed ANN model can be used to predict solar radiation at other locations and conditions.
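The error metrics quoted in this abstract (MSE, RMSE, MABE, MAPE, correlation coefficient) can be computed as below; the radiation values in this sketch are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical measured vs. ANN-estimated radiation values (MJ/m^2)
measured = np.array([12.1, 15.4, 18.9, 22.3, 24.8, 25.6, 24.1, 21.0])
estimated = np.array([12.5, 15.0, 19.3, 21.8, 25.2, 25.1, 24.6, 20.4])

err = estimated - measured
mse = np.mean(err**2)                           # mean square error
rmse = np.sqrt(mse)                             # root mean square error
mabe = np.mean(np.abs(err))                     # mean absolute bias error
mape = 100.0 * np.mean(np.abs(err / measured))  # mean absolute percentage error
r = np.corrcoef(measured, estimated)[0, 1]      # correlation coefficient
```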
Parameterization of ion channeling half-angles and minimum yields
Energy Technology Data Exchange (ETDEWEB)
Doyle, Barney L.
2016-03-15
An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be looked up and read manually were programmed into Excel as handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program thus offers an extremely convenient way to calculate axial and planar half-angles and minimum yields, and the effects of amorphous overlayers on both. The program can calculate these half-angles and minimum yields for 〈u v w〉 axes and [h k l] planes up to (5 5 5). The program is open source and available at http://www.sandia.gov/pcnsc/departments/iba/ibatable.html.
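The abstract does not reproduce the handbook formulas; as a hedged illustration, the Lindhard characteristic angle that underlies axial half-angle estimates can be sketched as follows. The beam/crystal numbers are assumptions, and the handbook parameterizations add further corrections (e.g. for thermal vibrations):

```python
import math

E2 = 14.4  # e^2 in eV*Angstrom (Gaussian units)

def psi1_deg(z1, z2, energy_ev, d_angstrom):
    """Lindhard characteristic angle psi_1 = sqrt(2*Z1*Z2*e^2/(E*d)), in degrees."""
    return math.degrees(math.sqrt(2.0 * z1 * z2 * E2 / (energy_ev * d_angstrom)))

# Example (assumed numbers): 2 MeV He (Z1 = 2) along a Si <110> row
# (Z2 = 14, atomic spacing d ~ 3.84 Angstrom)
angle = psi1_deg(2, 14, 2.0e6, 3.84)
```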
Estimation of the global regularity of a multifractional Brownian motion
DEFF Research Database (Denmark)
Lebovits, Joachim; Podolskij, Mark
This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
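A minimal sketch of the ratio idea for the special case of ordinary Brownian motion (a fractional Brownian motion with constant Hurst parameter H = 0.5), assuming the standard scaling in which the quadratic-variation ratio at two frequencies behaves like 2^(1-2H); this is an illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
# Ordinary Brownian motion on [0, 1] -- an fBm with Hurst parameter H = 0.5
x = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))])

def realized_qv(path, step):
    """Realized quadratic variation using every `step`-th observation."""
    return np.sum(np.diff(path[::step]) ** 2)

# The QV ratio at the two frequencies scales like 2**(1 - 2H), so:
ratio = realized_qv(x, 1) / realized_qv(x, 2)
h_hat = 0.5 * (1.0 - np.log2(ratio))
```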
Genome-wide meta-analysis of myopia and hyperopia provides evidence for replication of 11 loci.
Directory of Open Access Journals (Sweden)
Claire L Simpson
Full Text Available Refractive error (RE) is a complex, multifactorial disorder characterized by a mismatch between the optical power of the eye and its axial length that causes object images to be focused off the retina. The two major subtypes of RE are myopia (nearsightedness) and hyperopia (farsightedness), which represent opposite ends of the distribution of the quantitative measure of spherical refraction. We performed a fixed-effects meta-analysis of genome-wide association results of myopia and hyperopia from 9 studies of European-derived populations: AREDS, KORA, FES, OGP-Talana, MESA, RSI, RSII, RSIII and ERF. One genome-wide significant region was observed for myopia, corresponding to a previously identified myopia locus on 8q12 (p = 1.25×10^-8), which has been reported by Kiefer et al. as significantly associated with myopia age at onset and by Verhoeven et al. as significantly associated with mean spherical-equivalent (MSE) refractive error. We observed two genome-wide significant associations with hyperopia. These regions overlapped with loci on 15q14 (minimum p value = 9.11×10^-11) and 8q12 (minimum p value = 1.82×10^-11) previously reported for MSE and myopia age at onset. We also used an intermarker linkage-disequilibrium-based method for calculating the effective number of tests in targeted regional replication analyses. We analyzed myopia (which represents the closest phenotype in our data to the one used by Kiefer et al.) and showed replication of 10 additional loci associated with myopia previously reported by Kiefer et al. This is the first replication of these loci using myopia as the trait under analysis. "Replication-level" association was also seen between hyperopia and 12 of Kiefer et al.'s published loci. For the loci that show evidence of association with both myopia and hyperopia, the estimated effects of the risk alleles were in opposite directions for the two traits. This suggests that these loci are important contributors to variation of
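The abstract names an intermarker linkage-disequilibrium-based method for the effective number of tests without specifying it; one common choice (Nyholt's eigenvalue-variance formula, an assumption here, not necessarily the method the authors used) can be sketched as:

```python
import numpy as np

def effective_tests(corr):
    """Nyholt's eigenvalue-variance estimate of the effective number of
    independent tests, given an intermarker correlation (LD) matrix."""
    m = corr.shape[0]
    eig = np.linalg.eigvalsh(corr)
    return 1.0 + (m - 1.0) * (1.0 - np.var(eig, ddof=1) / m)

m_indep = effective_tests(np.eye(10))        # independent markers -> 10
m_corr = effective_tests(np.ones((10, 10)))  # fully correlated markers -> 1
```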
thermal degradation and estimation of dietary intakes of vitamin c
African Journals Online (AJOL)
IBRAHIM GARBA
ABSTRACT. Thermal degradation of vitamin C in eight different vegetables was determined. These comprised onion, tomato, red pepper, spinach, okra, green beans, cauliflower, and cabbage. Maximum degradation was observed in tomato, with an 83% loss, while the minimum loss of 37% was in red pepper. An estimate ...
26 CFR 5c.168(f)(8)-4 - Minimum investment of lessor.
2010-04-01
... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Minimum investment of lessor. 5c.168(f)(8)-4....168(f)(8)-4 Minimum investment of lessor. (a) Minimum investment. Under section 168(f)(8)(B)(ii), an... has a minimum at risk investment which, at the time the property is placed in service under the lease...
Correlation Dimension Estimates of Global and Local Temperature Data.
Wang, Qiang
1995-11-01
The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either the global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or the local dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias, but neither is it a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.
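The Mikosch-Wang Hill-type estimate is not reproduced in the abstract; as a minimal illustration of correlation-dimension estimation, the classical correlation-sum (Grassberger-Procaccia) approach on synthetic 2-D data looks like this. A low-dimensional attractor would instead yield a small, possibly non-integer slope:

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.random((1000, 2))   # a genuinely 2-D stochastic process

# Pairwise distances (upper triangle only)
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
dists = d[np.triu_indices(len(pts), k=1)]

# Correlation sum C(r); its log-log slope estimates the correlation dimension
rs = np.logspace(np.log10(0.05), np.log10(0.2), 10)
C = np.array([(dists < r).mean() for r in rs])
slope = np.polyfit(np.log(rs), np.log(C), 1)[0]
```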
International Nuclear Information System (INIS)
Beer, M.
1980-01-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
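The minimum-variance unbiased combination of correlated estimates that underlies this kind of analysis can be sketched as follows; the covariance matrix and eigenvalue estimates below are hypothetical, not the SAM-CE/VIM results:

```python
import numpy as np

# Hypothetical covariance matrix of three correlated eigenvalue estimates
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 2.0, 0.3],
                [0.2, 0.3, 1.5]])
k = np.array([1.002, 0.998, 1.001])    # the individual estimates

ones = np.ones(3)
v = np.linalg.solve(cov, ones)
w = v / (ones @ v)                     # minimum-variance unbiased weights
k_mv = w @ k                           # combined estimate
var_mv = 1.0 / (ones @ v)              # its variance, never above the best individual
```

With an identity covariance these weights reduce to the simple average; with correlations they can outperform both the simple average and the best single estimate.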
30 CFR 18.97 - Inspection of machines; minimum requirements.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Inspection of machines; minimum requirements... TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Field Approval of Electrically Operated Mining Equipment § 18.97 Inspection of machines; minimum...
Minimum variance Monte Carlo importance sampling with parametric dependence
International Nuclear Information System (INIS)
Ragheb, M.M.H.; Halton, J.; Maynard, C.W.
1981-01-01
An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance-function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension, to biasing or importance-sampling calculations, of a theory on the application of Monte Carlo to the calculation of functional dependences introduced by Frolov and Chentsov, and is a generalization which avoids the nonconvergence to the optimal values that arises in some cases of a multistage variance-reduction method introduced by Spanier. (orig.) [de
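The core idea, estimating the variance as a function of the importance-function parameter from a single stage of sampling and adopting the minimum, can be illustrated on a toy integral. This sketch, including the x^θ family of importance densities, is an assumption for illustration, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(20_000)   # one "single-stage" sample, reused for every theta

def is_estimate(theta):
    """Importance-sampling estimate of I = integral of x^2 on (0,1) = 1/3,
    using the density p(x; theta) = (theta+1) * x**theta."""
    x = u ** (1.0 / (theta + 1.0))           # inverse-CDF sampling from p
    w = x**2 / ((theta + 1.0) * x**theta)    # integrand / density
    return theta, w.mean(), w.var()

# Scan the parameter and adopt the minimum-variance result; theta = 2 makes
# the weights exactly constant (zero variance)
best = min((is_estimate(t) for t in np.linspace(0.0, 4.0, 41)),
           key=lambda r: r[2])
```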
Elemental GCR Observations during the 2009-2010 Solar Minimum Period
Lave, K. A.; Israel, M. H.; Binns, W. R.; Christian, E. R.; Cummings, A. C.; Davis, A. J.; deNolfo, G. A.; Leske, R. A.; Mewaldt, R. A.; Stone, E. C.;
2013-01-01
Using observations from the Cosmic Ray Isotope Spectrometer (CRIS) onboard the Advanced Composition Explorer (ACE), we present new measurements of the galactic cosmic ray (GCR) elemental composition and energy spectra for the species B through Ni in the energy range approx. 50-550 MeV/nucleon during the record setting 2009-2010 solar minimum period. These data are compared with our observations from the 1997-1998 solar minimum period, when solar modulation in the heliosphere was somewhat higher. For these species, we find that the intensities during the 2009-2010 solar minimum were approx. 20% higher than those in the previous solar minimum, and in fact were the highest GCR intensities recorded during the space age. Relative abundances for these species during the two solar minimum periods differed by small but statistically significant amounts, which are attributed to the combination of spectral shape differences between primary and secondary GCRs in the interstellar medium and differences between the levels of solar modulation in the two solar minima. We also present the secondary-to-primary ratios B/C and (Sc+Ti+V)/Fe for both solar minimum periods, and demonstrate that these ratios are reasonably well fit by a simple "leaky-box" galactic transport model that is combined with a spherically symmetric solar modulation model.
Cokriging model for estimation of water table elevation
International Nuclear Information System (INIS)
Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.
1989-01-01
In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrates the advantages of the cokriging model
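Ordinary kriging, the single-variable special case of the cokriging estimator described above, can be sketched as follows; the well locations, elevations, and Gaussian covariance model are illustrative assumptions, not the Oak Ridge data:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.5, 4.0])      # well locations along the section
zs = np.array([10.2, 10.6, 11.1, 11.8])  # water table elevations

def cov(h, sill=1.0, length=2.0):
    """Gaussian covariance model (an illustrative choice)."""
    return sill * np.exp(-(h / length) ** 2)

def krige(x0):
    """Minimum-variance unbiased estimate and kriging variance at x0."""
    n = len(xs)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(np.abs(xs[:, None] - xs[None, :]))
    K[n, :n] = K[:n, n] = 1.0            # unbiasedness (weights sum to 1)
    rhs = np.append(cov(np.abs(xs - x0)), 1.0)
    sol = np.linalg.solve(K, rhs)
    w, mu = sol[:n], sol[n]
    return w @ zs, cov(0.0) - rhs[:n] @ w - mu   # estimate, kriging variance
```

At a sampled well the estimator interpolates exactly and the kriging variance vanishes, which is a quick sanity check on any implementation.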
Application of the Evidence Procedure to the Estimation of Wireless Channels
Directory of Open Access Journals (Sweden)
Fleury Bernard H
2007-01-01
Full Text Available We address the application of the Bayesian evidence procedure to the estimation of wireless channels. The proposed scheme is based on relevance vector machines (RVM), originally proposed by M. Tipping. RVMs make it possible to estimate channel parameters, as well as to assess the number of multipath components constituting the channel, within the Bayesian framework by locally maximizing the evidence integral. We show that, in the case of channel sounding using pulse-compression techniques, it is possible to cast the channel model as a general linear model, thus allowing RVM methods to be applied. We extend the original RVM algorithm to the multiple-observation/multiple-sensor scenario by proposing a new graphical model to represent multipath components. Through the analysis of the evidence procedure we develop a thresholding algorithm that is used in estimating the number of components. We also discuss the relationship of the evidence procedure to the standard minimum description length (MDL) criterion, and show that the maximum of the evidence corresponds to the minimum of the MDL criterion. The applicability of the proposed scheme is demonstrated with synthetic as well as real-world channel measurements, and a performance increase over the conventional MDL criterion applied to maximum-likelihood estimates of the channel parameters is observed.
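The MDL criterion discussed above can be illustrated with a simple model-order selection problem: polynomial order selection on synthetic data. This two-part MDL form is a standard textbook variant, an assumption here, not the paper's channel-specific criterion:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
t = np.linspace(-1.0, 1.0, N)
# Synthetic data from a cubic (order-3) model plus noise
y = 1.0 - 2.0 * t + 0.5 * t**2 + 0.1 * t**3 + rng.normal(0.0, 0.05, N)

def mdl(order):
    """Two-part MDL score: data-fit term plus parameter cost."""
    coeffs = np.polyfit(t, y, order)
    sigma2 = np.mean((y - np.polyval(coeffs, t)) ** 2)
    return 0.5 * N * np.log(sigma2) + 0.5 * (order + 1) * np.log(N)

# The order minimizing MDL balances fit against model complexity
best_order = min(range(8), key=mdl)
```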
Minimum qualifications for nuclear criticality safety professionals
International Nuclear Information System (INIS)
Ketzlach, N.
1990-01-01
A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals
Verification of Minimum Detectable Activity for Radiological Threat Source Search
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
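The paper's real-time MDA method is not given in the abstract; as an assumed baseline, the classical Currie formulation of the minimum detectable activity can be sketched as follows (all numbers hypothetical):

```python
def mda_bq(background_counts, efficiency, live_time_s):
    """Currie detection limit L_D = 2.71 + 4.65*sqrt(B), converted to activity."""
    ld = 2.71 + 4.65 * background_counts ** 0.5   # counts
    return ld / (efficiency * live_time_s)        # Bq

# Hypothetical airborne-survey numbers: 400 background counts in a 60 s
# window at 5% absolute detection efficiency
mda = mda_bq(400, 0.05, 60.0)
```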
Heidari, Mehdi Haji; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-01-01
In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signa...
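The MV (Capon) beamformer referred to above computes weights w = R^-1 a / (a^H R^-1 a); a minimal sketch with a simulated sample covariance and diagonal loading (both assumptions, and diagonal loading is a common remedy for exactly the covariance-estimation inaccuracy mentioned) is:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 8                                  # array elements
a = np.ones(m, dtype=complex)          # steering vector (broadside)

# Sample covariance from complex snapshots, plus diagonal loading
snaps = rng.normal(size=(m, 100)) + 1j * rng.normal(size=(m, 100))
R = snaps @ snaps.conj().T / 100 + 1e-2 * np.eye(m)

# Capon / minimum-variance weights: w = R^-1 a / (a^H R^-1 a)
Ria = np.linalg.solve(R, a)
w = Ria / (a.conj() @ Ria)
```

The defining property is the distortionless constraint w^H a = 1: signal from the look direction passes with unit gain while output power from everything else is minimized.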
Setting a minimum age for juvenile justice jurisdiction in California
Barnert, Elizabeth S.; Abrams, Laura S.; Maxson, Cheryl; Gase, Lauren; Soung, Patricia; Carroll, Paul; Bath, Eraka
2018-01-01
Purpose Despite the existence of minimum age laws for juvenile justice jurisdiction in 18 US states, California has no explicit law that protects children (i.e. youth less than 12 years old) from being processed in the juvenile justice system. In the absence of a minimum age law, California lags behind other states and international practice and standards. The paper aims to discuss these issues. Design/methodology/approach In this policy brief, academics across the University of California campuses examine current evidence, theory, and policy related to the minimum age of juvenile justice jurisdiction. Findings Existing evidence suggests that children lack the cognitive maturity to comprehend or benefit from formal juvenile justice processing, and diverting children from the system altogether is likely to be more beneficial for the child and for public safety. Research limitations/implications Based on current evidence and theory, the authors argue that minimum age legislation that protects children from contact with the juvenile justice system and treats them as children in need of services and support, rather than as delinquents or criminals, is an important policy goal for California and for other national and international jurisdictions lacking a minimum age law. Originality/value California has no law specifying a minimum age for juvenile justice jurisdiction, meaning that young children of any age can be processed in the juvenile justice system. This policy brief provides a rationale for a minimum age law in California and other states and jurisdictions without one. Paper type Conceptual paper PMID:28299968
Planetary tides during the Maunder sunspot minimum
International Nuclear Information System (INIS)
Smythe, C.M.; Eddy, J.A.
1977-01-01
Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD 1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well established 11 year sunspot cycle. This places a new and difficult constraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The 'Maunder Minimum' was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70 year period and the 11 year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11 year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is, that the positions of the slower moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were at the time reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test
Conklin, Annalijn I; Ponce, Ninez A; Frank, John; Nandi, Arijit; Heymann, Jody
2016-01-01
To describe the relationship between minimum wage and overweight and obesity across countries at different levels of development. A cross-sectional analysis of 27 countries with data on the legislated minimum wage level linked to socio-demographic and anthropometry data of 190,892 non-pregnant adult women (24-49 y) from the Demographic and Health Survey. We used multilevel logistic regression models to condition on country- and individual-level potential confounders, and post-estimation of average marginal effects to calculate the adjusted prevalence difference. We found that the association between minimum wage and overweight/obesity was independent of individual-level SES and confounders, and showed a reversed pattern by country development stage. The adjusted overweight/obesity prevalence difference in low-income countries was an average increase of about 0.1 percentage points (PD 0.075 [0.065, 0.084]), and an average decrease of 0.01 percentage points in middle-income countries (PD -0.014 [-0.019, -0.009]). The adjusted obesity prevalence difference in low-income countries was an average increase of 0.03 percentage points (PD 0.032 [0.021, 0.042]) and an average decrease of 0.03 percentage points in middle-income countries (PD -0.032 [-0.036, -0.027]). This is among the first studies to examine the potential impact of improved wages on an important precursor of non-communicable diseases globally. Among countries with a modest level of economic development, a higher minimum wage was associated with lower levels of obesity.
Estimating the effects of wages on obesity.
Kim, DaeHwan; Leigh, John Paul
2010-05-01
To estimate the effects of wages on obesity and body mass. Data on household heads, aged 20 to 65 years, with full-time jobs were drawn from the Panel Study of Income Dynamics for 2003 to 2007. The Panel Study of Income Dynamics is a nationally representative sample. Instrumental variables (IV) for wages were created using knowledge of computer software and state legal minimum wages. Least squares (linear regression) with corrected standard errors was used to estimate the equations. Statistical tests revealed that both instruments were strong and tests for over-identifying restrictions were favorable. Wages were found to be predictive: the results suggest that low wages increase obesity prevalence and body mass.
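The instrumental-variables logic used in the paper can be sketched with a toy two-stage least squares example; the data-generating process below is a hypothetical simulation, not the PSID data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
z = rng.normal(size=n)                  # instrument (e.g. legal minimum wage)
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor ("wage")
y = -0.5 * x + u + rng.normal(size=n)   # outcome ("body mass"); true effect -0.5

# Naive OLS is biased because u drives both x and y
beta_ols = np.polyfit(x, y, 1)[0]

# Two-stage least squares: regress x on z, then y on the fitted values;
# the instrument isolates the variation in x not driven by u
x_hat = np.polyval(np.polyfit(z, x, 1), z)
beta_iv = np.polyfit(x_hat, y, 1)[0]
```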
29 CFR 510.22 - Industries eligible for minimum wage phase-in.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Industries eligible for minimum wage phase-in. 510.22... REGULATIONS IMPLEMENTATION OF THE MINIMUM WAGE PROVISIONS OF THE 1989 AMENDMENTS TO THE FAIR LABOR STANDARDS ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in...
Western Australian Public Opinions of a Minimum Pricing Policy for Alcohol: Study Protocol.
Keatley, David A; Carragher, Natacha; Chikritzhs, Tanya; Daube, Mike; Hardcastle, Sarah J; Hagger, Martin S
2015-11-18
Excessive alcohol consumption has significant adverse economic, social, and health outcomes. Recent estimates suggest that the annual economic costs of alcohol in Australia are up to AUD $36 billion. Policies influencing price have been demonstrated to be very effective in reducing alcohol consumption and alcohol-related harms. Interest in minimum pricing has gained traction in recent years. However, there has been little research investigating the level of support for the public interest case of minimum pricing in Australia. This article describes the protocol for a study exploring Western Australian (WA) public knowledge, understanding, and reaction to a proposed minimum price policy per standard drink. The study will employ a qualitative methodological design. Participants will be recruited from a wide variety of backgrounds, including ethnic minorities, blue and white collar workers, unemployed, students, and elderly/retired populations, to participate in focus groups. Focus group participants will be asked about their knowledge of, and initial reactions to, the proposed policy and encouraged to discuss how such a proposal may affect their own alcohol use and alcohol consumption at the population level. Participants will also be asked to discuss potential avenues for increasing acceptability of the policy. The focus groups will adopt a semi-structured, open-ended approach guided by a question schedule. The schedule will be based on feedback from pilot samples, previous research, and a steering group comprising experts in alcohol policy and pricing. The study is expected to take approximately 14 months to complete. The findings will be of considerable interest and relevance to government officials, policy makers, researchers, advocacy groups, alcohol retail and licensed establishments and organizations, city and town planners, police, and other stakeholder organizations.
Estimation of global solar radiation from sunshine hours for Warri
African Journals Online (AJOL)
DJFLEX
Multiple linear regression models were developed to estimate the monthly mean daily sunshine hours using four parameters over a period of eleven years (1997 to 2007) for Warri, Nigeria (latitude 5° 34' 21.0''); the parameters include relative humidity, maximum and minimum temperature, rainfall and wind speed.
13 CFR 107.830 - Minimum duration/term of financing.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Minimum duration/term of financing... INVESTMENT COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.830 Minimum duration/term of financing. (a...
42 CFR 84.117 - Gas mask containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Gas mask containers; minimum requirements. 84.117... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Gas Masks § 84.117 Gas mask containers; minimum requirements. (a) Gas masks shall be equipped with a substantial...
International Nuclear Information System (INIS)
Kwee, Regina
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements of charged-particle densities. These measurements will help to constrain various models that phenomenologically describe soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first-level trigger situated in the forward regions with 2.2 < |η| < 3.8 has proven to select pp collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and the central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements, as well as studies of possible bias sources, will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
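The paper's derivative-free unbiased minimum-variance filter cannot be reproduced from the abstract; the core idea of estimating the state and an unknown measurement bias simultaneously can be sketched with an ordinary linear Kalman filter whose state is augmented with the bias. All dynamics, sensors, and noise values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
b_true = 5.0                      # unknown constant measurement bias

F = np.eye(2)                     # augmented state: [position, bias]
Q = np.diag([0.1, 0.0])           # position random-walks; bias is constant
H = np.array([[1.0, 0.0],         # sensor 1: unbiased position measurement
              [1.0, 1.0]])        # sensor 2: position plus the unknown bias
R = np.eye(2)                     # measurement noise covariance

x_true = 0.0
x_est = np.zeros(2)               # start with no knowledge of the bias
P = 10.0 * np.eye(2)

for _ in range(300):
    x_true += rng.normal(0.0, np.sqrt(0.1))
    z = np.array([x_true, x_true + b_true]) + rng.normal(0.0, 1.0, size=2)
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
# x_est[1] now tracks the unknown bias b_true
```

Note the two-sensor geometry is what makes the bias observable; with a single biased sensor and matching dynamics, position and bias could not be separated.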
Do Minimum Wages in Latin America and the Caribbean Matter? Evidence from 19 Countries
DEFF Research Database (Denmark)
Kristensen, Nicolai; Cunningham, Wendy
of if and how minimum wages affect wage distributions in LAC countries. Although there is no single minimum wage institution in the LAC region, we find regional trends. Minimum wages affect the wage distribution in both the formal and, especially, the informal sector, both at the minimum wage and at multiples of the minimum. The minimum does not uniformly benefit low-wage workers: in countries where the minimum wage is relatively low compared to mean wages, the minimum wage affects the more disadvantaged segments of the labor force, namely informal sector workers, women, young and older workers, and the low skilled...
29 CFR 552.100 - Application of minimum wage and overtime provisions.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Application of minimum wage and overtime provisions. 552... § 552.100 Application of minimum wage and overtime provisions. (a)(1) Domestic service employees must receive for employment in any household a minimum wage of not less than that required by section 6(a) of...
International Nuclear Information System (INIS)
Herzog, Ulrike; Bergou, Janos A.
2004-01-01
We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is ambiguous discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of getting an inconclusive outcome, where the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum-error probability achievable in ambiguous discrimination and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case where the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability.
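For the special case of two equiprobable pure states, both optima have well-known closed forms (the Helstrom minimum-error bound and the Ivanovic-Dieks-Peres failure probability), and the factor-of-two relation quoted in the abstract can be checked numerically:

```python
import numpy as np

# For two equiprobable pure states with overlap s = |<psi1|psi2>|:
# Helstrom minimum-error probability and IDP minimum failure probability.
def helstrom_error(s):
    return 0.5 * (1.0 - np.sqrt(1.0 - s**2))

def idp_failure(s):
    return s  # equal prior probabilities

# the failure probability is at least twice the error probability
for s in np.linspace(0.0, 1.0, 101):
    assert idp_failure(s) >= 2.0 * helstrom_error(s) - 1e-12

print(helstrom_error(0.6), idp_failure(0.6))  # approximately 0.1 and 0.6
```

The inequality reduces to s(1 - s) >= 0, so it is tight only for identical (s = 1) or orthogonal (s = 0) states; the abstract's result generalizes this to arbitrary mixed states and priors.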
Minimum alcohol pricing policies in practice: A critical examination of implementation in Canada.
Thompson, Kara; Stockwell, Tim; Wettlaufer, Ashley; Giesbrecht, Norman; Thomas, Gerald
2017-02-01
There is an interest globally in using Minimum Unit Pricing (MUP) of alcohol to promote public health. Canada is the only country to have both implemented and evaluated some forms of minimum alcohol prices, albeit in ways that fall short of MUP. To inform these international debates, we describe the degree to which minimum alcohol prices in Canada meet recommended criteria for being an effective public health policy. We collected data on the implementation of minimum pricing with respect to (1) breadth of application, (2) indexation to inflation and (3) adjustments for alcohol content. Some jurisdictions have implemented recommended practices with respect to minimum prices; however, the full harm reduction potential of minimum pricing is not fully realised due to incomplete implementation. Key concerns include the following: (1) the exclusion of minimum prices for several beverage categories, (2) minimum prices below the recommended minima and (3) prices are not regularly adjusted for inflation or alcohol content. We provide recommendations for best practices when implementing minimum pricing policy.
19 CFR 144.33 - Minimum quantities to be withdrawn.
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Minimum quantities to be withdrawn. 144.33 Section 144.33 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... Warehouse § 144.33 Minimum quantities to be withdrawn. Unless by special authority of the Commissioner of...
Bayesian ensemble approach to error estimation of interatomic potentials
DEFF Research Database (Denmark)
Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.
2004-01-01
Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method...... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...
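A minimal sketch of the ensemble idea on synthetic data (a straight-line fit, not the paper's interatomic potentials): parameters are sampled with density proportional to exp(-C/(2T)), with the temperature fixed by the minimum cost (T = 2*C_min/N is one convention in this framework), and the spread of ensemble predictions provides the error bar.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)   # synthetic data

def cost(theta):
    a, b = theta
    return np.sum((y - (a + b * x))**2)

# least-squares minimum fixes the ensemble temperature
A = np.vstack([np.ones_like(x), x]).T
theta_min = np.linalg.lstsq(A, y, rcond=None)[0]
T = 2.0 * cost(theta_min) / y.size

# Metropolis sampling of the parameter ensemble
samples = []
theta, c = theta_min.copy(), cost(theta_min)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    cp = cost(prop)
    if cp < c or rng.random() < np.exp(-(cp - c) / (2 * T)):
        theta, c = prop, cp
    samples.append(theta.copy())
samples = np.asarray(samples[5000:])

# ensemble spread of the prediction at x = 0.5 gives the error bar
pred = samples[:, 0] + samples[:, 1] * 0.5
print(pred.mean(), pred.std())
```

In the paper the same machinery runs over potential parameters fitted to force databases, and the ensemble spread yields error bars on derived quantities such as elastic constants.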
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility during the design stage already by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived based on standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
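The calibration logic can be reduced to a few lines (illustrative numbers and a simplified SNR model, not the paper's full expressions): once the calibration factor for a given scanner and coil has been measured, the minimum detectable concentration at a target SNR follows without a pilot experiment.

```python
# Simplified model (hypothetical numbers): SNR = k * C * sqrt(t_acq), where
# k is the measured calibration factor for this scanner/coil configuration.
def min_detectable_concentration(k, t_acq, snr_min=5.0):
    """k in SNR per (mM * sqrt(s)); t_acq in seconds; returns mM."""
    return snr_min / (k * t_acq**0.5)

# calibration scan: a 10 mM reference reaches SNR 50 in 100 s -> k = 0.5
k = 50.0 / (10.0 * 100.0**0.5)
print(min_detectable_concentration(k, t_acq=900.0))  # about 0.33 mM
```

The point of the calibration factor is that hardware-dependent terms omitted from textbook sensitivity formulas are absorbed into k, so the estimate remains useful even where those formulas' assumptions break down.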
Quality control methods in accelerometer data processing: defining minimum wear time.
Directory of Open Access Journals (Sweden)
Carly Rich
Full Text Available BACKGROUND: When using accelerometers to measure physical activity, researchers need to determine whether subjects have worn their device for a sufficient period to be included in analyses. We propose a minimum wear criterion using population-based accelerometer data, and explore the influence of gender and the purposeful inclusion of children with weekend data on reliability. METHODS: Accelerometer data obtained during the age seven sweep of the UK Millennium Cohort Study were analysed. Children were asked to wear an ActiGraph GT1M accelerometer for seven days. Reliability coefficients(r of mean daily counts/minute were calculated using the Spearman-Brown formula based on the intraclass correlation coefficient. An r of 1.0 indicates that all the variation is between- rather than within-children and that measurement is 100% reliable. An r of 0.8 is often regarded as acceptable reliability. Analyses were repeated on data from children who met different minimum daily wear times (one to 10 hours and wear days (one to seven days. Analyses were conducted for all children, separately for boys and girls, and separately for children with and without weekend data. RESULTS: At least one hour of wear time data was obtained from 7,704 singletons. Reliability increased as the minimum number of days and the daily wear time increased. A high reliability (r = 0.86 and sample size (n = 6,528 was achieved when children with ≥ two days lasting ≥10 hours/day were included in analyses. Reliability coefficients were similar for both genders. Purposeful sampling of children with weekend data resulted in comparable reliabilities to those calculated independent of weekend wear. CONCLUSION: Quality control procedures should be undertaken before analysing accelerometer data in large-scale studies. Using data from children with ≥ two days lasting ≥10 hours/day should provide reliable estimates of physical activity. It's unnecessary to include only children
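The reliability calculation described rests on the Spearman-Brown formula, which extends a single-day intraclass correlation (ICC) to k days of monitoring; a short sketch (the 0.5 single-day ICC below is illustrative, not the study's estimate):

```python
# Spearman-Brown prophecy formula: reliability of the mean of k wear days
# given the single-day intraclass correlation coefficient (ICC).
def spearman_brown(icc, k):
    return k * icc / (1.0 + (k - 1.0) * icc)

print(spearman_brown(0.5, 2))   # two valid days already give r of about 0.67

# solved in reverse: number of days needed for a target reliability
def days_needed(icc, target=0.8):
    return target * (1.0 - icc) / (icc * (1.0 - target))

print(days_needed(0.5))         # about 4 days
```

This is why the study can trade off wear-time thresholds against sample size: stricter inclusion criteria raise the per-day ICC but shrink the number of children retained.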
Directory of Open Access Journals (Sweden)
Antonio Ismael Inácio Cardoso
2006-01-01
Full Text Available The objective of this work was to estimate the minimum number of harvests in experiments with cucumber. Data from an experiment comparing 14 Japanese cucumber hybrids in a randomized block design, with four replications and five plants per plot, under protected cultivation, were analyzed. Thirty-two harvests were made, three harvests a week over a period of 72 days, and repeatability coefficients were estimated based on principal component analysis for the number and weight of total and commercial fruits per plant. The results allowed the conclusion that fewer than nine harvests were sufficient to detect yield differences among the hybrids, with 95% probability of identifying the highest-yielding ones.
Estimating solar irradiation in the Arctic
Directory of Open Access Journals (Sweden)
Babar Bilal
2016-01-01
Full Text Available Solar radiation data plays an important role in pre-feasibility studies of solar electricity and/or thermal system installations. Measured solar radiation data is scarcely available due to the high cost of installing and maintaining high-quality solar radiation sensors (pyranometers). Indirectly measured radiation data received from geostationary satellites is unreliable at latitudes above 60 degrees due to the resulting flat viewing angle. In this paper, an empirical method to estimate solar radiation based on minimum climatological data is proposed. Eight sites in Norway are investigated, all of which lie above 60° N. The estimations by the model are compared to ground-measured values; a correlation coefficient of 0.88 was found, while the overall percentage error was −1.1%. The proposed model performs 0.2% better in diurnal and 10.8% better in annual estimations than previous models.
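A hedged sketch of the kind of empirical model and skill scores involved (an Angstrom-Prescott form with illustrative coefficients and toy data, not the paper's fitted model):

```python
import numpy as np

# Angstrom-Prescott form: H/H0 = a + b * (n/N), with H0 the extraterrestrial
# irradiation and n/N the sunshine fraction. Coefficients a, b are illustrative.
def estimate_irradiation(H0, sunshine_fraction, a=0.25, b=0.50):
    return H0 * (a + b * sunshine_fraction)

# skill is reported as a correlation coefficient and an overall % error
def scores(estimated, measured):
    r = np.corrcoef(estimated, measured)[0, 1]
    pct = 100.0 * (estimated.sum() - measured.sum()) / measured.sum()
    return r, pct

measured = np.array([2.1, 3.4, 5.0, 6.2, 4.8])    # kWh/m^2/day (toy data)
H0 = np.array([4.0, 6.0, 8.5, 10.0, 8.0])
sunshine = np.array([0.3, 0.4, 0.5, 0.55, 0.5])
est = estimate_irradiation(H0, sunshine)
r, pct = scores(est, measured)
print(round(r, 3), round(pct, 1))
```

The paper's contribution is choosing inputs that remain available at Arctic sites, where satellite-derived irradiation is unreliable; the scoring step above mirrors how its 0.88 correlation and −1.1% overall error would be computed.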
Estimation of in-vivo pulses in medical ultrasound
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
1994-01-01
An algorithm for the estimation of one-dimensional in-vivo ultrasound pulses is derived. The routine estimates a set of ARMA parameters describing the pulse and uses data from a number of adjacent rf lines. Using multiple lines results in a decrease in variance on the estimated parameters and significantly reduces the risk of terminating the algorithm at a local minimum. Examples from use on synthetic data confirm the reduction in variance and the increased chance of successful minimization termination. Simulations are also reported indicating the relation between the one-dimensional pulse and the three-dimensional, attenuated ultrasound field for a concave transducer. Pulses are estimated from in-vivo liver data showing good resemblance to a pulse measured as the response from a planar reflector and then properly attenuated. The main application for the algorithm is to function......
A Minimum Spanning Tree Representation of Anime Similarities
Wibowo, Canggih Puspo
2016-01-01
In this work, a new way to represent Japanese animation (anime) is presented. We applied a minimum spanning tree to show the relations between anime. The distance between anime is calculated through three similarity measurements, namely crew, score histogram, and topic similarities. Finally, centralities are also computed to reveal the most significant anime. The results show that the minimum spanning tree can be used to determine the similarity between anime. Furthermore, by using centralities c...
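The construction can be sketched as follows (toy distance matrix, not the paper's crew/score/topic similarities): given pairwise distances between titles, Prim's algorithm yields the minimum spanning tree used to lay out the relations.

```python
import numpy as np

# Prim's algorithm on a symmetric distance matrix: repeatedly attach the
# closest out-of-tree node via the cheapest crossing edge.
def prim_mst(dist):
    n = len(dist)
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or dist[u][v] < best[2]):
                    best = (u, v, dist[u][v])
        edges.append(best[:2])
        in_tree.append(best[1])
    return sorted(edges)

titles = ["A", "B", "C", "D"]          # hypothetical anime titles
dist = np.array([[0.0, 0.2, 0.9, 0.8],
                 [0.2, 0.0, 0.3, 0.7],
                 [0.9, 0.3, 0.0, 0.4],
                 [0.8, 0.7, 0.4, 0.0]])
print(prim_mst(dist))  # [(0, 1), (1, 2), (2, 3)]
```

With the tree in hand, node centralities (e.g. degree or betweenness on the MST) single out the most central titles, as the abstract describes.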
Start of Eta Car's X-ray Minimum
Corcoran, Michael F.; Liburd, Jamar; Hamaguchi, Kenji; Gull, Theodore; Madura, Thomas; Teodoro, Mairan; Moffat, Anthony; Richardson, Noel; Russell, Chris; Pollock, Andrew;
2014-01-01
Analysis of Eta Car's X-ray spectrum in the 2-10 keV band, using quicklook data from the X-Ray Telescope on Swift, shows that the flux on July 30, 2014 was (4.9 ± 2.0) × 10^-12 erg s^-1 cm^-2. This flux is nearly equal to the X-ray minimum flux seen by RXTE in 2009, 2003.5, and 1998, and indicates that Eta Car has reached its X-ray minimum, as expected based on the 2024-day period derived from previous 2-10 keV observations with RXTE.
The Impact Of Minimum Wage On Employment Level And ...
African Journals Online (AJOL)
This research work analyzes the critical impact of the minimum wage on employment level and productivity in Nigeria. A brief literature review on wages and their determination is presented. Models of minimum wage effects are examined, including research work done by different economists analyzing it ...
30 CFR 77.606-1 - Rubber gloves; minimum requirements.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Rubber gloves; minimum requirements. 77.606-1... COAL MINES Trailing Cables § 77.606-1 Rubber gloves; minimum requirements. (a) Rubber gloves (lineman's gloves) worn while handling high-voltage trailing cables shall be rated at least 20,000 volts and shall...
Do minimum wages reduce poverty? Evidence from Central America ...
International Development Research Centre (IDRC) Digital Library (Canada)
In all three countries, these multiple minimum wages are negotiated among representatives of the central government, labour unions and the chambers of commerce. Minimum wage legislation applies to all private-sector employees, but in all three countries a large part of the work force is self-employed or works as unpaid ...
The Minimum Wage, Restaurant Prices, and Labor Market Structure
Aaronson, Daniel; French, Eric; MacDonald, James
2008-01-01
Using store-level and aggregated Consumer Price Index data, we show that restaurant prices rise in response to minimum wage increases under several sources of identifying variation. We introduce a general model of employment determination that implies minimum wage hikes cause prices to rise in competitive labor markets but potentially fall in…
Minenkov, Yury; Chermak, Edrisse; Cavallo, Luigi
2015-01-01
The performance of the domain based local pair-natural orbital coupled-cluster (DLPNO-CCSD(T)) method has been tested to reproduce the experimental gas phase ligand dissociation enthalpy in a series of Cu+, Ag+ and Au+ complexes. For 33 Cu+ non-covalent ligand dissociation enthalpies, all-electron calculations with this method result in a MUE below 2.2 kcal/mol, although a MSE of 1.4 kcal/mol indicates systematic underestimation of the experimental values. Inclusion of scalar relativistic effects for Cu, either via effective core potential (ECP) or the Douglas-Kroll-Hess Hamiltonian, reduces the MUE below 1.7 kcal/mol and the MSE to -1.0 kcal/mol. For 24 Ag+ non-covalent ligand dissociation enthalpies, the DLPNO-CCSD(T) method results in a mean unsigned error (MUE) below 2.1 kcal/mol and a vanishing mean signed error (MSE). For 15 Au+ non-covalent ligand dissociation enthalpies, the DLPNO-CCSD(T) method provides larger MUE and MSE, equal to 3.2 and 1.7 kcal/mol, which might be related to poor precision of the experimental measurements. Overall, for the combined dataset of 72 coinage metal ion complexes, DLPNO-CCSD(T) results in a MUE below 2.2 kcal/mol and an almost vanishing MSE. For comparison with computationally cheaper density functional theory (DFT) methods, the routinely used M06 functional results in MUE and MSE equal to 3.6 and -1.7 kcal/mol. Results converge already at a cc-pVTZ quality basis set, making highly accurate DLPNO-CCSD(T) estimates affordable for routine (single-point) calculations on large transition metal complexes of > 100 atoms.
Channel Estimation on the (EW)RLS Algorithm Model of MIMO OFDM in Wireless Communication
Directory of Open Access Journals (Sweden)
Sarnin Suzi Seroja
2016-01-01
(corresponding to different mobility speeds), and Monte Carlo simulations are performed; the MSE and BER performance versus SNR are obtained by averaging over 10000 channel realizations. For comparison, the BER performance is also presented for a perfectly known channel at the receiver. In all the simulations, perfect synchronization between the transmitter and the receiver is assumed.
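The evaluation loop described, channel-estimation MSE versus SNR averaged over many random channel realizations, can be sketched with a toy least-squares pilot estimator standing in for the (EW)RLS:

```python
import numpy as np

rng = np.random.default_rng(0)

# MSE of a least-squares channel estimate from known unit-modulus pilots,
# averaged over Rayleigh channel realizations (toy stand-in for (EW)RLS).
def mse_at_snr(snr_db, n_trials=2000, n_pilots=16):
    snr = 10.0 ** (snr_db / 10.0)
    err = 0.0
    for _ in range(n_trials):
        h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # flat channel
        p = np.exp(2j * np.pi * rng.random(n_pilots))         # pilots
        noise = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)
        y = h * p + noise * np.sqrt(0.5 / snr)
        h_hat = (np.conj(p) @ y) / n_pilots                   # LS estimate
        err += abs(h_hat - h) ** 2
    return err / n_trials

for snr_db in (0, 10, 20):
    print(snr_db, mse_at_snr(snr_db))   # MSE drops roughly 10x per 10 dB
```

In the paper this outer averaging loop is the same; only the inner estimator (adaptive (EW)RLS tracking a time-varying MIMO-OFDM channel) is more elaborate.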
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
Energy Technology Data Exchange (ETDEWEB)
Romero, Louis A; Mason, John J.
2018-04-01
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With four or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the four unknowns: the three-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm; it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
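An iterative weighted least-squares baseline for the same formulation can be sketched as follows (a Gauss-Newton solver with hypothetical geometry on a spherical earth, not the algebraic TAQMV algorithm, which uses an ellipsoid):

```python
import numpy as np

C = 299792458.0      # speed of light, m/s
R_EARTH = 6371000.0  # spherical-earth radius, m

# Unknowns: position (x, y, z) and clock offset t. Residuals: one TOA
# equation per station plus the assumed altitude as a soft constraint.
def solve(stations, toas, alt, sigma_toa=1e-8, sigma_alt=10.0, iters=50):
    x = np.array([R_EARTH + alt, 1.0, 1.0, 0.0])   # crude initial guess
    w = np.concatenate([np.full(len(toas), 1.0 / (C * sigma_toa)),
                        [1.0 / sigma_alt]])
    for _ in range(iters):
        p, t = x[:3], x[3]
        d = np.linalg.norm(stations - p, axis=1)
        res = np.concatenate([d + C * t - C * toas,
                              [np.linalg.norm(p) - (R_EARTH + alt)]])
        J = np.zeros((len(res), 4))
        J[:-1, :3] = (p - stations) / d[:, None]
        J[:-1, 3] = C
        J[-1, :3] = p / np.linalg.norm(p)
        A = J * w[:, None]                          # weighted Jacobian
        x = x - np.linalg.lstsq(A, res * w, rcond=None)[0]
    return x

# synthetic check: emitter on the sphere, four stations, known clock offset
rng = np.random.default_rng(3)
truth = np.array([R_EARTH, 0.0, 0.0, 1e-4])
stations = truth[:3] + rng.normal(0, 2e5, size=(4, 3))
toas = np.linalg.norm(stations - truth[:3], axis=1) / C + truth[3]
est = solve(stations, toas, alt=0.0)
print(np.linalg.norm(est[:3] - truth[:3]))  # small position error
```

This is the kind of iterative baseline the abstract compares against: it needs a reasonable initial guess and finds only one minimum, whereas the algebraic method recovers all low-residual solutions directly.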
Construction of Protograph LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Quantitative Research on the Minimum Wage
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Application of Minimum-time Optimal Control System in Buck-Boost Bi-linear Converters
Directory of Open Access Journals (Sweden)
S. M. M. Shariatmadar
2017-08-01
Full Text Available In this study, the theory of minimum-time optimal control of buck-boost bi-linear converters is described, so that output voltage regulation is carried out in minimum time. For this purpose, Pontryagin's Minimum Principle is applied to find the optimal switching level under minimum-time optimal control rules. The results revealed that by utilizing an optimal switching level instead of classical switching patterns, output voltage regulation is carried out in minimum time. Moreover, the transient overvoltage energy index is significantly reduced when minimum-time optimal control is attained at reduced output load. Laboratory results were used to verify the numerical simulations.
Genetic analysis of scats reveals minimum number and sex of recently documented mountain lions
Naidu, Ashwin; Smythe, Lindsay A.; Thompson, Ron W.; Culver, Melanie
2011-01-01
Recent records of mountain lions Puma concolor and concurrent declines in desert bighorn sheep Ovis canadensis mexicana on Kofa National Wildlife Refuge in Arizona, United States, have prompted investigations to estimate the number of mountain lions occurring there. We performed noninvasive genetic analyses and identified species, individuals, and sex from scat samples collected from the Kofa and Castle Dome Mountains. From 105 scats collected, we identified a minimum of 11 individual mountain lions. These individuals consisted of six males, two females and three of unknown sex. Three of the 11 mountain lions were identified multiple times over the study period. These estimates supplement previously recorded information on mountain lions in an area where they were historically considered only transient. We demonstrate that noninvasive genetic techniques, especially when used in conjunction with camera-trap and radiocollaring methods, can provide additional and reliable information to wildlife managers, particularly on secretive species like the mountain lion.