WorldWideScience

Sample records for test-analysis model correlation

  1. Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results

    Science.gov (United States)

    Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul

    1992-01-01

The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluation of structural modifications, or design of control systems. Verification of the FEM is generally obtained by correlating the test and FEM models. A test-analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model that attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed: Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten-bay cantilevered truss structure.
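The Guyan reduction named above condenses the FEM onto the instrumented (test) degrees of freedom. A minimal numpy sketch on a hypothetical 4-DOF spring-mass chain (the partitioning and values are illustrative, not from the paper):

```python
import numpy as np

# Toy 4-DOF spring-mass chain (values hypothetical); DOFs 0-1 form the
# analysis set (a-set, the instrumented/test DOFs), DOFs 2-3 are omitted (o-set).
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.eye(4)

a, o = [0, 1], [2, 3]
Koo = K[np.ix_(o, o)]
Koa = K[np.ix_(o, a)]

# Guyan (static) condensation: assume u_o = -Koo^{-1} Koa u_a
T = np.vstack([np.eye(2), -np.linalg.solve(Koo, Koa)])

K_tam = T.T @ K @ T  # reduced stiffness (exact for static loads on the a-set)
M_tam = T.T @ M @ T  # reduced mass (approximate; quality degrades with frequency)
```

Because the transformation is a Ritz reduction, the TAM's natural frequencies bound the full-model frequencies from above, which is one reason IRS and hybrid variants were developed to improve the mass representation.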

  2. Correlation analysis for forced vibration test of the Hualien large scale seismic test (LSST) program

    International Nuclear Information System (INIS)

    Sugawara, Y.; Sugiyama, T.; Kobayashi, T.; Yamaya, H.; Kitamura, E.

    1995-01-01

The correlation analysis for a forced vibration test of a 1/4-scale containment SSI test model constructed in Hualien, Taiwan was carried out for the case after backfilling. Prior to this correlation analysis, the structural properties were revised to adjust the calculated fundamental frequency in the fixed-base condition to that derived from the test results. A correlation analysis was carried out using the Lattice Model, which is able to estimate the soil-structure effects with embedment. The analysis results coincide well with the test results, and it is concluded that the mathematical soil-structure interaction model established by the correlation analysis is efficient in estimating the dynamic soil-structure interaction effect with embedment. This mathematical model will be applied as a basic model for simulation analysis of earthquake observation records. (author). 3 refs., 12 figs., 2 tabs

  3. International Space Station Model Correlation Analysis

    Science.gov (United States)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation, and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and to compare the 2015 DTF with previous tests. For the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems: the Internal Wireless Instrumentation System (IWIS), the External Wireless Instrumentation System (EWIS), and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters, including frequency, damping, and mode shape information. Correlations and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results for the first fundamental mode are discussed, nonlinear results are shown, and accelerometer placement is assessed.

  4. Thermal Testing and Model Correlation for Advanced Topographic Laser Altimeter Instrument (ATLAS)

    Science.gov (United States)

    Patel, Deepak

    2016-01-01

The Advanced Topographic Laser Altimeter System (ATLAS), part of the Ice, Cloud and land Elevation Satellite-2 (ICESat-2), is an upcoming Earth science mission focusing on the effects of climate change. The flight instrument passed all environmental testing at the Goddard Space Flight Center (GSFC) and is now ready to be shipped to the spacecraft vendor for integration and testing. This paper covers the analysis leading up to the test setup for ATLAS thermal testing, as well as the model correlation to flight predictions. The test setup analysis section describes the areas where ATLAS could not meet flight-like conditions and the associated limitations. The model correlation section walks through the changes that had to be made to the thermal model in order to match test results. The correlated model will then be integrated with the spacecraft model for on-orbit predictions.

  5. Thermal Testing and Model Correlation of the Magnetospheric Multiscale (MMS) Observatories

    Science.gov (United States)

    Kim, Jong S.; Teti, Nicholas M.

    2015-01-01

The Magnetospheric Multiscale (MMS) mission is a Solar Terrestrial Probes mission comprising four identically instrumented spacecraft that will use Earth's magnetosphere as a laboratory to study the microphysics of three fundamental plasma processes: magnetic reconnection, energetic particle acceleration, and turbulence. This paper presents the complete thermal balance (TB) test performed on the first of the four observatories to go through thermal vacuum (TV), and the mini-balance testing performed on the subsequent observatories to provide a comparison of all four. The TV and TB tests were conducted in a thermal vacuum chamber at the Naval Research Laboratory (NRL) in Washington, D.C., with the pressure below 1.3 × 10⁻⁴ pascals (10⁻⁶ torr) and the surrounding temperature reaching -180 degrees Celsius. Three TB test cases were performed: hot operational science, cold operational science, and cold survival. In addition to the three balance cases, two-hour and four-hour eclipse simulations were performed during the TV test to provide additional transient data points representing the orbit in eclipse (Earth's shadow). The goal was to perform testing such that the flight orbital environments could be simulated as closely as possible. A thermal model correlation between the thermal analysis and the test results was completed. Over 400 1-Wire temperature sensors, 200 thermocouples, and 125 flight thermistor temperature sensors recorded data during TV and TB testing. These temperature-versus-time profiles and their agreement with the analytical results obtained using Thermal Desktop and SINDA/FLUINT are discussed. The model correlation for the thermal mathematical model (TMM) is conducted based on the numerical analysis results and the test data. The philosophy of model correlation was to correlate the model to within 3 degrees Celsius of the test data using the standard deviation and mean deviation error.
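The 3 °C criterion based on mean and standard deviation of the model-test error can be checked in a few lines; the temperatures below are invented for illustration:

```python
import numpy as np

# Hypothetical steady-state temperatures (deg C) at matched sensor locations.
test_temps  = np.array([21.4, -35.2, 48.9, 5.1, -120.3])
model_temps = np.array([23.0, -33.8, 47.5, 6.9, -118.0])

error = model_temps - test_temps
mean_error = error.mean()       # bias of the model relative to the test
std_error  = error.std(ddof=1)  # scatter of the model about the test

# "Correlated" here means both metrics fall inside the 3 deg C band
correlated = abs(mean_error) <= 3.0 and std_error <= 3.0
```

Splitting the error into bias and scatter is useful because a pure bias can often be removed with a single parameter (e.g. a heater power or optical property), while large scatter points to local conduction paths that need individual tuning.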

  6. Testing Cross-Sectional Correlation in Large Panel Data Models with Serial Correlation

    Directory of Open Access Journals (Sweden)

    Badi H. Baltagi

    2016-11-01

This paper considers the problem of testing cross-sectional correlation in large panel data models with serially correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions when there is serial correlation in the errors. To control the size, this paper proposes a modification of Pesaran's Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as N, T → ∞. The test is distribution-free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.
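The unmodified Pesaran CD statistic that this paper builds on is simple to compute: average the pairwise correlations of the residuals across units and rescale. A sketch on simulated residuals (the data here are synthetic, and this is the baseline test, not the paper's serial-correlation-robust modification):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 200
u = rng.standard_normal((N, T))  # hypothetical residuals: N units, T periods

# Pairwise sample correlations of residuals across units
rho = np.corrcoef(u)
iu = np.triu_indices(N, k=1)

# CD = sqrt(2T / (N(N-1))) * sum of upper-triangular correlations;
# under the null of no cross-sectional dependence, CD ~ N(0, 1) as N, T grow.
cd = np.sqrt(2.0 * T / (N * (N - 1))) * rho[iu].sum()
```

With independent errors, as here, the statistic should land well inside the standard normal's typical range; the size distortion the paper addresses appears when the rows of `u` are serially correlated.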

  7. Correlation Results for a Mass Loaded Vehicle Panel Test Article Finite Element Models and Modal Survey Tests

    Science.gov (United States)

    Maasha, Rumaasha; Towner, Robert L.

    2012-01-01

High-fidelity Finite Element Models (FEMs) were developed to support a recent test program at Marshall Space Flight Center (MSFC). The FEMs correspond to test articles used for a series of acoustic tests. Modal survey tests were used to validate the FEMs for five acoustic test configurations (a bare panel and four different mass-loaded panel configurations). An additional modal survey test was performed on the empty test fixture (the orthogrid panel mounting fixture between the reverberant and anechoic chambers). Modal survey testing and the subsequent model correlation validated the natural frequencies and mode shapes of the FEMs, confirming the dynamic characteristics of the models used for acoustic test excitation. The modal survey test results provide a basis for the analysis models used in acoustic loading response test and analysis comparisons.

  8. Hall Thruster Thermal Modeling and Test Data Correlation

    Science.gov (United States)

    Myers, James; Kamhawi, Hani; Yim, John; Clayman, Lauren

    2016-01-01

The life of Hall-effect thrusters is primarily limited by plasma erosion and thermal-related failures. NASA Glenn Research Center (GRC), in cooperation with the Jet Propulsion Laboratory (JPL), has recently completed development of a Hall thruster with specific emphasis on mitigating these limitations. Extending the operational life of Hall thrusters makes them more suitable for some of NASA's longer-duration interplanetary missions. This paper documents the thermal model development, refinement, and correlation of results with thruster test data. Correlation was achieved by minimizing uncertainties in model input and recognizing the relevant parameters for effective model tuning. Throughout the thruster design phase the model was used to evaluate design options and systematically reduce component temperatures. Hall thrusters are inherently complex assemblies of high-temperature components relying on internal conduction and external radiation for heat dispersion and rejection. System solutions are necessary in most cases to fully assess the benefits and/or consequences of any potential design change. Thermal model correlation is critical since thruster operational parameters can push some components/materials beyond their temperature limits. This thruster incorporates a state-of-the-art magnetic shielding system to reduce plasma erosion and, to a lesser extent, power/heat deposition. Additionally, a comprehensive thermal design strategy was employed to reduce temperatures of critical thruster components (primarily the magnet coils and the discharge channel). Long-term wear testing is currently underway to assess the effectiveness of these systems and consequently thruster longevity.

  9. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated inputs often arise in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method is also presented for the uncertainty and sensitivity analysis of a deterministic HIV model.
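The effect of input correlation on response uncertainty can be illustrated with first-order (delta-method) variance propagation, Var(y) ≈ gᵀ Σ g, where g is the gradient at the nominal point and Σ the input covariance. This generic sketch is not the paper's exact method; the model and numbers are invented:

```python
import numpy as np

# Toy model y = f(x1, x2) with correlated inputs (all values hypothetical)
def f(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

x0 = np.array([1.0, 2.0])         # nominal input values
sigma = np.array([[0.04, 0.01],   # input covariance; the off-diagonal
                  [0.01, 0.09]])  # 0.01 terms encode the correlation

# Gradient of f at x0: [2*x1 + 3*x2, 3*x1]
g = np.array([2 * x0[0] + 3 * x0[1], 3 * x0[0]])

var_y = g @ sigma @ g                    # includes the cross-covariance term
var_y_indep = (g ** 2) @ np.diag(sigma)  # what ignoring correlation would give
```

Comparing `var_y` with `var_y_indep` answers exactly the question the abstract raises: whether neglecting the input correlations materially changes the predicted response uncertainty.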

  10. Modal Analysis and Model Correlation of the Mir Space Station

    Science.gov (United States)

    Kim, Hyoung M.; Kaouk, Mohamed

    2000-01-01

    This paper will discuss on-orbit dynamic tests, modal analysis, and model refinement studies performed as part of the Mir Structural Dynamics Experiment (MiSDE). Mir is the Russian permanently manned Space Station whose construction first started in 1986. The MiSDE was sponsored by the NASA International Space Station (ISS) Phase 1 Office and was part of the Shuttle-Mir Risk Mitigation Experiment (RME). One of the main objectives for MiSDE is to demonstrate the feasibility of performing on-orbit modal testing on large space structures to extract modal parameters that will be used to correlate mathematical models. The experiment was performed over a one-year span on the Mir-alone and Mir with a Shuttle docked. A total of 45 test sessions were performed including: Shuttle and Mir thruster firings, Shuttle-Mir and Progress-Mir dockings, crew exercise and pushoffs, and ambient noise during night-to-day and day-to-night orbital transitions. Test data were recorded with a variety of existing and new instrumentation systems that included: the MiSDE Mir Auxiliary Sensor Unit (MASU), the Space Acceleration Measurement System (SAMS), the Russian Mir Structural Dynamic Measurement System (SDMS), the Mir and Shuttle Inertial Measurement Units (IMUs), and the Shuttle payload bay video cameras. Modal analysis was performed on the collected test data to extract modal parameters, i.e. frequencies, damping factors, and mode shapes. A special time-domain modal identification procedure was used on free-decay structural responses. The results from this study show that modal testing and analysis of large space structures is feasible within operational constraints. Model refinements were performed on both the Mir alone and the Shuttle-Mir mated configurations. The design sensitivity approach was used for refinement, which adjusts structural properties in order to match analytical and test modal parameters. To verify the refinement results, the analytical responses calculated using

  11. Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article

    Science.gov (United States)

    Gupta, Anju

    2013-01-01

    This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure which primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To mitigate model size, the fabric weave was simplified by representing it with shell elements. The task at hand is to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then a Latin Hypercube of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed with the results input into RSM. In the resulting response surface it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, it can be said that the model correlation is successful.
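The workflow described (Latin Hypercube over the material parameters, analyze each case, fit a response surface) can be sketched in a few lines. The surrogate "modal frequency" function and all values below are hypothetical stand-ins for the finite element analyses:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    # One stratified sample per interval in each dimension, shuffled per column
    pts = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        pts[:, j] = rng.permutation(pts[:, j])
    return pts

# Hypothetical stand-in for the FEM: a "modal frequency" as a function of two
# unit-scaled material parameters (e.g. hoop modulus and shear modulus).
def modal_freq(x):
    return 10.0 + 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.5 * x[:, 0] * x[:, 1]

X = latin_hypercube(20, 2)   # efficiently distributed finite case set
y = modal_freq(X)            # "run" each analysis case

# Response surface: least-squares fit of the basis [1, x1, x2, x1*x2]
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fitted coefficients show how each material parameter (and their interaction) moves the modal frequency, which is the information used to pick parameter values that match the test frequencies.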

  12. Numerical thermal mathematical model correlation to thermal balance test using adaptive particle swarm optimization (APSO)

    International Nuclear Information System (INIS)

    Beck, T.; Bieler, A.; Thomas, N.

    2012-01-01

    We present structural and thermal model (STM) tests of the BepiColombo laser altimeter (BELA) receiver baffle with emphasis on the correlation of the data with a thermal mathematical model. The test unit is a part of the thermal and optical protection of the BELA instrument being tested under infrared and solar irradiation at University of Bern. An iterative optimization method known as particle swarm optimization has been adapted to adjust the model parameters, mainly the linear conductivity, in such a way that model and test results match. The thermal model reproduces the thermal tests to an accuracy of 4.2 °C ± 3.2 °C in a temperature range of 200 °C after using only 600 iteration steps of the correlation algorithm. The use of this method brings major benefits to the accuracy of the results as well as to the computational time required for the correlation. - Highlights: ► We present model correlations of the BELA receiver baffle to thermal balance tests. ► Adaptive particle swarm optimization has been adapted for the correlation. ► The method improves the accuracy of the correlation and the computational time.
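The parameter-adjustment loop can be illustrated with a plain particle swarm (not the paper's adaptive variant) fitting a single conductance in a toy thermal model; every name and value here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "thermal model": steady temperature as a function of a linear
# conductance G, a stand-in for the BELA model's tunable conductivities.
def model_temp(G):
    return 20.0 + 50.0 / (1.0 + G)

test_temp = model_temp(2.5)  # pretend the thermal balance test measured this

def cost(G):
    return (model_temp(G) - test_temp) ** 2

# Minimal PSO: positions, velocities, personal and global bests
n, iters = 12, 60
pos = rng.uniform(0.1, 10.0, n)
vel = np.zeros(n)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 10.0)       # keep parameters physical
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()]
```

In the real correlation the cost would be an error norm over many thermocouple readings and each evaluation a full thermal-model run, which is why reducing the iteration count (600 steps in the paper) matters.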

  13. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie-Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie-Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
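The core of a join count summary is simple: count adjacent-night detection pairs and compare against what independent detections would predict. A deliberately simplified sketch (toy detection histories, and expectation conditioned on naive per-site detection rates rather than the fitted occupancy model):

```python
import numpy as np

# Detection histories: rows = sites, columns = sequential survey nights
# (hypothetical data with serial runs of detections).
h = np.array([[1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1]])

# Observed join count: number of adjacent-night 1-1 pairs across all sites
observed = int((h[:, 1:] * h[:, :-1]).sum())

# Expected count if detections were independent, given each site's naive
# detection rate p_i (a stand-in for model-based detection probabilities)
p = h.mean(axis=1)
expected = float(((h.shape[1] - 1) * p ** 2).sum())

excess = observed - expected  # positive excess suggests serial correlation
```

In the actual test the expected count and its reference distribution come from the fitted occupancy model (e.g. via a parametric bootstrap), so the comparison assesses model adequacy rather than raw independence.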

  14. Six-Tube Freezable Radiator Testing and Model Correlation

    Science.gov (United States)

    Lilibridge, Sean T.; Navarro, Moses

    2012-01-01

Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator effectively scales its total heat rejection capability as a function of the thermal environment and the flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and of the different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full-scale design. Two working fluids were tested: MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.

  15. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of necessary expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  16. A default Bayesian hypothesis test for correlations and partial correlations

    NARCIS (Netherlands)

    Wetzels, R.; Wagenmakers, E.J.

    2012-01-01

    We propose a default Bayesian hypothesis test for the presence of a correlation or a partial correlation. The test is a direct application of Bayesian techniques for variable selection in regression models. The test is easy to apply and yields practical advantages that the standard frequentist tests

  17. Correlation of Descriptive Analysis and Instrumental Puncture Testing of Watermelon Cultivars.

    Science.gov (United States)

    Shiu, J W; Slaughter, D C; Boyden, L E; Barrett, D M

    2016-06-01

The textural properties of 5 seedless watermelon cultivars were assessed by descriptive analysis and the standard puncture test using a hollow probe with increased shearing properties. The use of descriptive analysis methodology was an effective means of quantifying watermelon sensory texture profiles for characterizing specific cultivars' characteristics. Of the 10 cultivars screened, 71% of the variation in the sensory attributes was captured by the first two principal components. Pairwise correlation of the hollow puncture probe and sensory parameters determined that initial slope, maximum force, and work after maximum force measurements all correlated well with the sensory attributes crisp and firm. These findings confirm that maximum force correlates well with not only firmness in watermelon, but crispness as well. The initial slope parameter also captures the sensory crispness of watermelon, but is not as practical to measure in the field as maximum force. The work after maximum force parameter is thought to reflect cellular arrangement and membrane integrity that in turn impact sensory firmness and crispness. Watermelon cultivar types were correctly predicted by puncture test measurements in heart tissue 87% of the time, whereas descriptive analysis was correct 54% of the time. © 2016 Institute of Food Technologists®
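The pairwise correlation underlying these findings is a plain Pearson coefficient between an instrumental parameter and a sensory score across samples. A sketch with invented paired measurements:

```python
import numpy as np

# Hypothetical paired data for several watermelon samples:
# instrumental maximum puncture force (N) and panel "crispness" scores.
max_force = np.array([18.2, 22.5, 15.1, 25.3, 20.0, 17.4])
crispness = np.array([ 5.1,  7.0,  4.2,  8.1,  6.3,  4.9])

# Pearson correlation between the instrumental and sensory measures
r = np.corrcoef(max_force, crispness)[0, 1]
```

A strong positive `r`, as in this synthetic example, is the kind of evidence behind the claim that maximum force tracks both sensory firmness and crispness.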

  18. A Correlated Study of the Response of a Satellite to Acoustic Radiation Using Statistical Energy Analysis and Acoustic Test Data

    International Nuclear Information System (INIS)

    CAP, JEROME S.; TRACEY, BRIAN

    1999-01-01

    Aerospace payloads, such as satellites, are subjected to vibroacoustic excitation during launch. Sandia's MTI satellite has recently been certified to this environment using a combination of base input random vibration and reverberant acoustic noise. The initial choices for the acoustic and random vibration test specifications were obtained from the launch vehicle Interface Control Document (ICD). In order to tailor the random vibration levels for the laboratory certification testing, it was necessary to determine whether vibration energy was flowing across the launch vehicle interface from the satellite to the launch vehicle or the other direction. For frequencies below 120 Hz this issue was addressed using response limiting techniques based on results from the Coupled Loads Analysis (CLA). However, since the CLA Finite Element Analysis FEA model was only correlated for frequencies below 120 Hz, Statistical Energy Analysis (SEA) was considered to be a better choice for predicting the direction of the energy flow for frequencies above 120 Hz. The existing SEA model of the launch vehicle had been developed using the VibroAcoustic Payload Environment Prediction System (VAPEPS) computer code[1]. Therefore, the satellite would have to be modeled using VAPEPS as well. As is the case for any computational model, the confidence in its predictive capability increases if one can correlate a sample prediction against experimental data. Fortunately, Sandia had the ideal data set for correlating an SEA model of the MTI satellite--the measured response of a realistic assembly to a reverberant acoustic test that was performed during MTI's qualification test series. The first part of this paper will briefly describe the VAPEPS modeling effort and present the results of the correlation study for the VAPEPS model. 
The second part of this paper will present the results from a study that used a commercial SEA software package[2] to study the effects of in-plane modes and to evaluate

  19. Inference and testing on the boundary in extended constant conditional correlation GARCH models

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    2017-01-01

    We consider inference and testing in extended constant conditional correlation GARCH models in the case where the true parameter vector is a boundary point of the parameter space. This is of particular importance when testing for volatility spillovers in the model. The large-sample properties...

  20. Safe affordable fission engine (SAFE 30) module conductivity test thermal model correlation

    International Nuclear Information System (INIS)

    Roman, Jose

    2001-01-01

The SAFE 30 is a simple, robust space fission power system that is comprised of several independent modules. Each module contains 4 fuel tubes bonded to a central heatpipe. Fission energy is conducted from the fuel tubes to the heatpipe, which in turn transfers the energy to a power conversion system. This paper benchmarks a thermal model of the SAFE 30 against actual test data from simulated SAFE 30 module tests. Two 'dummy' SAFE 30 modules were fabricated - each consisted of 4 1-inch dia. tubes (simulating the fuel tubes) bonded to a central 1-inch dia. tube (simulating the heatpipe). In the first module the fuel tubes were simply brazed to the heatpipe along the line of contact (leaving void space in the interstices), and in the second module the tubes and heatpipe were brazed via tri-cusps that completely fill the interstices between the tubes. In these tests, fission energy is simulated by placing resistance heaters within each of the 4 fuel tubes. The tests were conducted in a vacuum chamber in 4 configurations: tri-cusps filled with and without an outer insulation wrap, and no tri-cusps with and without an outer insulation wrap. The baseline SAFE 30 configuration uses the brazed tri-cusps. During the tests, the power applied to the heaters was varied in a stepwise fashion until a steady-state temperature profile was reached. These temperature levels varied between 773 K and 1073 K. To benchmark the thermal model, the input energy and chamber surface temperature were used as boundary conditions for the model. The analytical results from the nodes at the same locations as the test thermocouples were plotted against the test data to determine the accuracy of the analysis. The unknown variables in the analysis are the radiation emissivity of the pipe and chamber and the radiation view factor between the module and the chamber. A correlation was determined using a parametric analysis, varying the surface emissivity and view factor until a good match was reached. This
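A parametric sweep of this kind, varying emissivity and view factor until a radiative model matches a measured temperature, can be sketched with a toy steady-state balance (all numbers hypothetical, and a single lumped node rather than the paper's full model):

```python
import numpy as np

SIGMA = 5.670e-8                          # Stefan-Boltzmann, W/m^2/K^4
Q, AREA, T_CHAMBER = 400.0, 0.5, 300.0    # heater power W, area m^2, chamber K

def steady_temp(eps, view):
    # Radiative balance: Q = eps * view * SIGMA * AREA * (T^4 - T_chamber^4)
    return (Q / (eps * view * SIGMA * AREA) + T_CHAMBER ** 4) ** 0.25

t_test = steady_temp(0.8, 0.9)            # pretend a thermocouple read this

# Grid sweep over emissivity and view factor, keeping the best match
best = min(
    ((eps, view) for eps in np.linspace(0.1, 1.0, 19)
                 for view in np.linspace(0.5, 1.0, 11)),
    key=lambda ev: abs(steady_temp(*ev) - t_test),
)
```

Note that in this single-node balance only the product `eps * view` is identifiable from one steady-state point; distinguishing the two requires multiple configurations, which is one reason the test was run with and without insulation and tri-cusps.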

  1. Model-independent analysis with BPM correlation matrices

    International Nuclear Information System (INIS)

    Irwin, J.; Wang, C.X.; Yan, Y.T.; Bane, K.; Cai, Y.; Decker, F.; Minty, M.; Stupakov, G.; Zimmermann, F.

    1998-06-01

    The authors discuss techniques for Model-Independent Analysis (MIA) of a beamline using correlation matrices of physical variables and Singular Value Decomposition (SVD) of a beamline BPM matrix. The beamline matrix is formed from BPM readings for a large number of pulses. The method has been applied to the Linear Accelerator of the SLAC Linear Collider (SLC)
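The SVD step of MIA separates a pulse-by-BPM matrix into a few strong "physical" modes plus a noise floor. A sketch on synthetic data (mode count, noise level, and dimensions are all invented):

```python
import numpy as np

rng = np.random.default_rng(3)
pulses, bpms = 500, 40

# Hypothetical BPM matrix: each pulse's readings are a mix of a few physical
# modes (e.g. betatron x/y, energy) plus uncorrelated BPM noise.
modes = rng.standard_normal((bpms, 3))    # spatial patterns of the modes
amps  = rng.standard_normal((pulses, 3))  # pulse-to-pulse mode amplitudes
B = amps @ modes.T + 0.01 * rng.standard_normal((pulses, bpms))

B -= B.mean(axis=0)                       # remove the mean orbit
U, s, Vt = np.linalg.svd(B, full_matrices=False)

# Large singular values span the physical signal space; the rest form the
# noise floor. Count values well above the noise level (ad hoc threshold).
n_signal = int((s > 5 * s[3:].mean()).sum())
```

The corresponding rows of `Vt` are the spatial patterns seen by the BPMs, and the columns of `U` give their pulse-by-pulse amplitudes, which is what makes the analysis "model-independent": no lattice model is needed to extract them.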

  2. Testing a Model of Planck-Scale Quantum Geometry With Broadband Correlation of Colocated 40m Interferometers

    International Nuclear Information System (INIS)

    McCuller, Lee Patrick

    2015-01-01

The Holometer is designed to test for a Planck diffractive-scaling uncertainty in long-baseline position measurements due to an underlying noncommutative geometry normalized to relate Black hole entropy bounds of the Holographic principle to the now-finite number of position states. The experiment overlaps two independent 40 meter optical Michelson interferometers to detect the proposed uncertainty as a common broadband length fluctuation. 150 hours of instrument cross-correlation data are analyzed to test the prediction of a correlated noise magnitude of 7·10⁻²¹ m/√Hz with an effective bandwidth of 750 kHz. The interferometers each have a quantum-limited sensitivity of 2.5·10⁻¹⁸ m/√Hz, but their correlation with a time-bandwidth product of 4·10¹¹ digs between the noise floors in search for the covarying geometric jitter. The data presents an exclusion of 5 standard deviations for the tested model. This exclusion is defended through analysis of the calibration methods for the instrument as well as further sub shot noise characterization of the optical systems to limit spurious background-correlations from undermining the signal.

  3. Testing a Model of Planck-Scale Quantum Geometry With Broadband Correlation of Colocated 40m Interferometers

    Energy Technology Data Exchange (ETDEWEB)

    McCuller, Lee Patrick [Univ. of Chicago, IL (United States)

    2015-12-01

    The Holometer is designed to test for a Planck diffractive-scaling uncertainty in long-baseline position measurements due to an underlying noncommutative geometry, normalized to relate the black hole entropy bounds of the holographic principle to the now-finite number of position states. The experiment overlaps two independent 40 meter optical Michelson interferometers to detect the proposed uncertainty as a common broadband length fluctuation. 150 hours of instrument cross-correlation data are analyzed to test the prediction of a correlated noise magnitude of $7\times10^{-21}$ m/$\sqrt{\rm Hz}$ with an effective bandwidth of 750 kHz. The interferometers each have a quantum-limited sensitivity of $2.5\times 10^{-18}$ m/$\sqrt{\rm Hz}$, but their correlation, with a time-bandwidth product of $4\times 10^{11}$, digs between the noise floors in search of the covarying geometric jitter. The data present an exclusion of the tested model at 5 standard deviations. This exclusion is defended through analysis of the calibration methods for the instrument, as well as further sub-shot-noise characterization of the optical systems to limit spurious background correlations from undermining the signal.
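
    The statistical idea behind digging below the individual noise floors with a large time-bandwidth product can be illustrated with a toy time-domain cross-correlation; the amplitudes and record length below are arbitrary stand-ins, not Holometer parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # a long record plays the role of the large time-bandwidth product

common = 0.1 * rng.normal(size=n)   # weak correlated "geometric jitter"
ch_a = common + rng.normal(size=n)  # each channel is dominated by its own
ch_b = common + rng.normal(size=n)  # independent quantum-limited noise

# Individually the channels look like unit-variance noise; the averaged
# cross-product converges to the common variance (0.01 here), with its
# statistical error shrinking like 1/sqrt(n).
cross = float(np.mean(ch_a * ch_b))
print(round(ch_a.var(), 3), round(ch_b.var(), 3), round(cross, 4))
```

    The same principle applied in the frequency domain (averaged cross-spectral density) is what lets two interferometers jointly resolve a correlated signal an order of magnitude below either one's noise floor.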

  4. A study on stress analysis of small punch-creep test and its experimental correlations with uniaxial-creep test

    International Nuclear Information System (INIS)

    Lee, Song In; Baek, Seoung Se; Kwon, Il Hyun; Yu, Hyo Sun

    2002-01-01

    Basic research was performed to ensure the usefulness of the Small Punch creep (SP-creep) test for effective residual life evaluation of heat-resistant components. This paper presents analytical results for the initial stress and strain distributions in the SP specimen caused by constant loading during the SP-creep test, and their experimental correlations with the uniaxial creep (Ten-creep) test on 9Cr1MoVNb steel. It was shown that the initial maximum equivalent stress, σ_eq,max, from FE analysis was correlated with the steady-state equivalent creep strain rate, ε_eq,ss, the rupture time, t_r, the activation energy, Q, and the Larson-Miller parameter, LMP, during SP-creep deformation. The simple correlation laws σ_SP-σ_TEN, P_SP-σ_TEN and Q_SP-Q_TEN were adopted to establish a quantitative correlation between SP-creep and Ten-creep test data. In particular, the activation energy obtained from the SP-creep test is linearly related to that from the Ten-creep test at 650 °C as follows: Q_SP-P = 1.37 Q_TEN, Q_SP-σ = 1.53 Q_TEN.

  5. Oblique penetration modeling and correlation with field tests into a soil target

    Energy Technology Data Exchange (ETDEWEB)

    Longcope, D.B. Jr. [Sandia National Labs., Albuquerque, NM (United States). Structural Dynamics Dept.

    1996-09-01

    An oblique penetration modeling procedure is evaluated by correlation with onboard acceleration data from a series of six penetration tests into Antelope Dry Lake soil at Tonopah Test Range, Nevada. The modeling represents both the loading, which is coupled to the penetrator bending, and the penetrator structure, including connections between the major subsections. Model results show reasonable agreement with the data, which validates the modeling procedure within a modest uncertainty related to accelerometer clipping and rattling of the telemetry package. The experimental and analytical results provide design guidance for the location and lateral restraint of components to reduce their shock environment.

  6. Correlation analysis between pulmonary function test parameters and CT image parameters of emphysema

    Science.gov (United States)

    Liu, Cheng-Pei; Li, Chia-Chen; Yu, Chong-Jen; Chang, Yeun-Chung; Wang, Cheng-Yi; Yu, Wen-Kuang; Chen, Chung-Ming

    2016-03-01

    Diagnosis and severity classification of Chronic Obstructive Pulmonary Disease (COPD) are conventionally based on pulmonary function tests (PFTs). To reduce the need for a PFT in the diagnosis of COPD, this paper proposes a correlation model between lung CT images and the crucial PFT index, FEV1/FVC, a severity index of COPD that distinguishes a normal subject from a COPD patient. A new lung CT image index, the Mirage Index (MI), has been developed to describe the severity of COPD dominated by emphysema. Unlike the conventional Pixel Index (PI), which takes into account all voxels with HU values less than -950, the proposed approach models these voxels as bullae balls of different sizes and defines MI as a weighted sum of the percentages of the bullae balls of different size classes and locations in a lung. To evaluate the efficacy of the proposed model, 45 emphysema subjects of different severity were involved in this study. The correlation between MI and FEV1/FVC is −0.75 ± 0.08, which substantially outperforms the correlation between the conventional index, PI, and FEV1/FVC, i.e., −0.63 ± 0.11. Moreover, we have shown that the emphysematous lesion areas constituted by small bullae balls are basically irrelevant to FEV1/FVC. The statistical analysis and special case study results show that MI can offer better assessment in different analyses.
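
    The MI construction, as described, is a weighted sum over bullae-ball size classes that is then correlated against FEV1/FVC. The sketch below uses invented percentages, weights, and PFT values purely to illustrate the computation; the paper's actual weighting scheme and patient data are not given in the abstract.

```python
import numpy as np

# Hypothetical bullae-ball percentages per size class for 6 subjects
# (columns: small, medium, large) -- illustrative numbers only.
pct = np.array([
    [1.0, 0.2, 0.0],
    [2.0, 1.0, 0.1],
    [3.0, 2.5, 0.5],
    [4.0, 4.0, 1.5],
    [5.0, 6.0, 3.0],
    [6.0, 8.0, 5.0],
])
# Assumed weights: larger bullae contribute more to the index (the paper's
# exact weights are not stated in the abstract).
w = np.array([0.2, 1.0, 2.0])
mi = pct @ w  # Mirage-Index-style weighted sum per subject

fev1_fvc = np.array([0.82, 0.74, 0.65, 0.55, 0.47, 0.40])  # invented PFT values
r = np.corrcoef(mi, fev1_fvc)[0, 1]
print(round(r, 3))
```

    A strongly negative coefficient, as reported for MI in the study, means the image index rises as airflow obstruction worsens.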

  7. Correlation of analysis with high level vibration test results for primary coolant piping

    International Nuclear Information System (INIS)

    Park, Y.J.; Hofmayer, C.H.; Costello, J.F.

    1992-01-01

    Dynamic tests on a modified 1/2.5-scale model of pressurized water reactor (PWR) primary coolant piping were performed using a large shaking table at Tadotsu, Japan. The High Level Vibration Test (HLVT) program was part of a cooperative study between the United States (Nuclear Regulatory Commission/Brookhaven National Laboratory, NRC/BNL) and Japan (Ministry of International Trade and Industry/Nuclear Power Engineering Center). During the test program, the excitation level of each test run was gradually increased up to the limit of the shaking table, and significant plastic strains, as well as cracking, were induced in the piping. To fully utilize the test results, NRC/BNL sponsored a project to develop corresponding analytical predictions for the nonlinear dynamic response of the piping for selected test runs. The analyses were performed using both simplified and detailed approaches. The simplified approaches utilize a linear solution and an approximate formulation for nonlinear dynamic effects, such as the use of a deamplification factor. The detailed analyses were performed using available nonlinear finite element computer codes, including the MARC, ABAQUS, ADINA and WECAN codes. A comparison of the various analysis techniques with the test results shows a higher prediction error in the detailed strain values than in the overall response values. A summary of the correlation analyses has been presented previously by BNL. This paper presents a detailed description of the various analysis results and additional comparisons with test results.

  8. Analysis of UPTF downcomer tests with the Cathare multi-dimensional model

    International Nuclear Information System (INIS)

    Dor, I.

    1993-01-01

    This paper presents the analysis and modelling, with the system code CATHARE, of UPTF downcomer refill tests simulating the refill phase of a large-break LOCA. The modelling approach in a system code is discussed. First, the reasons why available flooding correlations are difficult to use in a system code in this particular case are developed. Then the use of a 1-D modelling of the downcomer with specific closure relations for the annular geometry is examined; but UPTF 1:1-scale tests and CREARE reduced-scale tests point out some weaknesses of this modelling due to the particular multi-dimensional nature of the flow in the upper part of the downcomer. Thus a 2-D model was elaborated and implemented into the CATHARE version 1.3e code. The assessment of the model is based on UPTF 1:1-scale tests (saturated and subcooled conditions). Discretization and meshing influence are investigated. On the basis of the saturated tests, a new discretization is proposed for different terms of the momentum balance equations (interfacial friction, momentum transport terms), which results in a significant improvement. Sensitivity studies performed on the subcooled tests show that the water downflow predictions are improved by increasing the condensation in the downcomer. (author). 8 figs., 5 tabs., 9 refs., 2 appendices
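
    For context, the kind of 1-D flooding correlation referred to here is typified by the Wallis counter-current flow limitation (CCFL) form, sqrt(j_g*) + m·sqrt(j_f*) = C. The sketch below uses illustrative constants and simple geometry, and is not the closure actually implemented in CATHARE.

```python
import math

def wallis_jstar(j, rho_k, rho_f, rho_g, D, g=9.81):
    """Dimensionless superficial velocity for the Wallis flooding form:
    j* = j * sqrt(rho_k / (g * D * (rho_f - rho_g)))."""
    return j * math.sqrt(rho_k / (g * D * (rho_f - rho_g)))

def max_liquid_downflow_jstar(jg_star, m=1.0, C=0.775):
    """Wallis CCFL limit sqrt(jg*) + m*sqrt(jf*) = C, solved for the
    maximum dimensionless liquid downflow jf*. The constants m and C are
    geometry-dependent empirical values (illustrative here); such 1-D
    correlations are what the abstract says break down in the
    multi-dimensional upper downcomer."""
    rhs = (C - math.sqrt(jg_star)) / m
    return max(rhs, 0.0) ** 2

# Example: steam upflow of 10 m/s in a 0.5 m annular gap.
jg = wallis_jstar(10.0, 1.2, 1000.0, 1.2, 0.5)
print(round(jg, 3), round(max_liquid_downflow_jstar(jg), 4))
```

    As the gas upflow rises the permitted liquid downflow shrinks to zero, which is the flooding limit the refill analysis must capture.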

  9. Modeling conditional correlations of asset returns

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    2015-01-01

    In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM-test is derived to test the constancy of correlations, and LM- and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five ...
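
    The smooth transition between two constant-correlation states can be sketched as follows; the logistic transition function and all parameter values are illustrative, and the GARCH variance dynamics of the full model are omitted.

```python
import numpy as np

def logistic(s, gamma, c):
    # Transition function G in [0, 1]; gamma sets smoothness, c the location.
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def stcc_correlation(s, rho_low=0.2, rho_high=0.8, gamma=4.0, c=0.0):
    """Conditional correlation that moves smoothly between two extreme
    states as the transition variable s changes (the smooth-transition
    conditional correlation idea, simplified)."""
    g = logistic(s, gamma, c)
    return (1.0 - g) * rho_low + g * rho_high

s = np.linspace(-3, 3, 7)
print(np.round(stcc_correlation(s), 3))
```

    Testing constancy of correlations then amounts to testing gamma = 0 (or rho_low = rho_high), which is what the derived LM-test addresses in the paper.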

  10. A Summary of Interfacial Heat Transfer Models and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Cho, Hyung Kyu; Lee, Young Jin; Kim, Hee Chul; Jung, Young Jong; Kim, K. D. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2007-10-15

    A long-term project was launched in October 2006 to develop a plant safety analysis code, with five organizations working together to build it. In this project, KAERI is in charge of building up the physical models and correlations for transport phenomena. The mass, momentum, and energy transfer terms are surveyed from the RELAP5/MOD3, RELAP5-3D, CATHARE, and TRAC-M codes, and recent papers are also surveyed. Among these resources, most of the CATHARE models are based on that project's own experiments and test results; the CATHARE models are therefore used only for comparison purposes. In this paper, a summary of the models and correlations for interfacial heat transfer is presented. The surveyed models and correlations will be tested numerically, and one correlation will be selected finally.

  11. Preliminary Test for Constitutive Models of CAP

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The development project for the domestic design code was launched for the safety and performance analysis of pressurized light water reactors. As a part of this project, the CAP (Containment Analysis Package) code is being developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drop) for the assessment of containment-specific phenomena, and features assessment capabilities in both multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations; these equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN 2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; the test results and future plans for improving the level of execution are discussed in this paper.

  12. Preliminary Test for Constitutive Models of CAP

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The development project for the domestic design code was launched for the safety and performance analysis of pressurized light water reactors. As a part of this project, the CAP (Containment Analysis Package) code is being developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drop) for the assessment of containment-specific phenomena, and features assessment capabilities in both multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations; these equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN 2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; the test results and future plans for improving the level of execution are discussed in this paper.

  13. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.

  14. Analysis and Test Correlation of Proof of Concept Box for Blended Wing Body-Low Speed Vehicle

    Science.gov (United States)

    Spellman, Regina L.

    2003-01-01

    process, the failure prediction was used to help gain acceptance and confidence in this new tool. The correlated models and process were to be used to analyze the full BWB-LSV airframe design. The analysis and correlation with test results of the proof-of-concept box are presented here, including a comparison of the Nastran and HyperSizer results.

  15. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  16. Study of relationship between MUF correlation and detection sensitivity of statistical analysis

    International Nuclear Information System (INIS)

    Tamura, Toshiaki; Ihara, Hitoshi; Yamamoto, Yoichi; Ikawa, Koji

    1989-11-01

    Various kinds of statistical analysis have been proposed for NRTA (Near Real Time Materials Accountancy), which was devised to satisfy the timeliness detection goal of the IAEA. It is presumed that different statistical analysis results will be obtained depending on whether rigorous error propagation (with MUF correlation) or simplified error propagation (without MUF correlation) is considered. Therefore, measurement simulation and decision analysis were carried out using a flow simulation of an 800 MTHM/y model reprocessing plant, and the relationship between MUF correlation and the detection sensitivity and false-alarm rate of statistical analysis was studied. The specific character of material accountancy for the 800 MTHM/y model reprocessing plant was grasped by this simulation. It also became clear that MUF correlation decreases not only the false-alarm rate but also the detection probability for protracted loss in the case of the CUMUF test and Page's test applied to NRTA. (author)
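
    Page's test, one of the two tests named above, is a one-sided CUSUM scheme. The sketch below applies it to a standardized MUF sequence with illustrative reference and threshold values (k, h), which are not the values used in the study.

```python
def pages_test(muf_sequence, k=0.5, h=2.0):
    """One-sided Page (CUSUM) test on a standardized MUF sequence.
    k is the reference (allowance) value and h the decision threshold;
    both are illustrative. Returns the indices of alarm periods."""
    s, alarms = 0.0, []
    for t, x in enumerate(muf_sequence):
        s = max(0.0, s + x - k)   # accumulate only persistent positive drift
        if s > h:
            alarms.append(t)
            s = 0.0               # reset after an alarm
    return alarms

# In-control noise stays quiet; a protracted small loss trips the test.
quiet = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2]
loss = quiet + [0.9] * 8
print(pages_test(quiet), pages_test(loss))
```

    This illustrates why Page's test suits the protracted-loss scenario discussed in the abstract: no single MUF value is alarming, but the cumulative drift is.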

  17. Thermal Vacuum Test Correlation of a Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytical Model

    Science.gov (United States)

    Mckim, Stephen A.

    2016-01-01

    This thesis describes the development and correlation of a thermal model that forms the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale spacecraft using ANSYS, and the correlation process implemented, are presented. The thermal model was correlated to within plus or minus 3 degrees Celsius of the thermal vacuum test data, and was determined sufficient to make future propellant predictions on MMS. The model was also found to be relatively sensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed to improve temperature predictions in the upper hemisphere of the propellant tank, where predictions were found to be 2 to 2.5 degrees Celsius lower than the test data. A road map for applying the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.

  18. Thermal Vacuum Test Correlation of A Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytics Model

    Science.gov (United States)

    McKim, Stephen A.

    2016-01-01

    This thesis describes the development and test data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale spacecraft using ANSYS, and the correlation process implemented to validate the model, are presented. The thermal model was correlated to within plus or minus 3 degrees Centigrade of the thermal vacuum test data, and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this portion were found to be 2 to 2.5 degrees Centigrade lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.

  19. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)

  20. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of models is limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation, and is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, for example correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, as well as non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked against well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several

  1. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of models is limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation, and is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, for example correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, as well as non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked against well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked
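
    The difference between the linear (CC) and rank-based (RCC) measures listed above is easy to demonstrate on a monotone but non-linear test function; the model, sample size, and seed below are invented for illustration and are not part of EIKOS.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
x3 = rng.uniform(0, 1, n)

# Monotone, strongly non-linear test model: x1 dominates, x3 is inert.
y = np.exp(5 * x1) + x2 + 0.0 * x3

def rank(v):
    # Simple rank transform (no ties expected for continuous samples).
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

results = {}
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    cc = np.corrcoef(x, y)[0, 1]               # Pearson CC (linear measure)
    rcc = np.corrcoef(rank(x), rank(y))[0, 1]  # Spearman RCC (monotone measure)
    results[name] = (cc, rcc)
    print(f"{name}: CC={cc:.2f}  RCC={rcc:.2f}")
```

    The rank coefficient recovers the full strength of the x1 dependence that the linear coefficient understates, which is exactly why rank-based and variance-based methods are included alongside CC in such packages.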

  2. Test and Analysis Correlation of Foam Impact onto Space Shuttle Wing Leading Edge RCC Panel 8

    Science.gov (United States)

    Fasanella, Edwin L.; Lyle, Karen H.; Gabrys, Jonathan; Melis, Matthew; Carney, Kelly

    2004-01-01

    Soon after the Columbia Accident Investigation Board (CAIB) began their study of the space shuttle Columbia accident, "physics-based" analyses using LS-DYNA were applied to characterize the expected damage to the Reinforced Carbon-Carbon (RCC) leading edge from high-speed foam impacts. Forensic evidence quickly led CAIB investigators to concentrate on the left wing leading edge RCC panels. This paper will concentrate on the test of the left-wing RCC panel 8 conducted at Southwest Research Institute (SwRI) and the correlation with an LS-DYNA analysis. The successful correlation of the LS-DYNA model has resulted in the use of LS-DYNA as a predictive tool for characterizing the threshold of damage for impacts of various debris such as foam, ice, and ablators onto the RCC leading edge for shuttle return-to-flight.

  3. Groundwater travel time uncertainty analysis. Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1985-03-01

    This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) the scale of representation of the model parameters, (2) the size of the model domain, (3) the correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis.

  4. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    Energy Technology Data Exchange (ETDEWEB)

    Santi, Peter Angelo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cutler, Theresa Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favalli, Andrea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Koehler, Katrina Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzlova, Daniela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parker, Robert Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Croft, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms that can correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons produced within an item are created with the same energy distribution. This report discusses the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
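
    The Singles through Pents mentioned above correspond to the first five reduced factorial moments of the measured multiplicity histogram. A minimal sketch with a toy histogram, and with no dead time correction applied:

```python
from math import comb

def factorial_moments(counts, order=5):
    """Reduced factorial moments of a multiplicity histogram, where
    counts[n] = number of gates containing n events. Orders 1..5 are the
    quantities behind Singles, Doubles, Triples, Quads and Pents in
    point-model multiplicity analysis; dead time effects are ignored here."""
    total = sum(counts)
    probs = [c / total for c in counts]
    return [sum(p * comb(n, r) for n, p in enumerate(probs))
            for r in range(1, order + 1)]

# Toy histogram: gates observed with 0..4 neutrons.
hist = [60, 25, 10, 4, 1]
print([round(m, 3) for m in factorial_moments(hist)])
```

    The higher moments vanish rapidly for low-multiplicity data, which is why long measurements and careful dead time correction are needed before Quads and Pents become usable, as the report describes.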

  5. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed to jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices, and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when the correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset.
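
    TEGS computes its p-values by permutation. A much-simplified sketch of a permutation test for a gene set effect is shown below; it sums squared group-mean differences over genes and, unlike TEGS, does not model the gene-gene working covariance. All data are synthetic.

```python
import numpy as np

def gene_set_permutation_test(expr, group, n_perm=2000, seed=0):
    """Permutation test for the effect of a binary group label on a gene
    set (expr: samples x genes). The statistic sums squared group-mean
    differences over genes; a simplification of variance component tests
    such as TEGS, which additionally weight by a working covariance."""
    rng = np.random.default_rng(seed)

    def stat(g):
        d = expr[g == 1].mean(axis=0) - expr[g == 0].mean(axis=0)
        return float(np.sum(d ** 2))

    observed = stat(group)
    null = np.array([stat(rng.permutation(group)) for _ in range(n_perm)])
    # Add-one correction keeps the permutation p-value strictly positive.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

rng = np.random.default_rng(1)
n, p = 40, 10
group = np.repeat([0, 1], n // 2)
expr = rng.normal(size=(n, p))
expr[group == 1] += 0.8          # shift every gene in the exposed group
print(gene_set_permutation_test(expr, group))
```

    Because the statistic pools evidence across the whole set, many small coordinated shifts that would not survive per-gene multiple-testing correction can still yield a small set-level p-value.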

  6. A new methodology of spatial cross-correlation analysis.

    Science.gov (United States)

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran's index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson's correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China's urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes.
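As a hedged illustration of the quadratic-form idea (not the paper's exact estimator), a global spatial cross-correlation coefficient between two variables can be written as a bivariate Moran-style form over a row-standardized spatial weights matrix; the function name and normalization are assumptions of this sketch.

```python
# Global spatial cross-correlation as a spatial quadratic form:
# R_xy = z_x' W z_y / n, with z the standardized variables and W a
# row-standardized spatial weights matrix.
import numpy as np

def global_cross_correlation(x, y, W):
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    Wr = W / W.sum(axis=1, keepdims=True)   # row-standardize the weights
    return float(zx @ Wr @ zy) / len(x)
```

On a chain of locations where both variables increase together, the coefficient is close to +1; reversing one variable flips the sign.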

  7. Investigating the correlation between wastewater analysis and roadside drug testing in South Australia.

    Science.gov (United States)

    Bade, Richard; Tscharke, Benjamin J; Longo, Marie; Cooke, Richard; White, Jason M; Gerber, Cobus

    2018-04-10

    The societal impact of drug use is well known; for example, drug-intoxicated drivers increase the burden on policing and healthcare services. This work presents the correlation of wastewater analysis (WWA, using UHPLC-MS/MS) with positive roadside drug testing results for methamphetamine, 3,4-methylenedioxymethamphetamine (MDMA) and cannabis from December 2011 to December 2016 in South Australia. Methamphetamine and MDMA showed similar trends across the two data sources, with matching increases and decreases, respectively. Cannabis was relatively steady based on wastewater analysis, but the roadside drug testing data began to diverge toward the end of the measurement period. The ability to triangulate data as shown here validates both wastewater analysis and roadside drug testing, and suggests that changes in overall population drug use revealed by WWA are consistent and proportional to changes in drug-driving behaviour. The results show that, at higher levels of drug use as measured by wastewater analysis, there is more drug driving in the community and therefore more strain on health services and police. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    Science.gov (United States)

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
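The implication above can be checked directly from a correlation matrix. The following sketch (function names are illustrative, and the paper's empirical bootstrap calibration is omitted) flags a pair of indicators whose partial correlation given the remaining indicators is stronger than, or differs in sign from, their zero-order correlation.

```python
# Partial correlations given all remaining variables can be read off the
# inverse correlation matrix: p_ij = -P_ij / sqrt(P_ii * P_jj).
import numpy as np

def partial_corr_given_rest(R):
    P = np.linalg.inv(R)
    d = np.sqrt(np.diag(P))
    return -P / np.outer(d, d)

def violates_unidimensionality(R, i, j, tol=1e-8):
    """True if the (i, j) partial correlation is stronger than, or differs
    in sign from, the zero-order correlation R[i, j]."""
    r = R[i, j]
    p = partial_corr_given_rest(R)[i, j]
    weaker_same_sign = (np.sign(p) == np.sign(r)) and abs(p) <= abs(r) + tol
    return not weaker_same_sign
```

A collider structure (two independent variables that both correlate with a third) is a simple case where the partial correlation changes sign and the check fires.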

  9. An Analysis of Cross Racial Identity Scale Scores Using Classical Test Theory and Rasch Item Response Models

    Science.gov (United States)

    Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie

    2013-01-01

    Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit, and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.

  10. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    Science.gov (United States)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well-recognized and well-characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was conducted as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these test configurations.

  11. Test and Analysis Correlation for a Y-Joint Specimen for a Composite Cryotank

    Science.gov (United States)

    Mason, Brian H.; Sleight, David W.; Grenoble, Ray

    2015-01-01

    The Composite Cryotank Technology Demonstration (CCTD) project under NASA's Game Changing Development Program (GCDP) developed space technologies using advanced composite materials. Under CCTD, NASA funded the Boeing Company to design and test a number of element-level joint specimens as a precursor to a 2.4-m diameter composite cryotank. Preliminary analyses indicated that the y-joint in the cryotank had low margins of safety; hence the y-joint was considered to be a critical design region. The y-joint design includes a softening strip wedge to reduce localized shear stresses at the skirt/dome interface. In this paper, NASA-developed analytical models will be correlated with the experimental results of a series of positive-peel y-joint specimens from Boeing tests. Initial analytical models over-predicted the experimental strain gage readings in the far-field region by approximately 10%. The over-prediction was attributed to uncertainty in the elastic properties of the laminate and a mismatch between the thermal expansion of the strain gages and the laminate. The elastic properties of the analytical model were adjusted to account for the strain gage differences. The experimental strain gages also indicated a large non-linear effect in the softening strip region that was not predicted by the analytical model. This non-linear effect was attributed to delamination initiating in the softening strip region at below 20% of the failure load for the specimen. Because the specimen was contained in a thermally insulated box during cryogenic testing to failure, delamination initiation and progression was not visualized during the test. Several possible failure initiation locations were investigated, and a most likely failure scenario was determined that correlated well with the experimental data. The most likely failure scenario corresponded to damage initiating in the softening strip and delamination extending to the grips at final failure.

  13. Modelling Multivariate Autoregressive Conditional Heteroskedasticity with the Double Smooth Transition Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new Double Smooth Transition Conditional Correlation GARCH model extends the Smooth Transition Conditional Correlation GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. The model is applied to a selection of world stock indices, and it is found that time is an important factor affecting the correlations.

  14. Approximate models for the analysis of laser velocimetry correlation functions

    International Nuclear Information System (INIS)

    Robinson, D.P.

    1981-01-01

    Velocity distributions in the subchannels of an eleven pin test section representing a slice through a Fast Reactor sub-assembly were measured with a dual beam laser velocimeter system using a Malvern K 7023 digital photon correlator for signal processing. Two techniques were used for data reduction of the correlation function to obtain velocity and turbulence values. Whilst both techniques were in excellent agreement on the velocity, marked discrepancies were apparent in the turbulence levels. As a consequence of this the turbulence data were not reported. Subsequent investigation has shown that the approximate technique used as the basis of Malvern's Data Processor 7023V is restricted in its range of application. In this note alternative approximate models are described and evaluated. The objective of this investigation was to develop an approximate model which could be used for on-line determination of the turbulence level. (author)

  15. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
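As a hedged sketch of what such a power routine computes, the two-sided power for testing H0: rho = 0 in a bivariate correlation can be approximated with the Fisher z transformation; G*Power's exact algorithms may differ, and the function name here is illustrative.

```python
# Approximate power of the two-sided test of H0: rho = 0 at level alpha:
# under the Fisher z approximation, atanh(r) ~ N(atanh(rho), 1/(n-3)).
import math

def power_correlation(rho, n, alpha=0.05):
    delta = math.atanh(rho) * math.sqrt(n - 3)   # noncentrality
    zcrit = 1.959963984540054                    # Phi^{-1}(1 - alpha/2), alpha=0.05
    phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
    return (1 - phi(zcrit - delta)) + phi(-zcrit - delta)
```

For rho = 0.3 the approximation gives roughly 80% power near n = 84, the textbook sample size for this effect; at rho = 0 the "power" collapses to the significance level, as it should.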

  16. Nonlinear Analysis and Post-Test Correlation for a Curved PRSEUS Panel

    Science.gov (United States)

    Gould, Kevin; Lovejoy, Andrew E.; Jegley, Dawn; Neal, Albert L.; Linton, Kim, A.; Bergan, Andrew C.; Bakuckas, John G., Jr.

    2013-01-01

    The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept, developed by The Boeing Company, has been extensively studied as part of the National Aeronautics and Space Administration's (NASA's) Environmentally Responsible Aviation (ERA) Program. The PRSEUS concept provides a lightweight alternative to aluminum or traditional composite design concepts and is applicable to traditionally shaped fuselage barrels and wings, as well as advanced configurations such as a hybrid wing body or truss-braced wings. Therefore, NASA, the Federal Aviation Administration (FAA) and The Boeing Company partnered in an effort to assess the performance and damage arrestment capabilities of a PRSEUS concept panel using a full-scale curved panel in the FAA Full-Scale Aircraft Structural Test Evaluation and Research (FASTER) facility. Testing was conducted in the FASTER facility by subjecting the panel to axial tension loads applied to the ends of the panel, internal pressure, and combined axial tension and internal pressure loadings. Additionally, reactive hoop loads were applied to the skin and frames of the panel along its edges. The panel successfully supported the required design loads in the pristine condition and with a severed stiffener. The panel also demonstrated that the PRSEUS concept could arrest the progression of damage, including crack arrestment and crack turning. This paper presents the nonlinear post-test analysis and correlation with test results for the curved PRSEUS panel. It is shown that nonlinear analysis can accurately calculate the behavior of a PRSEUS panel under tension, pressure and combined loading conditions.

  17. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    Science.gov (United States)

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes the field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and driver distraction, that are used to predict drivers' intentions. These parameters belong to the driver's model developed by the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). Two models each of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out; for distraction, in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.

  18. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
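The clustering above hinges on correlation metrics between eigenvectors. A common baseline metric for comparing mode shapes (the paper's metrics go beyond it to handle aliasing and coalescent modes) is the Modal Assurance Criterion, sketched here with an assumed function name.

```python
# Modal Assurance Criterion between two (possibly complex) mode-shape
# vectors: MAC = |phi1^H phi2|^2 / ((phi1^H phi1)(phi2^H phi2)), in [0, 1].
import numpy as np

def mac(phi1, phi2):
    num = abs(np.vdot(phi1, phi2)) ** 2
    return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)
```

MAC is scale-invariant, so two estimates of the same mode score 1 regardless of normalization, while orthogonal shapes score 0.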

  19. Analysis and test of a 16-foot radial rib reflector developmental model

    Science.gov (United States)

    Birchenough, Shawn A.

    1989-01-01

    Analytical and experimental modal tests were performed to determine the vibrational characteristics of a 16-foot diameter radial rib reflector model. Single rib analyses and experimental tests provided preliminary information relating to the reflector. A finite element model predicted mode shapes and frequencies of the reflector. The analyses correlated well with the experimental tests, verifying the modeling method used. The results indicate that five related, characteristic mode shapes form a group. The frequencies of the modes are determined by the relative phase of the radial ribs.

  20. Implementation of DSC model and application for analysis of field pile tests under cyclic loading

    Science.gov (United States)

    Shao, Changming; Desai, Chandra S.

    2000-05-01

    The disturbed state concept (DSC) model, and a new and simplified procedure for unloading and reloading behavior are implemented in a nonlinear finite element procedure for dynamic analysis for coupled response of saturated porous materials. The DSC model is used to characterize the cyclic behavior of saturated clays and clay-steel interfaces. In the DSC, the relative intact (RI) behavior is characterized by using the hierarchical single surface (HISS) plasticity model; and the fully adjusted (FA) behavior is modeled by using the critical state concept. The DSC model is validated with respect to laboratory triaxial tests for clay and shear tests for clay-steel interfaces. The computer procedure is used to predict field behavior of an instrumented pile subjected to cyclic loading. The predictions provide very good correlation with the field data. They also yield improved results compared to those from a HISS model with anisotropic hardening, partly because the DSC model allows for degradation or softening and interface response.

  1. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
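For the simplest case, a probit model with one predictor, the derived correlation can be written down directly from the latent-variable decomposition; this single-predictor formula is an illustration of the idea, not the paper's general result, and the function name is assumed.

```python
# Probit latent model: y* = b*x + e with e ~ N(0, 1). Then
# corr(y*, x) = b*sd(x) / sqrt(b^2 * var(x) + 1), since
# var(y*) = b^2 * var(x) + 1 and cov(y*, x) = b * var(x).
import math

def latent_correlation(b, sd_x):
    return b * sd_x / math.sqrt(b**2 * sd_x**2 + 1.0)
```

Because the latent error variance is fixed at 1, this correlation is comparable across samples in a way the raw coefficient b is not, which is the motivation the abstract describes.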

  2. Trend analysis of δ18O composition of precipitation in Germany: Combining Mann-Kendall trend test and ARIMA models to correct for higher order serial correlation

    Science.gov (United States)

    Klaus, Julian; Pan Chun, Kwok; Stumpp, Christine

    2015-04-01

    Spatio-temporal dynamics of stable oxygen (18O) and hydrogen (2H) isotopes in precipitation can be used as proxies for changing hydro-meteorological conditions and for regional and global climate patterns. While spatial patterns and distributions have gained much attention in recent years, temporal trends in stable isotope time series are rarely investigated and our understanding of them is still limited. This may result from a lack of proper trend-detection tools and of effort devoted to exploring trend processes. Here we make use of an extensive data set of stable isotopes in German precipitation. We investigate temporal trends of δ18O in precipitation at 17 observation stations in Germany between 1978 and 2009. To that end, we test different approaches for proper trend detection, accounting for first- and higher-order serial correlation, and examine whether significant trends in the isotope time series can be observed under the different models. We apply the Mann-Kendall trend test to the isotope series, using general multiplicative seasonal autoregressive integrated moving average (ARIMA) models that account for first- and higher-order serial correlation. The approach also accounts for the effects of temperature and precipitation amount on the trend, and we further investigate the role of geographic parameters in isotope trends. To benchmark our proposed approach, the ARIMA results are compared to a trend-free prewhitening (TFPW) procedure, the state-of-the-art method for removing first-order autocorrelation in environmental trend studies. Moreover, we explore whether higher-order serial correlation in the isotope series affects our trend results. The results show that three out of the 17 stations have significant changes when higher-order autocorrelation is adjusted for, and four stations show a significant trend when temperature and precipitation effects are considered. Significant trends in the isotope time series are generally observed at low-elevation stations (≤315 m a
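The building block of the procedure above is the Mann-Kendall trend test itself. The sketch below is the plain, non-seasonal test without ties or serial-correlation correction; the TFPW or ARIMA-based adjustments the abstract discusses would be applied on top of this statistic, and the function name is illustrative.

```python
# Mann-Kendall trend test (no-ties variance, normal approximation):
# S sums the signs of all pairwise differences; Z uses the continuity
# correction (S -/+ 1) and the two-sided p-value follows from Phi.
import math
from itertools import combinations

def mann_kendall(x):
    n = len(x)
    s = sum((xj > xi) - (xj < xi) for xi, xj in combinations(x, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance assuming no ties
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, z, p
```

A strictly monotone series of length n gives S = ±n(n-1)/2 and a p-value near zero, which is a convenient sanity check.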

  3. Modelling conditional correlations of asset returns: A smooth transition approach

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM-test is derived to test the constancy of correlations, and LM- and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five ...

  4. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    Science.gov (United States)

    Chambers, Jeffrey A.

    1994-01-01

    Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex spaceflight structure.
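The cross orthogonality check mentioned above can be sketched as a single matrix product: with both mode-shape sets mass-normalized, X = Phi_test^T M Phi_fem has diagonal terms near 1 and off-diagonal terms near 0 when test and analysis agree. The function and argument names here are illustrative, not from the thesis.

```python
# Cross-orthogonality matrix between a test mode-shape set and an
# analytical (FEM) mode-shape set, weighted by the mass matrix M.
import numpy as np

def cross_orthogonality(phi_test, phi_fem, M):
    """Columns of phi_test / phi_fem are mass-normalized mode shapes."""
    return phi_test.T @ M @ phi_fem
```

With a unit mass matrix, an orthonormal shape set checked against itself returns the identity, which is the ideal "perfect correlation" case.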

  5. Importance analysis for models with correlated variables and its sparse grid solution

    International Nuclear Information System (INIS)

    Li, Luyi; Lu, Zhenzhou

    2013-01-01

    For structural models involving correlated input variables, a novel interpretation of variance-based importance measures is proposed, based on the contribution of the correlated input variables to the variance of the model output. After this interpretation of the variance-based importance measures is compared with existing ones, two solutions for the variance-based importance measures of correlated input variables are built on sparse grid numerical integration (SGI): the double-loop nested sparse grid integration (DSGI) method and the single-loop sparse grid integration (SSGI) method. The DSGI method solves the importance measure by procedurally decreasing the dimensionality of the input variables, while the SSGI method performs importance analysis by extending the dimensionality of the inputs. Both make full use of the advantages of SGI and are well tailored to different situations. By analyzing the results of several numerical and engineering examples, it is found that the proposed interpretation of the importance measures of correlated input variables is reasonable, and that the proposed solution methods are efficient and accurate. -- Highlights: •The contribution of correlated variables to the variance of the output is analyzed. •A novel interpretation for variance-based indices of correlated variables is proposed. •Two solutions for variance-based importance measures of correlated variables are built.
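As a hedged illustration of the quantity being estimated (plain double-loop Monte Carlo, not the paper's sparse-grid solvers), a variance-based first-order index for a model with two correlated standard-normal inputs can be computed as S1 = Var(E[Y|X1]) / Var(Y), where the inner expectation samples X2 from its conditional distribution given X1. All names here are assumptions of the sketch.

```python
# Double-loop Monte Carlo estimate of the first-order importance of X1
# for Y = f(X1, X2), with (X1, X2) bivariate standard normal, corr = rho.
import numpy as np

def first_order_index(f, rho, n_outer=2000, n_inner=200, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=n_outer)
    cond_sd = np.sqrt(1 - rho**2)             # sd of X2 | X1
    inner_means = np.empty(n_outer)
    for k, x in enumerate(x1):
        x2 = rng.normal(rho * x, cond_sd, size=n_inner)   # X2 | X1 = x
        inner_means[k] = f(x, x2).mean()      # E[Y | X1 = x]
    # total output variance from an independent crude-MC sample
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=20000)
    var_y = f(z[:, 0], z[:, 1]).var()
    return inner_means.var() / var_y
```

For the additive test model Y = X1 + X2 with rho = 0.5, the exact index is (1 + rho)^2 / (2 + 2*rho) = 0.75, and the correlation visibly inflates X1's importance above the independent-input value of 0.5.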

  6. Computer-aided-engineering system for modeling and analysis of ECLSS integration testing

    Science.gov (United States)

    Sepahban, Sonbol

    1987-01-01

    The accurate modeling and analysis of two-phase fluid networks found in environmental control and life support systems is presently undertaken by computer-aided engineering (CAE) techniques whose generalized fluid dynamics package can solve arbitrary flow networks. The CAE system for integrated test bed modeling and analysis will also furnish interfaces and subsystem/test-article mathematical models. Three-dimensional diagrams of the test bed are generated by the system after performing the requisite simulation and analysis.

  7. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  9. FEM correlation and shock analysis of a VNC MEMS mirror segment

    Science.gov (United States)

    Aguayo, Eduardo J.; Lyon, Richard; Helmbrecht, Michael; Khomusi, Sausan

    2014-08-01

    Microelectromechanical systems (MEMS) are becoming more prevalent in today's advanced space technologies. The Visible Nulling Coronagraph (VNC) instrument, being developed at the NASA Goddard Space Flight Center, uses a MEMS Mirror to correct wavefront errors. This MEMS Mirror, the Multiple Mirror Array (MMA), is a key component that will enable the VNC instrument to detect Jupiter- and ultimately Earth-size exoplanets. Like other MEMS devices, the MMA faces several challenges associated with spaceflight. Therefore, Finite Element Analysis (FEA) is being used to predict the behavior of a single MMA segment under different spaceflight-related environments. Finite Element Analysis results are used to guide the MMA design and ensure its survival during launch and mission operations. A Finite Element Model (FEM) of the MMA has been developed using COMSOL. This model has been correlated to static loading on test specimens. The correlation was performed in several steps—simple beam models were correlated initially, followed by increasingly complex and higher fidelity models of the MMA mirror segment. Subsequently, the model has been used to predict the dynamic behavior and stresses of the MMA segment in a representative spaceflight mechanical shock environment. The results of the correlation and the stresses associated with a shock event are presented herein.

  10. Analysis of the forced vibration test of the Hualien large scale soil-structure interaction model using a flexible volume substructuring method

    International Nuclear Information System (INIS)

    Tang, H.T.; Nakamura, N.

    1995-01-01

    A 1/4-scale cylindrical reactor containment model was constructed in Hualien, Taiwan for soil-structure interaction (SSI) effect evaluation and SSI analysis procedure verification. Forced vibration tests were executed before backfill (FVT-1) and after backfill (FVT-2) to characterize soil-structure system characteristics under low excitations. A number of organizations participated in the pre-test blind prediction and post-test correlation analyses of the forced vibration test using various industry-familiar methods. In the current study, correlation analyses were performed using a three-dimensional flexible volume substructuring method. The results are reported and soil property sensitivities are evaluated in the paper. (J.P.N.)

  11. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    Science.gov (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program for the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system level environmental testing. The JUNO magnetic cleanliness program required setting up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system, and a test program and facility at JPL for testing system parts and subsystems. The magnetic modeling, simulation and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interferences. With Aerospace's magnetic modeling, simulation and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost-effective approach to a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remnant magnetization and the performance of hard and soft magnetic materials within the targeted space system under applied external magnetic fields.

  12. Topic Correlation Analysis for Bearing Fault Diagnosis Under Variable Operating Conditions

    Science.gov (United States)

    Chen, Chao; Shen, Fei; Yan, Ruqiang

    2017-05-01

    This paper presents a Topic Correlation Analysis (TCA) based approach for bearing fault diagnosis. In TCA, a Joint Mixture Model (JMM), which adapts Probabilistic Latent Semantic Analysis (PLSA), is constructed first. Then, JMM models the shared and domain-specific topics using a "fault vocabulary". After that, the correlations between the two kinds of topics are computed and used to build a mapping matrix. Furthermore, a new shared space spanned by the shared and mapped domain-specific topics is set up, in which the distribution gap between different domains is reduced. Finally, a classifier is trained with mapped features which follow a different distribution, and the trained classifier is tested on target bearing data. Experimental results justify the superiority of the proposed approach over the state-of-the-art baselines and show that it can diagnose bearing faults efficiently and effectively under variable operating conditions.

  13. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    Science.gov (United States)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data, and flight test results. An analysis of wind tunnel tests on a 0.0275 scale model at Reynolds numbers up to 3.05 x 10^6 per MAC is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
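
    The extrapolation idea named in the abstract, flat-plate skin friction scaled by component shape factors, can be sketched as follows. The Prandtl-Schlichting turbulent flat-plate law is a standard relation, but the form factor and wetted-area ratio below are hypothetical placeholders, not C-141A values.

```python
import math

def cf_flat_plate(Re):
    # Prandtl-Schlichting turbulent flat-plate skin-friction law
    return 0.455 / (math.log10(Re) ** 2.58)

def profile_drag_coeff(Re, form_factor, wetted_area_ratio):
    # CD0 ~ FF * Cf * (S_wet / S_ref); FF and the area ratio are
    # illustrative component inputs, not values from the report
    return form_factor * cf_flat_plate(Re) * wetted_area_ratio

# Extrapolate from a wind-tunnel Reynolds number to a flight Reynolds number
cd_tunnel = profile_drag_coeff(3.05e6, 1.25, 4.0)
cd_flight = profile_drag_coeff(1.0e8, 1.25, 4.0)
print(cd_tunnel, cd_flight)  # Cf (and hence CD0) falls as Re grows
```

    The point of the method is visible in the two numbers: skin-friction drag decreases with Reynolds number, so a model-scale measurement must be scaled down before comparison with flight data.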

  14. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities, which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges, which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high speed cameras to measure three-dimensional, real-time displacements and strains, was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, hot-fire test data collection, and post-test analysis and results are presented in this paper.
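
    A toy version of the correlation step underlying DIC, much simpler than a production system: track an image subset by maximizing zero-normalized cross-correlation over integer-pixel shifts. All images and sizes here are synthetic illustrations.

```python
import numpy as np

def ncc(a, b):
    # zero-normalized cross-correlation of two equal-size image subsets
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_subset(ref, deformed, top, left, size, search=5):
    # brute-force integer-pixel search for the shift that maximizes NCC;
    # real DIC codes refine this to sub-pixel accuracy via interpolation
    template = ref[top:top + size, left:left + size]
    best = (-2.0, 0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            r, c = top + dv, left + du
            if r < 0 or c < 0 or r + size > deformed.shape[0] or c + size > deformed.shape[1]:
                continue
            score = ncc(template, deformed[r:r + size, c:c + size])
            if score > best[0]:
                best = (score, dv, du)
    return best  # (correlation, row shift, col shift)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                              # synthetic speckle pattern
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))      # known rigid shift
print(track_subset(ref, deformed, 16, 16, 16))
```

    Repeating this search over a grid of subsets yields a full displacement field, from which strains are computed by differentiation, which is what the stereo camera pair provides in three dimensions on the test stand.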

  15. Thermal and Fluid Modeling of the CRYogenic Orbital TEstbed (CRYOTE) Ground Test Article (GTA)

    Science.gov (United States)

    Piryk, David; Schallhorn, Paul; Walls, Laurie; Stopnitzky, Benny; Rhys, Noah; Wollen, Mark

    2012-01-01

    The purpose of this study was to anchor thermal and fluid system models to data acquired from a ground test article (GTA) for the CRYogenic Orbital TEstbed - CRYOTE. To accomplish this analysis, it was broken into four primary tasks. These included model development, pre-test predictions, testing support at Marshall Space Flight Center (MSFC) and post-test correlations. Information from MSFC facilitated the task of refining and correlating the initial models. The primary goal of the modeling/testing/correlating efforts was to characterize heat loads throughout the ground test article. Significant factors impacting the heat loads included radiative environments, multi-layer insulation (MLI) performance, tank fill levels, tank pressures, and even contact conductance coefficients. This paper demonstrates how analytical thermal/fluid networks were established, and it includes supporting rationale for specific thermal responses seen during testing.

  16. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The load history was obtained from an ICECO analysis, using the equations of state for the source and the water. A detailed check of this solution was made to assure that the derived loading did provide the correct input. The DYNAPCON code was then used for the analysis of the prestressed concrete containment model. This analysis required the simulation of prestressing and the response of the model to the applied transient load. The calculations correctly predict the magnitudes of displacements of the PCRV model. In addition, the displacement time histories obtained from the calculations reproduce the general features of the experimental records: the period elongation and amplitude increase as compared to an elastic solution, and also the absence of permanent displacement. However, the calculated period still underestimates that of the experiment, while the amplitude is generally somewhat larger.

  17. Testing power-law cross-correlations: Rescaled covariance test

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2013-01-01

    Roč. 86, č. 10 (2013), 418-1-418-15 ISSN 1434-6028 R&D Projects: GA ČR GA402/09/0965 Institutional support: RVO:67985556 Keywords : power-law cross-correlations * testing * long-term memory Subject RIV: AH - Economics Impact factor: 1.463, year: 2013 http://library.utia.cas.cz/separaty/2013/E/kristoufek-testing power-law cross-correlations rescaled covariance test.pdf

  18. MARS-LMR modeling for the post-test analysis of Phenix End-of-Life natural circulation

    International Nuclear Information System (INIS)

    Jeong, Hae Yong; Ha, Kwi Seok; Chang, Won Pyo; Lee, Kwi Lim

    2011-01-01

    For a successful design and analysis of a Sodium cooled Fast Reactor (SFR), it is required to have a reliable and well-proven system analysis code. To achieve this purpose, KAERI is enhancing the modeling capability of the MARS code by adding SFR-specific models such as a pressure drop model, heat transfer model and reactivity feedback model. This version of MARS-LMR will be used as a basic tool in the design and analysis of future SFR systems in Korea. Before wide application of the MARS-LMR code, it is required to verify and validate the code models through analyses of appropriate experimental data or analytical results. The end-of-life test of the Phenix reactor performed by the CEA provided a unique opportunity to obtain reliable test data which are very valuable in the validation and verification of an SFR system analysis code. KAERI joined the international program for the analysis of the Phenix end-of-life natural circulation test, coordinated by the IAEA, in 2008. The main natural circulation test was completed in 2009. Before the test, KAERI performed a pre-test analysis based on the design conditions provided by the CEA. Then, a blind post-test analysis was performed based on the test conditions measured during the test, before the CEA provided the final test results. Finally, the final post-test analysis was performed recently to predict the test results as accurately as possible. This paper introduces the modeling approach of MARS-LMR used in the final post-test analysis and summarizes the major results of the analysis.

  19. Pulmonary function tests correlated with thoracic volumes in adolescent idiopathic scoliosis.

    Science.gov (United States)

    Ledonio, Charles Gerald T; Rosenstein, Benjamin E; Johnston, Charles E; Regelmann, Warren E; Nuckley, David J; Polly, David W

    2017-01-01

    Scoliosis deformity has been linked with deleterious changes in the thoracic cavity that affect pulmonary function. The causal relationship between spinal deformity and pulmonary function has yet to be fully defined. It has been hypothesized that deformity correction improves pulmonary function by restoring respiratory muscle efficiency and increasing the space available to the lungs. This research aims to correlate pulmonary function and thoracic volume before and after scoliosis correction. A retrospective correlational analysis between thoracic volume modeled from plain x-rays and pulmonary function tests was conducted. Adolescent idiopathic scoliosis patients enrolled in a multicenter database were sorted by pre-operative Total Lung Capacity (TLC) % predicted values from their Pulmonary Function Tests (PFT). Ten patients with the best and ten patients with the worst TLC values were included. Modeled thoracic volume and TLC values were compared before and 2 years after surgery. Scoliosis correction resulted in an increase in thoracic volume for patients with the worst initial TLCs (11.7%) and those with the best initial TLCs (12.5%). In the adolescents with the most severe pulmonary restriction prior to surgery, the post-operative changes in total lung capacity and thoracic volume were strongly correlated (r² = 0.839). The increase in thoracic volume in this group was 373.1 cm³ (11.7%), which correlated with a 21.2% improvement in TLC. Scoliosis correction in adolescents was found to increase thoracic volume and is strongly correlated with improved TLC in cases with severe restrictive pulmonary function, but no correlation was found in cases with normal pulmonary function. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:175-182, 2017.

  20. Two-dimensional multifractal cross-correlation analysis

    International Nuclear Information System (INIS)

    Xi, Caiping; Zhang, Shuning; Xiong, Gang; Zhao, Huichang; Yang, Yonghong

    2017-01-01

    Highlights: • We study the mathematical models of 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Present the definition of the two-dimensional N²-partitioned multiplicative cascading process. • Do the comparative analysis of 2D-MC by 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Provide a reference on the choice and parameter settings of these methods in practice. - Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross-correlations. This paper presents two-dimensional multifractal cross-correlation analysis based on the partition function (2D-MFXPF), two-dimensional multifractal cross-correlation analysis based on detrended fluctuation analysis (2D-MFXDFA) and two-dimensional multifractal cross-correlation analysis based on detrended moving average analysis (2D-MFXDMA). We apply these methods to pairs of two-dimensional multiplicative cascades (2D-MC) to do a comparative study. Then, we apply 2D-MFXDFA to real images and unveil intriguing multifractality in the cross-correlations of the material structures. Finally, we give the main conclusions and provide a valuable reference on how to choose the multifractal algorithms in potential applications in the field of SAR image classification and detection.
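
    A one-dimensional sketch of the detrended cross-correlation idea behind these methods (the paper's versions are two-dimensional and multifractal; this simplified illustration only computes a single detrended cross-covariance fluctuation per scale, with all series synthetic):

```python
import numpy as np

def dcca_fluctuation(x, y, s):
    # detrended cross-covariance fluctuation F(s) at window size s (1D sketch)
    X = np.cumsum(x - x.mean())          # integrated profiles
    Y = np.cumsum(y - y.mean())
    n = len(X) // s
    t = np.arange(s)
    f2 = []
    for i in range(n):
        xs = X[i * s:(i + 1) * s]
        ys = Y[i * s:(i + 1) * s]
        # remove a linear trend fitted within each window
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2.append(np.mean(rx * ry))
    return np.sqrt(np.abs(np.mean(f2)))

rng = np.random.default_rng(1)
z = rng.standard_normal(4096)
x = z + 0.1 * rng.standard_normal(4096)  # two noisy observations of the
y = z + 0.1 * rng.standard_normal(4096)  # same signal: strong cross-correlation
print([round(dcca_fluctuation(x, y, s), 3) for s in (16, 64, 256)])
```

    Plotting log F(s) against log s and fitting a slope gives the cross-correlation scaling exponent; the multifractal variants generalize this by raising the window-wise fluctuations to a range of moments q before averaging.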

  1. A test of conformal invariance: Correlation functions on a disk

    International Nuclear Information System (INIS)

    Badke, R.; Rittenberg, V.; Ruegg, H.

    1985-06-01

    Using conformal invariance one can derive the correlation functions on a disk from those in the half-plane. The correlation function in the half-plane is determined by the 'small' conformal invariance up to an unknown function of one variable. By measuring the correlation function for two different configurations with the Monte Carlo method, the unknown function can be eliminated and one obtains a test of conformal invariance. It is shown that the Ising and the three-state Potts model pass the test for very small lattices. (orig.)

  2. Analysis of the existing correlations of effective friction angle for eastern piedmont soils of Bogota from in situ tests

    Directory of Open Access Journals (Sweden)

    July E. Carmona-Álvarez

    2015-07-01

    Full Text Available Estimating the effective friction angle of soil from in situ tests is difficult because of the high strain rates involved in this kind of test, which tends to be invasive and to disturb the soil in the vicinity of the test depth, and even the sample that is eventually taken at the site. Likewise, most of the correlations found in the current literature for obtaining the effective friction angle from field tests have been developed for soils from other regions; when they are applied to tropical soils they therefore show high scatter if the field-derived parameter values are compared with laboratory results. This research uses in situ tests to determine, through analysis of different correlations, which one fits best to the specific conditions of the piedmont soils of Bogota. The study uses data from SPT (widely used in Colombia) and SPT-T (never before conducted in the country) tests, carried out according to the norms appropriate to each test, taking into account that for the SPT-T no local standard governs such tests. Corrections were applied to the field procedures for effective confinement and for the energy transferred by the SPT hammer, since the state of the art identifies these as the factors that most affect the reliability of the final results. The final results show which of the correlation methodologies comes closest to the effective friction angle obtained from laboratory tests.

  3. Systematic reviews: I. The correlation between laboratory tests on marginal quality and bond strength. II. The correlation between marginal quality and clinical outcome.

    Science.gov (United States)

    Heintze, Siegward D

    2007-01-01

    An accepted principle in restorative dentistry states that restorations should be placed with the best marginal quality possible to avoid postoperative sensitivity, marginal discoloration, and secondary caries. Different laboratory methods claim to predict the clinical performance of restorative materials, for example, tests of bond strength and microleakage and gap analysis. The purpose of this review was twofold: (1) find studies that correlated the results of bond strength tests with either microleakage or gap analysis for the same materials, and (2) find studies that correlated the results of microleakage and/or gaps with the clinical parameters for the same materials. Furthermore, influencing factors on the results of the laboratory tests were reviewed and assessed. For the first question, searches for studies were conducted in the MEDLINE database and IADR/AADR abstracts online with specific search and inclusion criteria. The outcome for each study was assessed on the basis of the statistical test applied in the study, and finally the number of studies with or without correlation was compiled. For the second question, results of the quantitative marginal analysis of Class V restorations published by the University of Zürich with the same test protocol and prospective clinical trials were searched that investigated the same materials for at least 2 years in Class V cavities. Pearson correlation coefficients were calculated for pooled data of materials and clinical outcome parameters such as retention loss, marginal discoloration, marginal integrity, and secondary caries. For the correlation of dye penetration and clinical outcome, studies on Class V restorations published by the same research institute were searched in MEDLINE that examined the same adhesive systems as the selected clinical trials. For the correlation bond strength/microleakage, 30 studies were included in the review, and for the correlation bond strength/gap analysis 18 studies. For both

  4. Error analysis of supercritical water correlations using ATHLET system code under DHT conditions

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, J., E-mail: jeffrey.samuel@uoit.ca [Univ. of Ontario Inst. of Tech., Oshawa, ON (Canada)

    2014-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is used for analysis of anticipated and abnormal plant transients, including safety analysis of Light Water Reactors (LWRs) and Russian Graphite-Moderated High Power Channel-type Reactors (RBMKs). The range of applicability of ATHLET has been extended to supercritical water by updating the fluid- and transport-properties packages, thus enabling the code to be used in the analysis of SuperCritical Water-cooled Reactors (SCWRs). Several well-known heat-transfer correlations for supercritical fluids were added to the ATHLET code and a numerical model was created to represent an experimental test section. In this work, the error in the Heat Transfer Coefficient (HTC) calculation by the ATHLET model is studied along with the ability of the various correlations to predict different heat transfer regimes. (author)

  5. Correlation of numerical and experimental analysis for dynamic behaviour of a body-in-white (BIW) structure

    Directory of Open Access Journals (Sweden)

    Abdullah N.A.Z.

    2017-01-01

    Full Text Available In order to determine the reliability of data gathered using the computational version of finite element analysis, experimental data are often used for validation. Finite element analysis can sometimes be inaccurate, especially when applied to a large and complex structure such as a body-in-white, owing to difficulties in modelling the joints, boundary conditions and damping of the structure. In this study, a process of comparison and validation of a model-based test design against modal testing results was conducted. Modal properties (natural frequencies, mode shapes, and damping ratios) of a body-in-white (BIW) structure were determined using both experimental modal analysis (EMA) and finite element analysis (FEA). Correlation of both sets of data was performed for validation. A significant error appeared between the two sets of data. The discrepancies that appeared after correlation were then reduced by performing a model updating procedure. The results presented here demonstrate the effectiveness of the model updating technique in improving the model of a complex structure such as a BIW structure.
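
    The FEA/EMA correlation step described above is commonly quantified with the Modal Assurance Criterion (MAC), a normalized correlation between mode-shape vectors. The sketch below uses hypothetical mode shapes, not BIW data.

```python
import numpy as np

def mac(phi_a, phi_e):
    # Modal Assurance Criterion between an analytical and an
    # experimental mode shape: |phi_a . phi_e|^2 normalized by norms
    num = np.abs(phi_a @ phi_e) ** 2
    return float(num / ((phi_a @ phi_a) * (phi_e @ phi_e)))

# Hypothetical mode shapes sampled at 5 measurement points
phi_fea = np.array([0.0, 0.5, 0.87, 1.0, 0.87])
phi_ema = phi_fea + np.array([0.02, -0.03, 0.01, 0.0, -0.02])  # small test noise

print(round(mac(phi_fea, phi_ema), 3))                              # near 1: well correlated
print(round(mac(phi_fea, np.array([0.0, 1.0, 0.0, -1.0, 0.0])), 3)) # poor match
```

    In model updating, pairs with MAC near 1 are treated as matched modes, and the frequency differences of those pairs drive the adjustment of stiffness, joint, and damping parameters.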

  6. Structural Analysis of Covariance and Correlation Matrices.

    Science.gov (United States)

    Joreskog, Karl G.

    1978-01-01

    A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…

  7. Friction correction for model ship resistance and propulsion tests in ice at NRC's OCRE-RC

    Directory of Open Access Journals (Sweden)

    Michael Lau

    2018-05-01

    Full Text Available This paper documents the results of a preliminary analysis of the influence of the hull-ice friction coefficient on model resistance and power predictions and their correlation to full-scale measurements. The study is based on previous model-scale/full-scale correlations performed on the National Research Council - Ocean, Coastal, and River Engineering Research Center's (NRC/OCRE-RC) model test data. There are two objectives for the current study: (1) to validate NRC/OCRE-RC's modeling standards regarding its practice of specifying a Correlation Friction Coefficient (CFC) of 0.05 for all its ship models; and (2) to develop a correction methodology for its resistance and propulsion predictions when a model is prepared with an ice friction coefficient that deviates slightly from the CFC of 0.05. The mean CFCs of 0.056 and 0.050 for perfect correlation, as computed from the resistance and power analyses respectively, justify NRC/OCRE-RC's selection of 0.05 as the CFC for all its models. Furthermore, a procedure for minor friction corrections is developed. Keywords: Model test, Ice resistance, Power, Friction correction, Correlation friction coefficient

  8. Biologic variability and correlation of platelet function testing in healthy dogs.

    Science.gov (United States)

    Blois, Shauna L; Lang, Sean T; Wood, R Darren; Monteith, Gabrielle

    2015-12-01

    Platelet function tests are influenced by biologic variability, including inter-individual (CVG) and intra-individual (CVI), as well as analytic (CVA) variability. Variability in canine platelet function testing is unknown, but if excessive, would make it difficult to interpret serial results. Additionally, the correlation between platelet function tests is poor in people, but not well described in dogs. The aims were to: (1) identify the effect of variation in preanalytic factors (venipuncture, elapsed time until analysis) on platelet function tests; (2) calculate analytic and biologic variability of adenosine diphosphate (ADP) and arachidonic acid (AA)-induced thromboelastograph platelet mapping (TEG-PM), ADP-, AA-, and collagen-induced whole blood platelet aggregometry (WBA), and collagen/ADP and collagen/epinephrine platelet function analysis (PFA-CADP, PFA-CEPI); and (3) determine the correlation between these variables. In this prospective observational trial, platelet function was measured once every 7 days, for 4 consecutive weeks, in 9 healthy dogs. In addition, CBC, TEG-PM, WBA, and PFA were performed. Overall coefficients of variability ranged from 13.3% to 87.8% for the platelet function tests. Biologic variability was highest for AA-induced maximum amplitude generated during TEG-PM (MAAA; CVG = 95.3%, CVI = 60.8%). Use of population-based reference intervals (RI) was determined appropriate only for PFA-CADP (index of individuality = 10.7). There was poor correlation between most platelet function tests. Use of population-based RI appears inappropriate for most platelet function tests, and tests poorly correlate with one another. Future studies on biologic variability and correlation of platelet function tests should be performed in dogs with platelet dysfunction and those treated with antiplatelet therapy. © 2015 American Society for Veterinary Clinical Pathology.
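
    The variance components behind CVI, CVG, and the index of individuality can be sketched as follows, using synthetic repeated measurements. For simplicity the ratio CVI/CVG is used here, folding analytic variability into the within-dog term; published definitions often use sqrt(CVI² + CVA²)/CVG.

```python
import numpy as np

def variability_components(data):
    # data: rows = individuals (dogs), cols = repeated weekly measurements
    grand = data.mean()
    within = data.var(axis=1, ddof=1).mean()    # intra-individual (+ analytic)
    between = data.mean(axis=1).var(ddof=1)     # inter-individual
    cv_i = 100 * np.sqrt(within) / grand
    cv_g = 100 * np.sqrt(between) / grand
    # simplified index of individuality; a low value (< ~0.6) suggests
    # population-based reference intervals are inappropriate
    return cv_i, cv_g, cv_i / cv_g

rng = np.random.default_rng(2)
setpoints = rng.normal(100, 20, size=9)                     # 9 dogs, distinct set points
data = setpoints[:, None] + rng.normal(0, 5, size=(9, 4))   # 4 weekly repeats per dog
cv_i, cv_g, ii = variability_components(data)
print(round(cv_i, 1), round(cv_g, 1), round(ii, 2))
```

    In this synthetic case each dog tracks its own set point, so between-dog variability dominates and the index of individuality is low, which is the situation in which serial comparison against an individual's own baseline is more informative than a population interval.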

  9. Can human resources induce sustainability in business?: Modeling, testing and correlating HR index and company's business results

    Directory of Open Access Journals (Sweden)

    Zubović Jovan

    2015-01-01

    Full Text Available Abstract In this paper the authors analyze the impact of a composite human resource index on sustainable growth in a specific business sector in a transition country. The sustainability of a country's economy relies increasingly on the knowledge economy, which has been embedded in strategies of sustainable development throughout Europe. The knowledge economy is largely based on human resources and the way they are organized and managed in companies actively operating in competitive markets. In order to confirm the importance of the human resources (HR) index, results were tested by modeling, measuring and correlating the HR index with business results at the micro level. The tests were conducted on data from a survey of the Serbian meat processing industry. The results were then compared with the results of a survey conducted in the financial industry. Moreover, a model was developed that could be applied in all countries that lack official statistical data on the level of investment in human resources. The focus was on determining the direction of the correlation, and hence creating a research model applicable in all business sectors. A significant one-way correlation was found between business performance and an increased HR index. This confirms that in the Serbian economy, which has recorded an overall decline during transition, certain business sectors, and especially companies with high levels of investment in improving their HR index, record above-average and sustainable growth.

  10. Kolmogorov-Smirnov test for spatially correlated data

    Science.gov (United States)

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as undistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated in several degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested by two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
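
    A sketch of the idea, with a moving-block bootstrap standing in for the paper's annealing-based spatially correlated resampling (block size, series, and all numbers below are illustrative, not from the paper):

```python
import numpy as np

def ks_statistic(a, b):
    # two-sample Kolmogorov-Smirnov D: max distance between empirical CDFs
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(cdf_a - cdf_b).max())

def block_bootstrap_pvalue(a, b, block=8, n_boot=500, seed=0):
    # crude stand-in for spatially correlated resampling: a moving-block
    # bootstrap preserves short-range correlation within each resample
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    d_obs = ks_statistic(a, b)

    def resample(n):
        starts = rng.integers(0, len(pooled) - block, size=n // block + 1)
        out = np.concatenate([pooled[s:s + block] for s in starts])
        return out[:n]

    count = sum(
        ks_statistic(resample(len(a)), resample(len(b))) >= d_obs
        for _ in range(n_boot)
    )
    return d_obs, count / n_boot

rng = np.random.default_rng(3)
# synthetic "spatially correlated" sequences: white noise plus a random-walk drift
x = 0.05 * np.cumsum(rng.standard_normal(200)) + rng.standard_normal(200)
y = 0.05 * np.cumsum(rng.standard_normal(200)) + rng.standard_normal(200)
print(block_bootstrap_pvalue(x, y))
```

    Because correlated resamples reproduce less independent information than i.i.d. resamples of the same size, the null distribution of D widens, which is consistent with the paper's finding that accounting for spatial correlation yields larger p-values.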

  11. Generalized canonical correlation analysis with missing values

    NARCIS (Netherlands)

    M. van de Velden (Michel); Y. Takane

    2009-01-01

    Two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach,

  12. Particle-particle correlations and lifetimes of composite nuclei: New tests for the evaporation model and for statistical equilibration

    International Nuclear Information System (INIS)

    DeYoung, P.A.; Gelderloos, C.J.; Kortering, D.; Sarafa, J.; Zienert, K.; Gordon, M.S.; Fineman, B.J.; Gilfoyle, G.P.; Lu, X.; McGrath, R.L.; de Castro Rizzo, D.M.; Alexander, J.M.; Auger, G.; Kox, S.; Vaz, L.C.; Beck, C.; Henderson, D.J.; Kovar, D.G.; Vineyard, M.F.; Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794; Department of Chemistry, State University of New York at Stony Brook, Stony Brook, New York 11794; Argonne National Laboratory, Argonne, Illinois 60439)

    1990-01-01

    We present data for small-angle particle-particle correlations from the reactions 80, 140, 215, and 250 MeV ¹⁶O + ²⁷Al → p-p or p-d. The main features of these data are anticorrelations for small relative momenta (≤25 MeV/c) that strengthen with increasing bombarding energy. Statistical model calculations have been performed to predict the mean lifetimes for each step of evaporative decay, and then to simulate the trajectories of the particle pairs and the resulting particle correlations. This simulation accounts very well for the trends of the data and can provide an important new test of the hypothesis of equilibration on which the model is built.

  13. Information Theory for Correlation Analysis and Estimation of Uncertainty Reduction in Maps and Models

    Directory of Open Access Journals (Sweden)

    J. Florian Wellmann

    2013-04-01

    The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are applied for a detailed analysis of spatial uncertainty correlations. The aim is to determine (i) which areas in a spatial analysis share information, and (ii) where, and by how much, additional information would reduce uncertainties. As an illustration, a typical geological example is evaluated: the case of a subsurface layer with uncertain depth, shape and thickness. Mutual information and multivariate conditional entropies are determined based on multiple simulated model realisations. Even for this simple case, the measures not only provide a clear picture of uncertainties and their correlations but also give detailed insights into the potential reduction of uncertainties at each position, given additional information at a different location. The methods are directly applicable to other types of spatial uncertainty evaluations, especially where multiple realisations of a model simulation are analysed. In summary, the application of information theoretic measures opens up the path to a better understanding of spatial uncertainties, and their relationship to information and prior knowledge, for cases where uncertain property distributions are spatially analysed and visualised in maps and models.
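
The entropy bookkeeping described above can be illustrated on synthetic realisations. In the sketch below, a binary "lithology" indicator at two subsurface cells across many model realisations is invented; mutual information measures the information the cells share, and the conditional entropy is the uncertainty that would remain at one cell after observing the other.

```python
import numpy as np

def entropy_bits(labels):
    # Shannon entropy (bits) of a discrete label sequence
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(a, b):
    # I(A;B) = H(A) + H(B) - H(A,B) for paired discrete labels
    joint = np.array([f"{u}|{v}" for u, v in zip(a, b)])
    return entropy_bits(a) + entropy_bits(b) - entropy_bits(joint)

# Synthetic 'model realisations': a binary lithology indicator observed at
# two cells across 5000 realisations; cell B mostly copies cell A.
rng = np.random.default_rng(1)
cell_a = rng.integers(0, 2, size=5000)
flip = rng.random(5000) < 0.1
cell_b = np.where(flip, 1 - cell_a, cell_a)

mi = mutual_information(cell_a, cell_b)     # information shared by the cells
h_b_given_a = entropy_bits(cell_b) - mi     # uncertainty left after seeing A
```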

  14. Analysis and model testing of Super Tiger Type B packaging in accident environments

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Romesberg, L.E.; May, R.A.; Joseph, B.J.

    1980-01-01

    Based on previous scale model test results with more rigid systems and the subsystem tests on drums, it is believed that the scaled models realistically replicate full scale system behavior. Future work will be performed to obtain improved stiffness data on the Type A containers. These data will be incorporated into the finite element model, and improved correlation with the test results is expected. Review of the scale model transport system test results indicated that the method of attachment of the Super Tiger to the trailer was the primary cause for detachment of the outer door during the one-eighth scale grade-crossing test. Although the container seal on the scale model of Super Tiger was not adequately modeled to provide a leak-tight seal, loss of the existing seal in a full scale test can be inferred from the results of the one-quarter scale model grade-crossing test. In each test, approximately two-thirds of the model drums were estimated to have deformed sufficiently to predict loss of drum head closure seal, with several partially losing their contents within the overpack. In no case were drums ejected from the overpack, nor was there evidence of material loss in excess of the amount assumed in the WIPP EIS from any of the Super Tiger models tested. 9 figures

  15. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data for studying their association with disease and health.

  16. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    Science.gov (United States)

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Analysis of correlations between sites in models of protein sequences

    International Nuclear Information System (INIS)

    Giraud, B.G.; Lapedes, A.; Liu, L.C.

    1998-01-01

    A criterion based on conditional probabilities, related to the concept of algorithmic distance, is used to detect correlated mutations at noncontiguous sites on sequences. We apply this criterion to the problem of analyzing correlations between sites in protein sequences; however, the analysis applies generally to networks of interacting sites with discrete states at each site. Elementary models, where explicit results can be derived easily, are introduced. The number of states per site considered ranges from 2, illustrating the relation to familiar classical spin systems, to 20 states, suitable for representing amino acids. Numerical simulations show that the criterion remains valid even when the genetic history of the data samples (e.g., protein sequences), as represented by a phylogenetic tree, introduces nonindependence between samples. Statistical fluctuations due to finite sampling are also investigated and do not invalidate the criterion. A subsidiary result is found: The more homogeneous a population, the more easily its average properties can drift from the properties of its ancestor. © 1998 The American Physical Society

  18. Quantitative CT analysis of honeycombing area in idiopathic pulmonary fibrosis: Correlations with pulmonary function tests.

    Science.gov (United States)

    Nakagawa, Hiroaki; Nagatani, Yukihiro; Takahashi, Masashi; Ogawa, Emiko; Tho, Nguyen Van; Ryujin, Yasushi; Nagao, Taishi; Nakano, Yasutaka

    2016-01-01

    The 2011 official statement of idiopathic pulmonary fibrosis (IPF) mentions that the extent of honeycombing and the worsening of fibrosis on high-resolution computed tomography (HRCT) in IPF are associated with an increased risk of mortality. However, there are few reports on the quantitative computed tomography (CT) analysis of honeycombing area. In this study, we first proposed a computer-aided method for quantitative CT analysis of honeycombing area in patients with IPF. We then evaluated the correlations of honeycombing area measured by the proposed method with that estimated by radiologists and with parameters of PFTs. Chest HRCTs and pulmonary function tests (PFTs) of 36 IPF patients, who were diagnosed using HRCT alone, were retrospectively evaluated. Two thoracic radiologists independently estimated the honeycombing area as Identified Area (IA) and the percentage of honeycombing area to total lung area as Percent Area (PA) on 3 axial CT slices for each patient. We also developed a computer-aided method to measure the honeycombing area on CT images of those patients. The total honeycombing area as CT honeycombing area (HA) and the percentage of honeycombing area to total lung area as CT %honeycombing area (%HA) were derived from the computer-aided method for each patient. HA derived from three CT slices was significantly correlated with IA (ρ=0.65 for Radiologist 1 and ρ=0.68 for Radiologist 2). %HA derived from three CT slices was also significantly correlated with PA (ρ=0.68 for Radiologist 1 and ρ=0.70 for Radiologist 2). HA and %HA derived from all CT slices were significantly correlated with FVC (%pred.), DLCO (%pred.), and the composite physiologic index (CPI) (HA: ρ=-0.43, ρ=-0.56, ρ=0.63 and %HA: ρ=-0.60, ρ=-0.49, ρ=0.69, respectively). The honeycombing area measured by the proposed computer-aided method was correlated with that estimated by expert radiologists and with parameters of PFTs. This quantitative CT analysis of

  19. Genetic analysis of somatic cell score in Danish dairy cattle using random regression test-day model

    DEFF Research Database (Denmark)

    Elsaid, Reda; Sabry, Ayman; Lund, Mogens Sandø

    2011-01-01

    ,233 Danish Holstein cows, were extracted from the national milk recording database. Each data set was analyzed with random regression models using AI-REML. Fixed effects in all models were age at first calving, herd test day, days carrying calf, effects of germ plasm importation (e.g. additive breed effects......) and low between the beginning and the end of lactation. The estimated environmental correlations were lower than the genetic correlations, but the trends were similar. Based on test-day records, the accuracy of genetic evaluations for SCC should be improved when the variation in heritabilities...

  20. Analysis and model-tests on vortex-induced oscillation of bridges; Kyoryo no uzu reishin ni kansuru sanjigen oto kaiseki to fudo jikken

    Energy Technology Data Exchange (ETDEWEB)

    Yamamura, N. [Hitachi Zosen Corp., Osaka (Japan); Ogasawara, M. [Kansai Electric Power Co. Inc., Osaka (Japan); Shiraishi, N. [Maizuru College of Technology, Kyoto (Japan); Nanjo, M.

    1996-07-21

    In order to predict the three-dimensional response to vortex-induced oscillation of bridges, a model was investigated using the aerodynamic force coefficient, including vortex-induced and self-excited forces, and the nonlinear response coefficient expressing the constancy of the response to vortex-induced oscillation. The analysis took into account the change of frequency in wind due to the self-excited force, the aerodynamic damping term, the effect of the mode, and the correlation of the vortex-induced force along the member axis. The aerodynamic force and nonlinear response coefficients were identified from the homogeneous and turbulent flow results of wind tunnel tests using a two-dimensional spring-supported rigid body model with varied damping factor. The aerodynamic damping term can be estimated from the nonlinear aerodynamic force coefficient, but it was sufficient to calculate it from the quasi-stationary coefficient for general bridge profiles. The correlation of the vortex-induced force was obtained from measurements of the vertical variation components of the trailing flow in the resonance state, or of the pressure distribution on the member surface. When compared with the wind tunnel test of a three-dimensional model of a cable-stayed bridge, the response amplitude from the present analysis method agreed with the test results better than that from the method in which the amplitude of the two-dimensional model was corrected. 10 refs., 4 figs., 3 tabs.

  1. Characterization of zinc alloy by sheet bulging test with analytical models and digital image correlation

    Science.gov (United States)

    Vitu, L.; Laforge, N.; Malécot, P.; Boudeau, N.; Manov, S.; Milesi, M.

    2018-05-01

    Zinc alloys are used in a wide range of applications such as electronics, automotive and building construction. Their various shapes are generally obtained by metal forming operations such as stamping. Therefore, it is important to characterize the material with adequate characterization tests. The Sheet Bulging Test (SBT) is well recognized in the metal forming community. Different theoretical models from the literature for the evaluation of the thickness and radius of the deformed sheet in SBT have been studied in order to obtain the hardening curve of different materials. These theoretical models present the advantage that the experimental procedure is very simple, but Koç et al. showed their limitation, since the combination of thickness and radius evaluations depends on the material. As zinc alloys are strongly anisotropic with a special crystalline structure, a procedure is adopted for characterizing the hardening curve of a zinc alloy. The anisotropy is first studied with tensile tests, and SBT with elliptical dies is also investigated. In parallel, Digital Image Correlation (DIC) measurements are carried out. The results obtained from the theoretical models and the DIC measurements are compared. Measurements on post-mortem specimens complete the comparison. Finally, the DIC measurements give better results, and the resulting hardening curve of the studied zinc alloy is provided.

  2. Fluctuation correlation models for receptor immobilization

    Science.gov (United States)

    Fourcade, B.

    2017-12-01

    Nanoscale dynamics with cycles of receptor diffusion and immobilization by cell-external or cell-internal factors is a key process in living-cell adhesion phenomena, at the origin of a plethora of signal transduction pathways. Motivated by modern correlation microscopy approaches, the receptor correlation functions in physical models based on diffusion-influenced reactions are studied. Using analytical and stochastic modeling, this paper focuses on the hybrid regime where diffusion and reaction are not truly separable. The time-dependent receptor autocorrelation functions are shown to be indexed by different time scales and their asymptotic expansions are given. Stochastic simulations show that this analysis can be extended to situations with a small number of molecules. It is also demonstrated that this analysis applies when receptor immobilization is coupled to environmental noise.

  3. Groundwater travel time uncertainty analysis: Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1984-12-01

    The deep basalt formations beneath the Hanford Site are being investigated for the Department of Energy (DOE) to assess their suitability as a host medium for a high level nuclear waste repository. Predicted performance of the proposed repository is an important part of the investigation. One of the performance measures being used to gauge the suitability of the host medium is pre-waste-emplacement groundwater travel times to the accessible environment. Many deterministic analyses of groundwater travel times have been completed by Rockwell and other independent organizations. Recently, Rockwell has completed a preliminary stochastic analysis of groundwater travel times. This document presents analyses that show the sensitivity of the results from the previous stochastic travel time study to: (1) scale of representation of model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross-correlation between transmissivity and effective thickness. 40 refs., 29 figs., 6 tabs

  4. Correlations between MRI and Information Processing Speed in MS: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    S. M. Rao

    2014-01-01

    Objectives. To examine relationships between conventional MRI measures and the paced auditory serial addition test (PASAT) and symbol digit modalities test (SDMT). Methods. A systematic literature review was conducted. Included studies had ≥30 multiple sclerosis (MS) patients, administered the SDMT or PASAT, and measured T2LV or brain atrophy. Meta-analysis of MRI/information processing speed (IPS) correlations, analysis of MRI/IPS significance tests to account for reporting bias, and binomial testing to detect trends when comparing correlation strengths of SDMT versus PASAT and T2LV versus atrophy were conducted. Results. The 39 studies identified frequently reported only significant correlations, suggesting reporting bias. Direct meta-analysis was only feasible for correlations between SDMT and T2LV (r = -0.45, P<0.001) and atrophy in patients with mixed-MS subtypes (r = -0.54, P<0.001). Familywise Holm-Bonferroni testing found that selective reporting was not the source of at least half of the significant results reported. Binomial tests (P = 0.006) favored SDMT over PASAT in strength of MRI correlations. Conclusions. A moderate-to-strong correlation exists between impaired IPS and MRI in mixed MS populations. Correlations with MRI were stronger for SDMT than for PASAT. Neither heterogeneity among populations nor reporting bias appeared to be responsible for these findings.
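
Meta-analytic pooling of correlation coefficients of this kind is commonly done on Fisher's z scale. The sketch below shows a fixed-effect pooled estimate with a 95% confidence interval; the study correlations and sample sizes are invented for illustration, not the review's actual data.

```python
import numpy as np

def pool_correlations(r, n):
    # Fixed-effect meta-analytic pooling of Pearson correlations on
    # Fisher's z scale, with inverse-variance weights w_i = n_i - 3
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                        # Fisher z transform
    w = n - 3.0
    z_bar = np.sum(w * z) / np.sum(w)        # weighted mean on the z scale
    se = 1.0 / np.sqrt(np.sum(w))
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    return float(np.tanh(z_bar)), (float(np.tanh(lo)), float(np.tanh(hi)))

# Hypothetical SDMT-vs-T2LV correlations from three studies (invented data)
r_pooled, ci = pool_correlations(r=[-0.40, -0.50, -0.45], n=[60, 120, 80])
```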

  5. Correlation of spacecraft thermal mathematical models to reference data

    Science.gov (United States)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and the test data, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method suitable for solving the TMM-to-test correlation problem is presented, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
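
A minimal sketch of the pseudo-inverse update described in the abstract, for a single load case. The two-parameter "TMM" below (a conductive coupling and a radiator emissivity) is invented for illustration; the paper's method additionally generalizes to several load cases and uses SVD to analyze singular Jacobians.

```python
import numpy as np

def correlate_tmm(residual_fn, p0, n_iter=20, eps=1e-6):
    # Gauss-Newton style TMM-to-test correlation: update the parameter
    # vector with the Moore-Penrose pseudo-inverse of the residual Jacobian
    p = np.asarray(p0, float).copy()
    for _ in range(n_iter):
        r = residual_fn(p)                      # model minus test temperatures
        J = np.empty((r.size, p.size))
        for j in range(p.size):                 # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual_fn(p + dp) - r) / eps
        p -= np.linalg.pinv(J) @ r              # pseudo-inverse update
    return p

# Toy 3-node model with two unknowns: a conductive coupling GL and a
# radiator emissivity (model shape and values are purely illustrative)
def model_T(p):
    GL, eps_rad = p
    return np.array([300.0 + 10.0 / GL,
                     300.0 - 40.0 * eps_rad,
                     300.0 - 60.0 * eps_rad / GL])

T_test = model_T([1.0, 0.8])                    # pretend test data
p_hat = correlate_tmm(lambda p: model_T(p) - T_test, p0=[0.5, 0.5])
```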

  6. Model tests and numerical analysis on restoring force characteristics of reactor buildings

    International Nuclear Information System (INIS)

    Uchiyama, Y.; Suzuki, S.; Akino, K.

    1987-01-01

    Seismic shear walls of nuclear reactor buildings are composed of cylindrical, truncated cone-shape, box-shape or irregular polygonal walls, or combinations thereof, and they are generally heavily reinforced concrete (RC) walls. The elasto-plastic behavior of these RC structures in the ultimate region involves many unsolved problems and may be considered an especially important factor in explaining the nonlinear response of nuclear reactor buildings. Following these research demands, the authors have prepared a nonlinear F.E.M. code called "SANREF" and made an extensive study of the restoring force characteristics of the inner concrete structures (I/C) of a PWR-type containment vessel and the principal seismic shear walls of a BWR-type reactor building through several series of reduced-scale model tests and simulation analyses of the test results. The detailed objectives of this study can be summarized as follows: (1) Examine the effects of the configurations of shear walls, reinforcement ratios, shear span ratios (M/Qd) and vertical axial stress by a "partial model test" which simulates independent shear walls of the PWR-type and BWR-type reactor buildings. (2) Obtain fundamental data on the restoring force characteristics of complex-shaped RC structures by a "composite model test" whose models are composed of the partial model test specimens. (3) Verify the applicability of the analytical methods and constitutive modelings in the SANREF code for complex-shaped RC structures through nonlinear simulation analysis of the composite model test.

  7. Acute toxicity of metals and reference toxicants to a freshwater ostracod, Cypris subglobosa Sowerby, 1840 and correlation to EC50 values of other test models

    International Nuclear Information System (INIS)

    Khangarot, B.S.; Das, Sangita

    2009-01-01

    A static bioassay with the ostracod Cypris subglobosa Sowerby, 1840, based on 48-h 50% immobilization (EC50), has been used to measure the toxicity of 36 metals and metalloids and 12 reference toxicants. Among the 36 metals and metalloids, osmium (Os) was found to be the most toxic in the test, while boron (B) was the least toxic. The EC50 values of this study revealed a positive linear relationship with those of the established test models of the cladoceran (Daphnia magna), sludge worm (Tubifex tubifex), chironomid larvae (Chironomus tentans), protozoan (Tetrahymena pyriformis), fathead minnow (Pimephales promelas), bluegill sunfish (Lepomis macrochirus), and the aquatic macrophyte duckweed (Lemna minor). Correlation coefficients (r²) for 17 physicochemical properties of metals or metal ions and EC50s (as pM) were examined by linear regression analysis. The electronegativity, ionization potential, melting point, solubility product of metal sulfides (pKsp), softness parameter and some other physicochemical characteristics were significantly correlated with the EC50s of metals for C. subglobosa. The reproducibility of the toxicity test was determined using 12 reference toxicants. The coefficient of variability of the EC50s ranged from 6.95% to 55.37%, and the variability was comparable to that noticed for D. magna and other aquatic test models. The study demonstrated the need to include crustacean ostracods in a battery of biotests to detect the presence of hazardous chemicals in soils, sewage sludges, sediments and aquatic systems.
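
The r² screening of physicochemical properties against EC50s can be sketched with a simple linear regression. The softness-parameter and pEC50 values below are invented for illustration; only the shape of the calculation follows the abstract.

```python
import numpy as np

def r_squared(x, y):
    # Coefficient of determination from a simple linear regression y ~ x
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - np.sum(resid**2) / np.sum((y - np.mean(y))**2)

# Hypothetical values: metal-ion softness parameter vs toxicity as pEC50 (pM)
softness = np.array([0.01, 0.04, 0.06, 0.08, 0.10, 0.13, 0.15])
pec50 = np.array([3.1, 3.8, 4.4, 4.6, 5.2, 5.9, 6.1])
r2 = r_squared(softness, pec50)
```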

  8. Nonparametric correlation models for portfolio allocation

    DEFF Research Database (Denmark)

    Aslanidis, Nektarios; Casas, Isabel

    2013-01-01

    This article proposes time-varying nonparametric and semiparametric estimators of the conditional cross-correlation matrix in the context of portfolio allocation. Simulation results show that the nonparametric and semiparametric models are best in DGPs with substantial variability or structural ...... currencies. Results show the nonparametric model generally dominates the others when evaluated in-sample. However, the semiparametric model is best for out-of-sample analysis.

  9. Hypothesis testing for differentially correlated features.

    Science.gov (United States)

    Sheng, Elisa; Witten, Daniela; Zhou, Xiao-Hua

    2016-10-01

    In a multivariate setting, we consider the task of identifying features whose correlations with the other features differ across conditions. Such correlation shifts may occur independently of mean shifts, or differences in the means of the individual features across conditions. Previous approaches for detecting correlation shifts consider features simultaneously, by computing a correlation-based test statistic for each feature. However, since correlations involve two features, such approaches do not lend themselves to identifying which feature is the culprit. In this article, we instead consider a serial testing approach, by comparing columns of the sample correlation matrix across two conditions, and removing one feature at a time. Our method provides a novel perspective and favorable empirical results compared with competing approaches. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
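
The column-comparison idea can be illustrated with a scoring step: compare each feature's column of the sample correlation matrix across the two conditions. The L2 column distance below is a simplification assumed for illustration; the paper wraps such comparisons in a serial testing procedure that removes one feature at a time.

```python
import numpy as np

def column_discrepancy(X1, X2):
    # Score each feature by the L2 norm of the difference between its
    # correlation-matrix column in condition 1 vs condition 2 (diagonal
    # excluded). The top-scoring feature is the candidate 'culprit'.
    R1 = np.corrcoef(X1, rowvar=False)
    R2 = np.corrcoef(X2, rowvar=False)
    D = R1 - R2
    np.fill_diagonal(D, 0.0)
    return np.linalg.norm(D, axis=0)

# Condition 1: feature 1 is a mix of features 0 and 2; condition 2: feature 1
# is decoupled, so it drives every correlation shift.
rng = np.random.default_rng(2)
n = 500
base = rng.standard_normal((n, 5))
X1 = base.copy()
X1[:, 1] = 0.7 * base[:, 0] + 0.7 * base[:, 2] + 0.3 * rng.standard_normal(n)
X2 = base.copy()
X2[:, 1] = rng.standard_normal(n)

scores = column_discrepancy(X1, X2)
culprit = int(np.argmax(scores))
```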

  10. A model relating Eulerian spatial and temporal velocity correlations

    Science.gov (United States)

    Cholemari, Murali R.; Arakeri, Jaywant H.

    2006-03-01

    In this paper we propose a model to relate Eulerian spatial and temporal velocity autocorrelations in homogeneous, isotropic and stationary turbulence. We model the decorrelation as the eddies of various scales becoming decorrelated. This enables us to connect the spatial and temporal separations required for a certain decorrelation through the ‘eddy scale’. Given either the spatial or the temporal velocity correlation, we obtain the ‘eddy scale’ and the rate at which the decorrelation proceeds. At any given value of the correlation, this yields a spatial separation from the temporal correlation and a temporal separation from the spatial correlation, thereby relating the two. We test the model using experimental data from a stationary axisymmetric turbulent flow with homogeneity along the axis.
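
The mapping between spatial and temporal separations at a fixed correlation level can be sketched numerically. The exponential correlation shapes and the scales L and T below are assumptions made for illustration, not the paper's model; the point is only that inverting both curves at the same correlation value links a spatial to a temporal separation through a scale.

```python
import numpy as np

# Illustrative exponential correlation curves (assumed shapes and scales)
sep_r = np.linspace(0.0, 5.0, 501)        # spatial separation
sep_t = np.linspace(0.0, 5.0, 501)        # temporal separation
L, T = 1.0, 2.0                           # assumed length and time scales
R_space = np.exp(-sep_r / L)
R_time = np.exp(-sep_t / T)

def separation_at(level, sep, R):
    # Invert a monotonically decaying correlation curve: the separation at
    # which the correlation drops to 'level' (np.interp needs ascending xp)
    return float(np.interp(level, R[::-1], sep[::-1]))

c = 0.5
r_c = separation_at(c, sep_r, R_space)    # spatial separation at correlation c
tau_c = separation_at(c, sep_t, R_time)   # temporal separation at correlation c
eddy_velocity = r_c / tau_c               # scale relating the two, here L/T
```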

  11. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    This paper focuses on the problems of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction-coefficient optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a Fuzzy Neural Network is presented to overcome the nonlinear mapping problems. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  12. Multifractal temporally weighted detrended cross-correlation analysis to quantify power-law cross-correlation and its application to stock markets

    Science.gov (United States)

    Wei, Yun-Lan; Yu, Zu-Guo; Zou, Hai-Long; Anh, Vo

    2017-06-01

    A new method—multifractal temporally weighted detrended cross-correlation analysis (MF-TWXDFA)—is proposed to investigate multifractal cross-correlations in this paper. This new method is based on multifractal temporally weighted detrended fluctuation analysis and multifractal cross-correlation analysis (MFCCA). An innovation of the method is applying geographically weighted regression to estimate local trends in the nonstationary time series. We also take into consideration the sign of the fluctuations in computing the corresponding detrended cross-covariance function. To test the performance of the MF-TWXDFA algorithm, we apply it and the MFCCA method on simulated and actual series. Numerical tests on artificially simulated series demonstrate that our method can accurately detect long-range cross-correlations for two simultaneously recorded series. To further show the utility of MF-TWXDFA, we apply it on time series from stock markets and find that power-law cross-correlation between stock returns is significantly multifractal. A new coefficient, MF-TWXDFA cross-correlation coefficient, is also defined to quantify the levels of cross-correlation between two time series.
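
As a baseline for the weighted multifractal method, the plain (unweighted, monofractal) DCCA cross-correlation coefficient can be sketched as follows. This is a deliberate simplification: it omits the geographically weighted local trends, the sign-aware covariance, and the q-order moments that define MF-TWXDFA.

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    # Plain DCCA cross-correlation coefficient rho(scale): detrended
    # covariance of the integrated profiles normalized by the two detrended
    # variances, with linear detrending in non-overlapping windows
    X = np.cumsum(x - np.mean(x))
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(scale)
    f_xy = f_xx = f_yy = 0.0
    for w in range(len(X) // scale):
        xs = X[w * scale:(w + 1) * scale]
        ys = Y[w * scale:(w + 1) * scale]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # remove local trend
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

# Two white-noise series sharing a common component (true correlation 0.8)
rng = np.random.default_rng(3)
common = rng.standard_normal(4096)
x = common + 0.5 * rng.standard_normal(4096)
y = common + 0.5 * rng.standard_normal(4096)
rho = dcca_coefficient(x, y, scale=64)
```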

  13. Analysis and test for space shuttle propellant dynamics (1/10th scale model test results). Volume 1: Technical discussion

    Science.gov (United States)

    Berry, R. L.; Tegart, J. R.; Demchak, L. J.

    1979-01-01

    Space shuttle propellant dynamics during ET/Orbiter separation in the RTLS (return to launch site) mission abort sequence were investigated in a test program conducted in the NASA KC-135 "Zero G" aircraft using a 1/10th-scale model of the ET LOX tank. Low-g parabolas were flown, from which thirty tests were selected for evaluation. Data on the nature of low-g propellant reorientation in the ET LOX tank, and measurements of the forces exerted on the tank by the moving propellant, will provide a basis for correlation with an analytical model of the slosh phenomenon.

  14. Windowed Multitaper Correlation Analysis of Multimodal Brain Monitoring Parameters

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Although multimodal monitoring sets the standard in the daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates cerebral perfusion and oxygen supply in case of severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows the parameters of the windowing method to be tuned so that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally, it could be shown that the occurrence of a distinct correlation parameter, called scp, is a high-quality predictor of patient outcome.

  15. Windowed multitaper correlation analysis of multimodal brain monitoring parameters.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Although multimodal monitoring sets the standard in the daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates cerebral perfusion and oxygen supply in case of severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows the parameters of the windowing method to be tuned so that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally, it could be shown that the occurrence of a distinct correlation parameter, called scp, is a high-quality predictor of patient outcome.
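    The Fourier-based coherence step inside one analysis window can be sketched with a Welch-style estimator: cross-spectra are averaged over sub-segments and normalized by the auto-spectra. The segment length and single rectangular taper below are simplifications; the multitaper estimator named in the title would instead average over several orthogonal tapers.

```python
import numpy as np

def coherence(x, y, seg_len=64):
    """Magnitude-squared coherence estimated by averaging FFT
    cross-spectra over non-overlapping segments (Welch-style).
    Averaging over several segments is essential: with a single
    segment the estimate is identically 1."""
    n = (len(x) // seg_len) * seg_len
    xs = np.asarray(x[:n], float).reshape(-1, seg_len)
    ys = np.asarray(y[:n], float).reshape(-1, seg_len)
    # demean each segment before transforming
    X = np.fft.rfft(xs - xs.mean(axis=1, keepdims=True), axis=1)
    Y = np.fft.rfft(ys - ys.mean(axis=1, keepdims=True), axis=1)
    Pxx = np.mean(np.abs(X) ** 2, axis=0)
    Pyy = np.mean(np.abs(Y) ** 2, axis=0)
    Pxy = np.mean(X * np.conj(Y), axis=0)
    return np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-12)  # small term guards empty bins
```

    For identical inputs the coherence is one at every frequency with power, while for independent signals it stays near 1/M, where M is the number of averaged segments.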

  16. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to permit power analysis without simulation, yet not so simple that it loses the information from the alternative hypothesis. The first step is to transform the distributions of different test statistics (e.g., t, chi-square or F) into distributions of the corresponding p-values. We then use a step function to approximate each p-value distribution by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. The model is then used to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight-loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
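    A minimal numerical sketch of the moment-matching idea: simulate one-sided z-test p-values under an assumed alternative, then fit a two-level step density on [0, 1] that reproduces their mean (exactly) and variance (as closely as a coarse breakpoint grid allows). The effect size, the two-step form and the grid search are illustrative assumptions, not the paper's exact construction.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# p-values of a one-sided z-test under an assumed alternative (effect size 2)
z = rng.normal(2.0, 1.0, size=50_000)
p = 1.0 - 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
m, v = p.mean(), p.var()

# Two-level step density: height h1 on [0, c], h2 on (c, 1].
# For each breakpoint c, solve the normalization and mean constraints for
# (h1, h2); keep the c whose implied variance best matches the empirical one.
best = None
for cand in np.linspace(0.01, 0.99, 99):
    A = np.array([[cand, 1.0 - cand],
                  [cand**2 / 2.0, (1.0 - cand**2) / 2.0]])
    h1, h2 = np.linalg.solve(A, [1.0, m])
    if h1 < 0.0 or h2 < 0.0:
        continue  # not a valid density
    var = h1 * cand**3 / 3.0 + h2 * (1.0 - cand**3) / 3.0 - m**2
    if best is None or abs(var - v) < best[0]:
        best = (abs(var - v), cand, h1, h2)

_, c, h1, h2 = best

def model_cdf(alpha):
    """P(p <= alpha) under the fitted step density, i.e. modeled power
    at significance threshold alpha."""
    return h1 * alpha if alpha <= c else h1 * c + h2 * (alpha - c)
```

    Once fitted, `model_cdf` gives power at any threshold in closed form, which is what makes step-function models convenient for comparing critical constants without further simulation.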

  17. Multifractal detrended cross-correlation analysis in the MENA area

    Science.gov (United States)

    El Alaoui, Marwane; Benbachir, Saâd

    2013-12-01

    In this paper, we investigated multifractal cross-correlations qualitatively and quantitatively, using a cross-correlation test and the multifractal detrended cross-correlation analysis (MF-DCCA) method, for markets in the MENA area. We used cross-correlation coefficients to measure the level of this correlation. The analysis concerns four stock-market indices of Morocco, Tunisia, Egypt and Jordan, countries that are signatories to the Agadir Agreement establishing a free trade area comprising Arab Mediterranean countries. We computed the bivariate generalized Hurst exponent, the Rényi exponent and the singularity spectrum for each pair of indices to measure the cross-correlations quantitatively. The results show the existence of multifractal cross-correlations between all of these markets. We compared the spectrum widths of these indices and identified which pairs of indices exhibit the strongest multifractal cross-correlations.

  18. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
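    The propagation-plus-sensitivity recipe (Monte Carlo sampling of parameters, then partial correlation of each parameter with model output) can be sketched generically. The toy log-linear "transport" model, the extra unmodelled-variability term, and all distributions below are illustrative stand-ins, not PATHWAY's.

```python
import numpy as np

def partial_corr(X, y):
    """Partial correlation of each column of X with y, controlling for
    the remaining columns: correlate the residuals of both after
    regressing out the other parameters."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
        ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(0)
n = 2000
lp = rng.normal(0.0, 0.5, size=(n, 3))           # log-parameters (Monte Carlo draws)
eps = rng.normal(0.0, 0.3, size=n)               # unmodelled variability (assumption)
# Toy log-output: driven strongly by the first two parameters, weakly by the third.
ly = lp[:, 0] + lp[:, 1] + 0.1 * lp[:, 2] + eps
pc = partial_corr(lp, ly)
```

    The same Monte Carlo sample serves both purposes, as in the abstract: the spread of `ly` quantifies output uncertainty, while `pc` ranks the parameters driving it.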

  19. Pre-analysis techniques applied to area-based correlation aiming Digital Terrain Model generation

    Directory of Open Access Journals (Sweden)

    Maurício Galo

    2005-12-01

    Full Text Available Area-based matching is a useful procedure in several photogrammetric processes, and its results are of crucial importance in applications such as relative orientation, phototriangulation and Digital Terrain Model generation. The successful determination of correspondence depends on radiometric and geometric factors. Considering these aspects, the use of procedures that estimate in advance the quality of the parameters to be computed is a relevant issue. This paper describes such procedures and shows that the quality prediction can be computed before performing matching by correlation, through analysis of the reference window. This procedure can be incorporated into the correspondence process for Digital Terrain Model generation and phototriangulation. The proposed approach comprises the estimation of the variance matrix of the translations from the gray levels in the reference window, and the reduction of the search space using knowledge of the epipolar geometry. As a consequence, the correlation process becomes more reliable, avoiding the application of matching procedures in doubtful areas. Experiments with simulated and real data are presented, demonstrating the efficiency of the proposed strategy.
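    Predicting the precision of the translation estimate from the reference window alone can be sketched with the classical gradient (structure-tensor) bound: the covariance of the least-squares shift estimate is approximately the image-noise variance times the inverse of the summed gradient outer products. This is a generic formulation under an assumed noise variance, not the paper's exact derivation.

```python
import numpy as np

def translation_covariance(window, noise_var=1.0):
    """Approximate covariance of the (dx, dy) shift obtainable by
    area-based matching, predicted from the reference window's
    gray-level gradients: cov ~ noise_var * inv(sum of grad outer
    products). noise_var is an assumed image-noise variance."""
    gy, gx = np.gradient(np.asarray(window, dtype=float))
    N = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return noise_var * np.linalg.inv(N)
```

    Windows with weak or one-directional texture yield a large (or ill-conditioned) covariance, flagging doubtful areas before any matching is attempted.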

  20. Acute toxicity of metals and reference toxicants to a freshwater ostracod, Cypris subglobosa Sowerby, 1840 and correlation to EC{sub 50} values of other test models

    Energy Technology Data Exchange (ETDEWEB)

    Khangarot, B.S., E-mail: bkhangarot@hotmail.com [Ecotoxicology Division, Indian Institute of Toxicology Research (Formerly: Industrial Toxicology Research Centre), Post Box No. 80, Mahatma Gandhi Marg, Lucknow 226001 (India); Das, Sangita [Ecotoxicology Division, Indian Institute of Toxicology Research (Formerly: Industrial Toxicology Research Centre), Post Box No. 80, Mahatma Gandhi Marg, Lucknow 226001 (India)

    2009-12-30

    A static bioassay with the ostracod Cypris subglobosa Sowerby, 1840, based on 50% immobilization after 48 h (48-h EC{sub 50}), was used to measure the toxicity of 36 metals and metalloids and 12 reference toxicants. Among the 36 metals and metalloids, osmium (Os) was the most toxic in the test and boron (B) the least toxic. The EC{sub 50} values of this study showed positive linear relationships with those of the established test models: the cladoceran (Daphnia magna), sludge worm (Tubifex tubifex), chironomid larvae (Chironomus tentans), protozoan (Tetrahymena pyriformis), fathead minnow (Pimephales promelas), bluegill sunfish (Lepomis macrochirus), and aquatic macrophyte duckweed (Lemna minor). Correlation coefficients (r{sup 2}) between 17 physicochemical properties of metals or metal ions and EC{sub 50}s (as pM) were examined by linear regression analysis. Electronegativity, ionization potential, melting point, solubility product of metal sulfides (pK{sub sp}), the softness parameter and some other physicochemical characteristics were significantly correlated with the EC{sub 50}s of metals to C. subglobosa. The reproducibility of the toxicity test was determined using 12 reference toxicants; the coefficient of variation of the EC{sub 50}s ranged from 6.95% to 55.37%, comparable to the variability observed for D. magna and other aquatic test models. The study demonstrates the need to include crustacean ostracods in a battery of biotests to detect the presence of hazardous chemicals in soils, sewage sludges, sediments and aquatic systems.

  1. Psychological determinants of erectile dysfunction: testing a cognitive-emotional model.

    Science.gov (United States)

    Nobre, Pedro J

    2010-04-01

    Recent studies have shown the impact of sexual dysfunctional beliefs, negative cognitive schemas, negative automatic thoughts, and depressed affect on male erectile dysfunction. Despite this, there are only a few conceptual models that try to integrate these findings and, more importantly, a lack of studies that test the validity of those models. The aim of the present article was to test a cognitive-emotional model for erectile dysfunction. Taking previous research findings into account, we developed a cognitive-emotional model for erectile disorder (ED) and used path analysis to test it. A total of 352 men (303 participants from the general population and 49 participants with a DSM-IV diagnosis of sexual dysfunction) answered a set of questionnaires assessing cognitive and emotional variables. Erectile function was measured by the EF subscale of the International Index of Erectile Function; cognitive schemas by the Questionnaire of Cognitive Schema Activation in Sexual Context; sexual beliefs by the Sexual Dysfunctional Beliefs Questionnaire; and thoughts and emotions by the Sexual Modes Questionnaire. The main proposed direct predictors explained 55% of the erectile function variance (R = 0.74). Most of the remaining direct effects proposed in the model were also statistically significant. Analysis of the absolute residuals showed that most of the implied correlations were close to the observed zero-order correlations, indicating the fit of the model to the observed data. These findings support the role played by cognitive and emotional factors in the predisposition to and maintenance of male erectile dysfunction and suggest important implications for the assessment and treatment of ED.

  2. Frequency of intron loss correlates with processed pseudogene abundance: a novel strategy to test the reverse transcriptase model of intron loss.

    Science.gov (United States)

    Zhu, Tao; Niu, Deng-Ke

    2013-03-05

    Although intron loss in evolution has been described, the mechanism involved is still unclear. Three models have been proposed, the reverse transcriptase (RT) model, genomic deletion model and double-strand-break repair model. The RT model, also termed mRNA-mediated intron loss, suggests that cDNA molecules reverse transcribed from spliced mRNA recombine with genomic DNA causing intron loss. Many studies have attempted to test this model based on its predictions, such as simultaneous loss of adjacent introns, 3'-side bias of intron loss, and germline expression of intron-lost genes. Evidence either supporting or opposing the model has been reported. The mechanism of intron loss proposed in the RT model shares the process of reverse transcription with the formation of processed pseudogenes. If the RT model is correct, genes that have produced more processed pseudogenes are more likely to undergo intron loss. In the present study, we observed that the frequency of intron loss is correlated with processed pseudogene abundance by analyzing a new dataset of intron loss obtained in mice and rats. Furthermore, we found that mRNA molecules of intron-lost genes are mostly translated on free cytoplasmic ribosomes, a feature shared by mRNA molecules of the parental genes of processed pseudogenes and long interspersed elements. This feature is likely convenient for intron-lost gene mRNA molecules to be reverse transcribed. Analyses of adjacent intron loss, 3'-side bias of intron loss, and germline expression of intron-lost genes also support the RT model. Compared with previous evidence, the correlation between the abundance of processed pseudogenes and intron loss frequency more directly supports the RT model of intron loss. Exploring such a correlation is a new strategy to test the RT model in organisms with abundant processed pseudogenes.

  3. Round Robin Posttest analysis of a 1/10-scale Steel Containment Vessel Model Test

    International Nuclear Information System (INIS)

    Komine, Kuniaki; Konno, Mutsuo

    1999-01-01

    NUPEC and the U.S. Nuclear Regulatory Commission (USNRC) have jointly sponsored the 'Structural Behavior Test' at Sandia National Laboratories (SNL) within the Cooperative Containment Research Program. One of these tests used a mixed-scale SCV model, 1/10 scale in geometry and 1/4 scale in shell thickness. Round Robin analyses of the 1/10-scale Steel Containment Vessel (SCV) model test were carried out to identify an adequate analytical method among seven organizations from five countries. As one of the sponsors, the Nuclear Power Engineering Corporation (NUPEC) played an important role by performing a posttest analysis of the SCV model. This paper describes NUPEC's analytical results in the round robin posttest analysis. (author)

  4. Deducing Electronic Unit Internal Response During a Vibration Test Using a Lumped Parameter Modeling Approach

    Science.gov (United States)

    Van Dyke, Michael B.

    2014-01-01

    During random vibration testing of electronic boxes there is often a desire to know the dynamic response of certain internal printed wiring boards (PWBs) for the purpose of monitoring the response of sensitive hardware or for post-test forensic analysis in support of anomaly investigation. Due to restrictions on internally mounted accelerometers for most flight hardware there is usually no means to empirically observe the internal dynamics of the unit, so one must resort to crude and highly uncertain approximations. One common practice is to apply Miles Equation, which does not account for the coupled response of the board in the chassis, resulting in significant over- or under-prediction. This paper explores the application of simple multiple-degree-of-freedom lumped parameter modeling to predict the coupled random vibration response of the PWBs in their fundamental modes of vibration. A simple tool using this approach could be used during or following a random vibration test to interpret vibration test data from a single external chassis measurement to deduce internal board dynamics by means of a rapid correlation analysis. Such a tool might also be useful in early design stages as a supplemental analysis to a more detailed finite element analysis to quickly prototype and analyze the dynamics of various design iterations. After developing the theoretical basis, a lumped parameter modeling approach is applied to an electronic unit for which both external and internal test vibration response measurements are available for direct comparison. Reasonable correlation of the results demonstrates the potential viability of such an approach. Further development of the preliminary approach presented in this paper will involve correlation with detailed finite element models and additional relevant test data.
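    The lumped-parameter idea can be sketched as a two-degree-of-freedom eigenproblem: chassis and board reduced to masses and springs, with the coupled natural frequencies obtained from the eigenvalues of M⁻¹K. All mass and stiffness values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative values: chassis on the shaker table and one PWB inside it,
# each reduced to a single mass and spring.
m1, m2 = 5.0, 0.2        # chassis and board masses, kg
k1, k2 = 2.0e6, 1.5e5    # chassis-to-table and board-to-chassis stiffness, N/m

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Undamped coupled natural frequencies from the eigenvalues of M^-1 K
evals = np.real(np.linalg.eigvals(np.linalg.solve(M, K)))
f_n = np.sort(np.sqrt(evals)) / (2.0 * np.pi)   # Hz
```

    With these illustrative numbers the coupled modes land near 97 Hz and 143 Hz, straddling the uncoupled board frequency sqrt(k2/m2)/(2π) ≈ 138 Hz; that shift is exactly the coupling effect a Miles-equation estimate ignores.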

  5. Correlation length estimation in a polycrystalline material model

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.

    2005-01-01

    This paper deals with the correlation length estimated from a mesoscopic model of a polycrystalline material. The correlation length can be used in some macroscopic material models as a material parameter that describes the internal length. It can be estimated directly from the strain and stress fields calculated with a finite-element model that explicitly accounts for selected mesoscopic features such as the random orientation, shape and size of the grains. A crystal plasticity material model was applied in the finite-element analysis. Different correlation lengths were obtained depending on the set of crystallographic orientations used. We found that the different sets of crystallographic orientations affect the overall level of the correlation length; however, as the external load is increased, the behaviour of the correlation length is similar in all the analyzed cases. The correlation lengths also changed with the macroscopic load. If the load is below the yield strength, the correlation lengths are constant and slightly larger than the average grain size; the correlation length can therefore be considered an indicator of the first plastic deformations in the material. Increasing the load above the yield strength creates shear bands that temporarily increase the correlation lengths calculated from the strain fields. With a further load increase the correlation lengths decrease slightly but stay above the average grain size. (author)
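    A generic sketch of how a correlation length can be read off a field: take the normalized autocorrelation of a 1-D profile and report the first lag at which it drops below 1/e. This is a common operational definition, not necessarily the paper's exact procedure; the AR(1) test field below is a synthetic stand-in for a strain profile.

```python
import numpy as np

def correlation_length(field, dx=1.0):
    """First lag at which the normalized autocorrelation of a 1-D
    field drops below 1/e, times the sample spacing dx."""
    f = np.asarray(field, dtype=float)
    f = f - f.mean()
    n = len(f)
    ac = np.correlate(f, f, mode='full')[n - 1:]  # one-sided autocorrelation
    ac = ac / ac[0]
    below = np.nonzero(ac < 1.0 / np.e)[0]
    return float(below[0] if below.size else n) * dx

# Synthetic 1-D "strain profile": an AR(1) field whose theoretical
# correlation length is L sample spacings.
rng = np.random.default_rng(0)
L, n = 20.0, 8000
a = np.exp(-1.0 / L)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.standard_normal()
est = correlation_length(x)
```

    The estimate recovers the generating length up to sampling noise, which illustrates why a field with grain-sized structure yields a correlation length near the average grain size.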

  6. Statistical tests for power-law cross-correlated processes

    Science.gov (United States)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1 ≤ ρDCCA(T,n) ≤ 1. Here we derive -1 ≤ ρDCCA(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine, and for nonoverlapping windows we derive, that the standard deviation of ρDCCA(T,n) tends to 1/T with increasing T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
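    The coefficient can be computed with a direct sketch of the standard DCCA recipe (non-overlapping boxes, linear detrending of the integrated series); the normalization by the two detrended variances is what keeps the coefficient inside the bounds discussed above.

```python
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient for window size n:
    detrended covariance of the integrated series over non-overlapping
    boxes, normalized by the two detrended variances."""
    X = np.cumsum(x - np.mean(x))   # profiles (integrated series)
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n)
    f2x = f2y = f2xy = 0.0
    for b in range(len(X) // n):
        xs, ys = X[b*n:(b+1)*n], Y[b*n:(b+1)*n]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # linear detrending
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2x += np.mean(rx * rx)
        f2y += np.mean(ry * ry)
        f2xy += np.mean(rx * ry)
    return f2xy / np.sqrt(f2x * f2y)
```

    For a series against itself the coefficient is exactly one, while for independent series it fluctuates around zero with a spread that shrinks as more boxes are averaged, which is the behavior the significance test above exploits.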

  7. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    Science.gov (United States)

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed-effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI, -0.03 to 0.32 D; p = 0.10). Using a mixed-effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) standard errors and smaller p-values, while analysis of the worse eye provided larger p-values than mixed-effects and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed-effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and to maximize power and precision.
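    The core problem can be reproduced with a short simulation: when fellow eyes share a patient effect with correlation ρ, the true variance of a mean over 2n eyes is inflated by (1 + ρ), which the naive independent-observations formula misses. All values below are illustrative, not data from the studies cited.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pat, rho, trials = 100, 0.6, 2000

means, naive_ses = [], []
for _ in range(trials):
    patient = rng.standard_normal(n_pat)                 # shared patient effect
    # two eyes per patient, each unit variance, inter-eye correlation rho
    eyes = (np.sqrt(rho) * patient[:, None]
            + np.sqrt(1.0 - rho) * rng.standard_normal((n_pat, 2)))
    flat = eyes.ravel()                                  # treat 2n eyes as independent
    means.append(flat.mean())
    naive_ses.append(flat.std(ddof=1) / np.sqrt(flat.size))

true_sd = float(np.std(means))          # actual sampling SD of the mean
naive_se = float(np.mean(naive_ses))    # SE ignoring inter-eye correlation
corrected_se = naive_se * np.sqrt(1.0 + rho)
```

    The naive SE is too small by the factor sqrt(1 + ρ), which is why ignoring the correlation yields spuriously narrow confidence intervals and small p-values, exactly as the abstract reports.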

  8. Development of the critical thickness correlation for an improvement of MARS code dryout model

    International Nuclear Information System (INIS)

    Jeon, J. H.; Lee, W. J.; Lee, E. C.

    2002-01-01

    Mechanistic film dryout analysis holds that the critical heat flux (CHF) arises when the liquid film, computed from evaporation, droplet entrainment and deposition, dries out. Dryout of the film is generally assumed to occur when the film thickness becomes zero. However, it has been shown that this complete-dryout assumption estimates CHF well for uniform heating but is inaccurate for non-uniform heating. The critical thickness concept is a physically reasonable alternative, because a very thin liquid film can disappear almost instantaneously. Therefore, the dryout phenomenon was modeled by introducing the critical thickness concept and developing a suitable critical thickness correlation. In this study, the MARS code and steady-state dryout experimental data were used to develop the critical thickness correlation. The code version including the new correlation was assessed against several dryout CHF tests under various conditions, including a non-uniform heating case and a flow-reduction transient test, and the results showed improved agreement with the experimental data.

  9. Reliability of Computerized Neurocognitive Tests for Concussion Assessment: A Meta-Analysis.

    Science.gov (United States)

    Farnsworth, James L; Dargo, Lucas; Ragan, Brian G; Kang, Minsoo

    2017-09-01

      Although widely used, computerized neurocognitive tests (CNTs) have been criticized because of low reliability and poor sensitivity. A systematic review was published summarizing the reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) scores; however, this was limited to a single CNT. Expansion of the previous review to include additional CNTs and a meta-analysis is needed. Therefore, our purpose was to analyze reliability data for CNTs using meta-analysis and examine moderating factors that may influence reliability.   A systematic literature search (key terms: reliability, computerized neurocognitive test, concussion) of electronic databases (MEDLINE, PubMed, Google Scholar, and SPORTDiscus) was conducted to identify relevant studies.   Studies were included if they met all of the following criteria: used a test-retest design, involved at least 1 CNT, provided sufficient statistical data to allow for effect-size calculation, and were published in English.   Two independent reviewers investigated each article to assess inclusion criteria. Eighteen studies involving 2674 participants were retained. Intraclass correlation coefficients were extracted to calculate effect sizes and determine overall reliability. The Fisher Z transformation adjusted for sampling error associated with averaging correlations. Moderator analyses were conducted to evaluate the effects of the length of the test-retest interval, intraclass correlation coefficient model selection, participant demographics, and study design on reliability. Heterogeneity was evaluated using the Cochran Q statistic.   The proportion of acceptable outcomes was greatest for the Axon Sports CogState Test (75%) and lowest for the ImPACT (25%). Moderator analyses indicated that the type of intraclass correlation coefficient model used significantly influenced effect-size estimates, accounting for 17% of the variation in reliability.   The Axon Sports CogState Test, which
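    The Fisher Z step described above (averaging correlations while correcting for the sampling error of raw averaging) can be sketched as inverse-variance weighting of transformed coefficients. The study ICCs and sample sizes below are illustrative, not values from the meta-analysis.

```python
import numpy as np

def pool_iccs(iccs, ns):
    """Pool reliability coefficients across studies via Fisher's z
    transform, weighting each study by n - 3 (the inverse variance of
    z for a correlation from n subjects), then back-transform."""
    z = np.arctanh(np.asarray(iccs, dtype=float))
    w = np.asarray(ns, dtype=float) - 3.0
    return float(np.tanh(np.sum(w * z) / np.sum(w)))

pooled = pool_iccs([0.66, 0.81, 0.74], [60, 45, 120])  # illustrative studies
```

    Averaging in z-space avoids the downward bias of directly averaging bounded correlations, and the n - 3 weights give larger studies proportionally more influence.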

  10. Correlation with liver scintigram, reticuloendothelial function test, plasma endotoxin level and liver function tests in chronic liver diseases. Multivariate analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ohmoto, Kenji; Yamamoto, Shinichi; Ideguchi, Seiji and others

    1989-02-01

    Liver scintigrams with Tc-99m phytate were reviewed in a total of 64 consecutive patients, comprising 28 with chronic hepatitis and 36 with liver cirrhosis. Reticuloendothelial system (RES) function, plasma endotoxin (Et) levels and findings of general liver function tests were used as reference parameters to determine the diagnostic ability of liver scintigraphy. Multivariate analyses revealed that liver scintigrams had a strong correlation with RES function and Et levels in terms of liver morphology and hepatic and bone marrow Tc-99m uptake. General liver function tests revealed gamma globulin to be correlated with hepatic uptake and the degree of splenomegaly on liver scintigrams, and ICG levels at 15 min to be correlated with bone marrow and splenic uptake. The accuracy of liver scintigraphy was 73% for chronic hepatitis, inferior to general liver function tests (83%). When both modalities were combined, diagnostic accuracy increased to 95%. Liver scintigraphy therefore seems useful as a complementary approach. (Namekawa, K).

  11. Round Robin Posttest analysis of a 1/10-scale Steel Containment Vessel Model Test

    Energy Technology Data Exchange (ETDEWEB)

    Komine, Kuniaki [Nuclear Power Engineering Corp., Tokyo (Japan); Konno, Mutsuo

    1999-07-01

    NUPEC and the U.S. Nuclear Regulatory Commission (USNRC) have jointly sponsored the 'Structural Behavior Test' at Sandia National Laboratories (SNL) within the Cooperative Containment Research Program. One of these tests used a mixed-scale SCV model, 1/10 scale in geometry and 1/4 scale in shell thickness. Round Robin analyses of the 1/10-scale Steel Containment Vessel (SCV) model test were carried out to identify an adequate analytical method among seven organizations from five countries. As one of the sponsors, the Nuclear Power Engineering Corporation (NUPEC) played an important role by performing a posttest analysis of the SCV model. This paper describes NUPEC's analytical results in the round robin posttest analysis. (author)

  12. IFCI 7.0 Models and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Reed, A.W.; Schmidt, R.C.; Young, M.F.

    1999-05-01

    The Integrated Fuel-Coolant Interaction Code (IFCI) is a best-estimate computer program for analysis of phenomena related to mixing of molten nuclear reactor core material with reactor coolant (water). The stand-alone version of the code, IFCI 7.0, has been designed for analysis of small- and intermediate-scale experiments in order to gain insight into the physics (including scaling effects) of molten fuel-coolant interactions. The code's methods, models, and correlations are being assessed. This report describes the flow regime, friction factor, and heat-transfer models used in the current version of IFCI (IFCI 7.0).

  13. IFCI 7.0 Models and Correlations

    International Nuclear Information System (INIS)

    Reed, A.W.; Schmidt, R.C.; Young, M.F.

    1999-01-01

    The Integrated Fuel-Coolant Interaction Code (IFCI) is a best-estimate computer program for analysis of phenomena related to mixing of molten nuclear reactor core material with reactor coolant (water). The stand-alone version of the code, IFCI 7.0, has been designed for analysis of small- and intermediate-scale experiments in order to gain insight into the physics (including scaling effects) of molten fuel-coolant interactions. The code's methods, models, and correlations are being assessed. This report describes the flow regime, friction factor, and heat-transfer models used in the current version of IFCI (IFCI 7.0)

  14. Kinetic analysis of dynamic 18F-fluoromisonidazole PET correlates with radiation treatment outcome in head-and-neck cancer

    Directory of Open Access Journals (Sweden)

    Paulsen Frank

    2005-12-01

    Full Text Available Abstract Background Hypoxia compromises local control in patients with head-and-neck cancer (HNC). In order to determine the value of [18F]-fluoromisonidazole (Fmiso) with regard to tumor hypoxia, a patient study with dynamic Fmiso PET was performed. For a better understanding of tracer uptake and distribution, a kinetic model was developed to analyze dynamic Fmiso PET data. Methods For 15 HNC patients, dynamic Fmiso PET examinations were performed prior to radiotherapy (RT). The data were analyzed using a two-compartment model, which allows the determination of characteristic hypoxia and perfusion values. For different parameters, such as patient age, tumor size and standardized uptake value, the correlation with treatment outcome was tested using the Wilcoxon-Mann-Whitney U-test. Statistical tests were also performed for the hypoxia and perfusion parameters determined by the kinetic model and for two different metrics based on these parameters. Results The kinetic Fmiso analysis extracts local hypoxia and perfusion characteristics of the tumor tissue; these parameters are independent quantities. In this study, different types of characteristic hypoxia-perfusion patterns in tumors could be identified. Clinical verification of the results obtained with the kinetic analysis showed a high correlation between hypoxia-perfusion patterns and RT outcome (p = 0.001) for this initial patient group. Conclusion The presented study establishes that Fmiso PET scans may benefit from dynamic acquisition and analysis by a kinetic model. The pattern of distribution of perfusion and hypoxia in the tissue is correlated with local control in HNC.

  15. Seismic Response Analysis and Test of 1/8 Scale Model for a Spent Fuel Storage Cask

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Han; Park, C. G.; Koo, G. H.; Seo, G. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yeom, S. H. [Chungnam Univ., Daejeon (Korea, Republic of); Choi, B. I.; Cho, Y. D. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2005-07-15

    Seismic response tests of a 1/8-scale spent fuel dry storage cask model are performed for the typical 1940 El Centro and Kobe earthquakes. This report first focuses on the data generated by seismic response tests of a free-standing storage cask model, to check the overturning possibility of a storage cask and the slipping displacement on a concrete slab bed. Variations in seismic load magnitude and cask/bed interface friction are considered in the tests. The test results show that the model gives an overturning response only for an extreme condition. A FEM model is built for the 1/8-scale spent fuel dry storage cask test model using the 3D contact conditions available in ABAQUS/Explicit. The input load for this analysis is the El Centro earthquake, and the friction coefficients are obtained from the test results. The penalty and kinematic contact methods of ABAQUS are used for the mechanical contact formulation. The analysis method was verified against the rocking angles obtained in the seismic response tests. The kinematic contact method with an adequate normal contact stiffness showed good agreement with the tests. Based on the analysis method established for the 1/8-scale model, seismic response analyses of a full-scale model are performed for design and beyond-design seismic loads.

  16. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method at an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
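    The windowing idea behind the method can be sketched minimally: slide a fixed-length window over simultaneously sampled ABP and ICP and keep the windows whose correlation exceeds a threshold. The published method uses Fourier-based coherence; the plain Pearson correlation, window length, and threshold below are simplifying assumptions for illustration.

```python
import numpy as np

def windowed_correlations(abp, icp, win, step):
    """Pearson correlation of ABP vs ICP in sliding windows of length `win`."""
    out = []
    for start in range(0, len(abp) - win + 1, step):
        r = np.corrcoef(abp[start:start + win], icp[start:start + win])[0, 1]
        out.append((start, r))
    return out

# Synthetic signals: ICP follows ABP only in the second half of the record,
# mimicking an episode of impaired autoregulation.
rng = np.random.default_rng(0)
t = np.arange(3000)
abp = 80 + 5 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 1, t.size)
icp = 10 + rng.normal(0, 1, t.size)
icp[1500:] += 0.8 * (abp[1500:] - 80)      # ABP drives ICP in the second half

corrs = windowed_correlations(abp, icp, win=300, step=150)
selected = [(s, r) for s, r in corrs if r > 0.6]   # "selected correlation" windows
```

    Only the windows overlapping the impaired-autoregulation episode should be selected; the parameter-optimization step in the paper amounts to tuning `win`, `step`, and the threshold against patient outcome.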

  17. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method at an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  18. Failure analysis based on microvoid growth for sheet metal during uniaxial and biaxial tensile tests

    International Nuclear Information System (INIS)

    Abbassi, Fethi; Mistou, Sebastien; Zghal, Ali

    2013-01-01

    Highlights: ► Cruciform specimen designed and biaxial tensile test carried out. ► Stereo image correlation technique is used for 3D full-field measurements. ► SEM fractography analysis is used to explain the fracture mechanism. ► Constitutive modeling of the necking phenomenon was developed using the GTN model. - Abstract: The aim of the presented investigations is to perform an analysis of fracture and instability during simple and complex load testing by addressing the influence of ductile damage evolution in necking processes. In this context, an improved experimental methodology was developed and successfully used to evaluate localization of deformation during uniaxial and biaxial tensile tests. The biaxial tensile tests are carried out on cruciform specimens loaded in a biaxial testing machine. In this experimental investigation, the stereo image correlation technique is used to produce the heterogeneous deformation map on the specimen surface. Scanning electron microscopy is used to evaluate the fracture mechanism and the micro-void growth. Finite element models of the uniaxial and biaxial tensile tests are developed, where the ductile damage model of Gurson–Tvergaard–Needleman (GTN) is used to describe material deformation involving damage evolution. Comparison between the experimental and simulation results shows the accuracy of the finite element model in predicting the instability phenomenon. The advanced measurement techniques contribute to a better understanding of the ductile fracture mechanism.

  19. On the problem of nonsense correlations in allergological tests after routine extraction.

    Science.gov (United States)

    Rijckaert, G

    1981-01-01

    The influence of the extraction procedures and culturing methods of material used for the preparation of allergenic extracts on the correlation patterns found in allergological testing (skin test and RAST) was investigated. In our laboratory a short extraction procedure performed at 0 °C was used for Aspergillus repens, A. penicilloides, Wallemia sebi, their rearing media and non-inoculated medium. For the commercially available extracts from house dust, house-dust mite, pollen of Dactylus glomerata and A. penicilloides a longer procedure (several days) performed at room temperature was used. Statistical analysis showed a separation of all test results into two clusters, each cluster being composed of correlations between extracts from only one extraction procedure; extracts across the two procedures did not show any correlation. The correlations found between the short-incubation extracts of the xerophilic fungi and their rearing media could be explained by genetic and biochemical relationships between these fungi depending on ecological conditions. However, while the correlation found between house dust and house-dust mite is understandable, correlations found between long-incubation extracts from house-dust mite and D. glomerata or A. penicilloides may be nonsense correlations that do not adequately describe the in vivo situation. The similarity of these extracts is presumably artificially created during extraction.

  20. Fuselage Versus Subcomponent Panel Response Correlation Based on ABAQUS Explicit Progressive Damage Analysis Tools

    Science.gov (United States)

    Gould, Kevin E.; Satyanarayana, Arunkumar; Bogert, Philip B.

    2016-01-01

    Analysis performed in this study substantiates the need for high-fidelity, vehicle-level progressive damage analysis (PDA) structural models for use in the verification and validation of proposed sub-scale structural models and to support required full-scale vehicle-level testing. PDA results are presented that capture and correlate the responses of sub-scale 3-stringer and 7-stringer panel models and an idealized 8-ft diameter fuselage model, which provides a vehicle-level environment for the 7-stringer sub-scale panel model. Two unique skin-stringer attachment assumptions are considered and correlated in the models analyzed: the TIE constraint interface versus the cohesive element (COH3D8) interface. Evaluating different interfaces allows for assessing a range of predicted damage modes, including delamination and crack propagation responses. Damage models considered in this study are the ABAQUS built-in Hashin procedure and the COmplete STress Reduction (COSTR) damage procedure implemented through a VUMAT user subroutine using the ABAQUS/Explicit code.

  1. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    International Nuclear Information System (INIS)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables

  2. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
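    The core statistical point can be illustrated with synthetic data (not the study's data): when each subject contributes several conditions, large between-subject baseline differences can swamp the within-subject relationship, so a pooled Pearson r and a subject-demeaned r disagree. A random-intercept mixed-effects model handles this; the demeaning below is a minimal numpy stand-in for that random intercept.

```python
import numpy as np

# Each "subject" contributes n_cond listening conditions. y depends on x
# within subject (slope -2), plus a large subject-specific baseline.
rng = np.random.default_rng(42)
n_subj, n_cond = 20, 4
baseline = rng.normal(0, 10, n_subj)            # between-subject offsets
x = rng.normal(0, 1, (n_subj, n_cond))          # neural measure
y = baseline[:, None] - 2.0 * x + rng.normal(0, 0.5, (n_subj, n_cond))

# Pooled Pearson treats all observations as independent: baseline variance
# dilutes the within-subject effect.
pooled_r = np.corrcoef(x.ravel(), y.ravel())[0, 1]

# Removing each subject's mean mimics a random-intercept model and
# recovers the strong within-subject relationship.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
within_r = np.corrcoef(xd.ravel(), yd.ravel())[0, 1]
```

    Here `pooled_r` is weak while `within_r` is strongly negative, which is the failure mode the paper attributes to applying Pearson correlation to repeated-measures data.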

  3. Correlates of STI testing among vocational school students in the Netherlands

    Directory of Open Access Journals (Sweden)

    Mackenbach Johan P

    2010-11-01

    Full Text Available Abstract Background Adolescents are at risk for acquiring sexually transmitted infections (STIs). However, test rates among adolescents in the Netherlands are low and effective interventions that encourage STI testing are scarce. Adolescents who attend vocational schools are particularly at risk for STI. The purpose of this study is to inform the development of motivational health promotion messages by identifying the psychosocial correlates of STI testing intention among adolescents with sexual experience attending vocational schools. Methods This study was conducted among 501 students attending vocational schools, aged 16 to 25 years (mean 18.3 ± 2.1 years). Data were collected via a web-based survey exploring relationships, sexual behavior and STI testing behavior. Items measuring the psychosocial correlates of testing were derived from Fishbein's Integrative Model. Data were subjected to multiple regression analyses. Results Students reported substantial sexual risk behavior and low intention to participate in STI testing. The model explained 39% of the variance in intention to engage in STI testing. The most important predictor was attitude. Perceived norms, perceived susceptibility and test site characteristics were also significant predictors. Conclusions The present study provides important and relevant empirical input for the development of health promotion interventions aimed at motivating adolescents at vocational schools in the Netherlands to participate in STI testing. Health promotion interventions developed for this group should aim to change attitudes, address social norms and increase personal risk perception for STI while also promoting the accessibility of testing facilities.

  4. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  5. Statistical correlation of structural mode shapes from test measurements and NASTRAN analytical values

    Science.gov (United States)

    Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.

    1983-01-01

    The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.

  6. Dynamical analysis of a PWR internals using super-elements in an integrated 3-D model. Part 2: dynamical tests and seismic analysis

    International Nuclear Information System (INIS)

    Jesus Miranda, C.A. de.

    1992-01-01

    The results of the test analysis (frequencies) for the isolated super-elements and for the developed 3-D model of the internals core support structures of a PWR research reactor are presented. Once the effectiveness of the model for this type of analysis was confirmed, the seismic spectral analysis was performed. The results show that the structures are rigid under this load, both isolated and together with the others in the 3-D model, and that there are no impacts among them during the earthquake (OBE). (author)

  7. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria like the AIC.
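    The information-criterion approach can be sketched on a toy levelling-style example (synthetic numbers, not the Delft data set): fit a "no deformation" null model and an alternative with an offset between two epochs, and pick the model with the lower AIC, using the Gaussian-error form AIC = n·ln(RSS/n) + 2k with k estimated parameters.

```python
import numpy as np

def aic(rss, n, k):
    """AIC for least-squares fits with Gaussian errors (up to a constant)."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
n = 30
epoch = np.repeat([0, 1], n // 2)                     # two measurement epochs
# Toy heights: a 20 mm shift between epochs, 5 mm measurement noise
h = 100.0 + 0.02 * epoch + rng.normal(0, 0.005, n)

# Null model: one common height (k = 1 parameter)
rss0 = np.sum((h - h.mean()) ** 2)

# Alternative: a separate height per epoch (k = 2 parameters)
m0, m1 = h[epoch == 0].mean(), h[epoch == 1].mean()
rss1 = np.sum((h[epoch == 0] - m0) ** 2) + np.sum((h[epoch == 1] - m1) ** 2)

aic_null, aic_alt = aic(rss0, n, k=1), aic(rss1, n, k=2)
deformed = aic_alt < aic_null          # select the lower-AIC model
```

    Unlike a hypothesis test, no decision error rate has to be chosen here: the 2k term penalizes the extra parameter, and the alternative wins only when the fit improvement outweighs that penalty.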

  8. TACT: A Set of MSC/PATRAN- and MSC/NASTRAN-based Modal Correlation Tools

    Science.gov (United States)

    Marlowe, Jill M.; Dixon, Genevieve D.

    1998-01-01

    This paper describes the functionality and demonstrates the utility of the Test Analysis Correlation Tools (TACT), a suite of MSC/PATRAN Command Language (PCL) tools which automate the process of correlating finite element models to modal survey test data. The initial release of TACT provides a basic yet complete set of tools for performing correlation entirely inside the PATRAN/NASTRAN environment. Features include a step-by-step menu structure, pre-test accelerometer set evaluation and selection, analysis and test result export/import in Universal File Format, calculation of frequency percent difference and cross-orthogonality correlation results using NASTRAN, creation and manipulation of mode pairs, and five different ways of viewing synchronized animations of analysis and test modal results. For the PATRAN-based analyst, TACT eliminates the repetitive, time-consuming and error-prone steps associated with transferring finite element data to a third-party modal correlation package, which allows the analyst to spend more time on the more challenging task of model updating. The usefulness of this software is presented using a case history, the correlation for a NASA Langley Research Center (LaRC) low-aspect-ratio research wind tunnel model. To demonstrate the improvements that TACT offers the MSC/PATRAN- and MSC/NASTRAN-based structural analysis community, a comparison of the modal correlation process using TACT within PATRAN versus external third-party modal correlation packages is presented.
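    TACT computes its cross-orthogonality results inside NASTRAN; as a hedged, tool-independent sketch, the two standard test/analysis mode-shape correlation measures can be written in a few lines of numpy. The toy 4-DOF example and the identity mass matrix are assumptions for illustration only.

```python
import numpy as np

def mac(phi_t, phi_a):
    """Modal Assurance Criterion between test (phi_t) and analysis (phi_a)
    mode shapes stored as columns; near 1 on the diagonal = good match."""
    num = np.abs(phi_t.T @ phi_a) ** 2
    den = np.outer(np.sum(phi_t**2, axis=0), np.sum(phi_a**2, axis=0))
    return num / den

def cross_orthogonality(phi_t, phi_a, m):
    """Mass-weighted cross-orthogonality using a TAM-reduced mass matrix m."""
    return phi_t.T @ m @ phi_a

# Toy 4-DOF case: analysis modes with orthonormal columns, and "test"
# modes equal to the analysis modes plus a small perturbation.
rng = np.random.default_rng(7)
phi_a = np.linalg.qr(rng.normal(size=(4, 3)))[0]
phi_t = phi_a + 0.02 * rng.normal(size=(4, 3))

M = np.eye(4)                       # identity mass matrix for this sketch
mac_matrix = mac(phi_t, phi_a)
xor_matrix = cross_orthogonality(phi_t, phi_a, M)
```

    A well-correlated model shows MAC diagonal terms close to 1 and off-diagonal terms close to 0, which is the acceptance pattern a tool like TACT reports alongside frequency percent differences.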

  9. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
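    One concrete instance of the correlation structures discussed above is the Beta-binomial model: defaults X_1..X_n are Bernoulli(p) given a common p ~ Beta(a, b), and the pairwise correlation of exchangeable Bernoulli variables reduces to the closed form ρ = 1/(a + b + 1). The Monte Carlo check below, with illustrative parameter values, verifies this identity numerically.

```python
import numpy as np

def beta_binomial_corr(a, b):
    """Pairwise correlation of exchangeable Bernoullis mixed over Beta(a, b):
    Cov(Xi, Xj) = Var(p) and Var(Xi) = E[p](1 - E[p]) give rho = 1/(a+b+1)."""
    return 1.0 / (a + b + 1.0)

def mc_pairwise_corr(a, b, n_trials=200_000, seed=3):
    """Monte Carlo estimate: draw a common p, then two conditionally
    independent Bernoulli(p) variables, and correlate them."""
    rng = np.random.default_rng(seed)
    p = rng.beta(a, b, n_trials)
    x1 = (rng.random(n_trials) < p).astype(float)
    x2 = (rng.random(n_trials) < p).astype(float)   # same p => exchangeable
    return np.corrcoef(x1, x2)[0, 1]

a, b = 2.0, 5.0
rho_exact = beta_binomial_corr(a, b)    # 1/(2+5+1) = 0.125
rho_mc = mc_pairwise_corr(a, b)
```

    The mixing distribution of p is what the paper calls the correlation structure: heavier-tailed or more dispersed mixing (smaller a + b) produces stronger default correlation.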

  10. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution

    OpenAIRE

    Han, Fang; Liu, Han

    2016-01-01

    Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions. As a robust alternative, Han and Liu [J. Am. Stat. Assoc. 109 (2015) 275-2...

  11. Development of local TDC model in core thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Kwon, H.S.; Park, J.R.; Hwang, D.H.; Lee, S.K.

    2004-01-01

    The local TDC model, consisting of natural-mixing and forced-mixing parts, was developed to obtain more realistic local fluid properties in the core subchannel analysis. To evaluate the performance of the local TDC model, the CHF prediction capability was tested with various CHF correlations and the local fluid properties at the CHF location based on the local TDC model. The results show that the standard deviation of the measured-to-predicted CHF ratio (M/P) based on the local TDC model can be reduced by about 7% compared to that based on the global TDC model when the CHF correlation has no term to account for the distance from the spacer grid. (author)

  12. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  13. TU-G-204-06: Correlation Between Texture Analysis-Based Model Observer and Human Observer in Diagnosis of Ischemic Infarct in Non-Contrast Head CT of Adults

    International Nuclear Information System (INIS)

    Li, B; Fujita, A; Buch, K; Sakai, O

    2015-01-01

    Purpose: To investigate the correlation between texture analysis-based model observer and human observer in the task of diagnosis of ischemic infarct in non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) of various levels (10–100%). Six neuroradiologists reviewed each image and scored image quality for diagnosing acute infarcts by a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuroradiologist placed multiple ROIs over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix that separate the different confidence scores most efficiently. The 15 features with the highest Fisher coefficient were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model predicted scores and human observers were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residual were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For diagnosis of ischemic infarct in non-contrast head CT in adults, the predicted image quality scores from texture analysis-based model observer was highly correlated with that of human observers for various noise levels. Texture-based model observer can characterize image quality of low contrast

  14. TU-G-204-06: Correlation Between Texture Analysis-Based Model Observer and Human Observer in Diagnosis of Ischemic Infarct in Non-Contrast Head CT of Adults

    Energy Technology Data Exchange (ETDEWEB)

    Li, B; Fujita, A; Buch, K; Sakai, O [Boston University Medical Center, Boston, MA (United States)

    2015-06-15

    Purpose: To investigate the correlation between texture analysis-based model observer and human observer in the task of diagnosis of ischemic infarct in non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) of various levels (10–100%). Six neuroradiologists reviewed each image and scored image quality for diagnosing acute infarcts by a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuroradiologist placed multiple ROIs over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix that separate the different confidence scores most efficiently. The 15 features with the highest Fisher coefficient were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model predicted scores and human observers were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residual were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For diagnosis of ischemic infarct in non-contrast head CT in adults, the predicted image quality scores from texture analysis-based model observer was highly correlated with that of human observers for various noise levels. Texture-based model observer can characterize image quality of low contrast

  15. Multi-model Analysis of Diffusion-weighted Imaging of Normal Testes at 3.0 T: Preliminary Findings.

    Science.gov (United States)

    Min, Xiangde; Feng, Zhaoyan; Wang, Liang; Cai, Jie; Li, Basen; Ke, Zan; Zhang, Peipei; You, Huijuan; Yan, Xu

    2018-04-01

    This study aimed to establish diffusion quantitative parameters (apparent diffusion coefficient [ADC], DDC, α, Dapp, and Kapp) in normal testes at 3.0 T. Sixty-four healthy volunteers in two age groups (A: 10-39 years; B: ≥40 years) underwent diffusion-weighted imaging scanning at 3.0 T. ADC1000, ADC2000, ADC3000, DDC, α, Dapp, and Kapp were calculated using the mono-exponential, stretched-exponential, and kurtosis models. The correlations between the parameters and age were analyzed. The parameters were compared between the age groups and between the right and the left testes. The average ADC1000, ADC2000, ADC3000, DDC, α, Dapp, and Kapp values did not significantly differ between the right and the left testes (P > .05 for all). The following significant correlations were found: positive correlations between age and testicular ADC1000, ADC2000, ADC3000, DDC, and Dapp (r = 0.516, 0.518, 0.518, 0.521, and 0.516, respectively; P < .01 for all) and negative correlations between age and testicular α and Kapp (r = -0.363 and -0.427, respectively; P < .01 for both). Compared to group B, in group A, ADC1000, ADC2000, ADC3000, DDC, and Dapp were significantly lower (P < .05 for all), but α and Kapp were significantly higher (P < .05 for both). Our study demonstrated the applicability of the testicular mono-exponential, stretched-exponential, and kurtosis models. Our results can help establish a baseline for the normal testicular parameters in these diffusion models. The contralateral normal testis can serve as a suitable reference for evaluating the abnormalities of the other side. The effect of age on these parameters requires further attention.

  16. International Space Station Future Correlation Analysis Improvements

    Science.gov (United States)

    Laible, Michael R.; Pinnamaneni, Murthy; Sugavanam, Sujatha; Grygier, Michael

    2018-01-01

    Ongoing modal analyses and model correlation are performed on different configurations of the International Space Station (ISS). These analyses utilize on-orbit dynamic measurements collected using four main ISS instrumentation systems: External Wireless Instrumentation System (EWIS), Internal Wireless Instrumentation System (IWIS), Space Acceleration Measurement System (SAMS), and Structural Dynamic Measurement System (SDMS). Remote Sensor Units (RSUs) are network relay stations that acquire flight data from sensors. Measured data is stored in the RSU until it receives a command to download the data via RF to the Network Control Unit (NCU). Since each RSU has its own clock, it is necessary to synchronize measurements before analysis. Imprecise synchronization impacts analysis results. A study was performed to evaluate three different synchronization techniques: (i) measurements visually aligned to analytical time-response data using model comparison, (ii) Frequency Domain Decomposition (FDD), and (iii) lag from cross-correlation to align measurements. This paper presents the results of this study.
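    Synchronization technique (iii) can be sketched with numpy on synthetic data: estimate the clock offset between two sensor records by locating the peak of their cross-correlation, then shift one record to align. The signal and the 37-sample offset below are illustrative assumptions, not flight data.

```python
import numpy as np

def estimate_lag(a, b):
    """Lag s (in samples) such that a[n] ~= b[n - s], found as the peak of
    the full cross-correlation of the mean-removed signals."""
    a = a - a.mean()
    b = b - b.mean()
    xc = np.correlate(a, b, mode="full")
    return np.argmax(xc) - (len(b) - 1)

# Two RSU-like records of the same underlying structural response, with
# one unit's clock offset by 37 samples.
rng = np.random.default_rng(5)
true_lag = 37
sig = rng.normal(0, 1, 2000)
sensor_a = sig[:len(sig) - true_lag]   # unit A record
sensor_b = sig[true_lag:]              # unit B record, clock runs early

lag = estimate_lag(sensor_a, sensor_b)
aligned_b = np.roll(sensor_b, lag)     # shift B onto A's time base
```

    Once the lag is known, one record is shifted onto the other's time base before the modal analysis; the visual-alignment and FDD techniques in the study serve the same purpose by different means.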

  17. Dynamic analysis of the PEC fast reactor vessel: on-site tests and mathematical models

    International Nuclear Information System (INIS)

    Zola, Maurizio; Martelli, Alessandro; Maresca, Giuseppe; Masoni, Paolo; Scandola, Giani; Descleves, Pierre

    1988-01-01

    This paper presents the main features and results of the on-site dynamic tests and the related numerical analysis carried out for the PEC reactor vessel. The purpose is to provide an example of on-site testing of large components, stressing the problems encountered during the experiments, as well as in the processing phase of the test results and in the comparisons between calculations and measurements. Tests, performed by ISMES on behalf of ENEA, allowed the dynamic response of the empty vessel to be measured, thus providing data for the verification of the numerical models of the vessel supporting structure adopted in the PEC reactor-block seismic analysis. An axisymmetric model of the vessel, implemented in the NOVAK code, had been developed in the framework of the detailed numerical studies performed by NOVATOME (again on behalf of ENEA), to check the beam schematization with a fluid added-mass model adopted by ANSALDO in SAP-IV and ANSYS for the reactor-block design calculations. Furthermore, a numerical model describing the vessel supporting structure in detail was also developed by ANSALDO and implemented in the SAP-IV code. The test conditions were analysed by use of these and the design models. Comparisons between calculations and measurements showed particularly good agreement with regard to the first natural frequency of the vessel and the rocking stiffness of the vessel supporting structure, i.e. those parameters on which vessel seismic amplification mainly depends: this demonstrated the adequacy of the design analysis to correctly calculate the seismic motion at the PEC core diagrid. (author)

  18. THE MISHKIN TEST: AN ANALYSIS OF MODEL EXTENSIONS

    Directory of Open Access Journals (Sweden)

    Diana MURESAN

    2015-04-01

    Full Text Available This paper reviews empirical research that applies the Mishkin test to examine the existence of the accruals anomaly using alternative approaches. The Mishkin test, originally used in macro-econometrics to test the rational expectations hypothesis, tests for market efficiency. Starting with Sloan (1996), the model has been applied to the accruals anomaly literature. Since Sloan (1996) the model has undergone various improvements, and its efficacy has been the subject of many debates in the literature. Nevertheless, the current evidence strengthens the pervasiveness of the model. The analysis of the extended studies on the Mishkin test highlights that adding variables enhances the results, providing insightful information about the occurrence of the accruals anomaly.

  19. A neural network model for estimating soil phosphorus using terrain analysis

    Directory of Open Access Journals (Sweden)

    Ali Keshavarzi

    2015-12-01

    An artificial neural network (ANN) model was developed and tested for estimating soil phosphorus (P) in the Kouhin watershed area (1000 ha, Qazvin province, Iran) using terrain analysis. Based on the correlation of soil distribution and the vegetation growth pattern across the topographically heterogeneous landscape, topographic and vegetation attributes were used in addition to pedologic information to develop the ANN model for estimating soil phosphorus. In total, 85 samples were collected and tested for phosphorus content, and the corresponding attributes were estimated from the digital elevation model (DEM). To develop the pedo-transfer functions, data linearity was checked and correlations were computed; 80% of the collected data was used for modeling and the remaining 20% for testing the ANN. Results indicate that 68% of the variation in soil phosphorus could be explained by elevation and Band 1 data, and a significant correlation was observed between the input variables and phosphorus content. The significant correlation between soil P and terrain attributes can be used to derive a pedo-transfer function for soil P estimation to manage nutrient deficiency. Results showed that P values can be calculated more accurately with the ANN-based pedo-transfer function using the topographic variables along with Band 1.
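
    As a rough illustration of the kind of pedo-transfer mapping described above, the sketch below trains a one-hidden-layer network by plain gradient descent on synthetic elevation/band data with an 80/20 split. The data, network size, and learning rate are all hypothetical, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the field data: elevation and one spectral
# band as inputs, soil P as the target (the real study used 85 samples).
n = 85
elevation = rng.uniform(1200, 1800, n)
band1 = rng.uniform(0.05, 0.25, n)
soil_p = 0.02 * elevation - 40.0 * band1 + rng.normal(0, 2, n)

X = np.column_stack([elevation, band1])
X = (X - X.mean(0)) / X.std(0)                 # standardize inputs
y = (soil_p - soil_p.mean()) / soil_p.std()    # standardize target

idx = rng.permutation(n)                       # 80/20 split as in the abstract
train, test = idx[:68], idx[68:]

# One hidden tanh layer, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

mse_before = np.mean((forward(X[train])[1] - y[train]) ** 2)

lr = 0.05
for _ in range(500):
    h, pred = forward(X[train])
    err = pred - y[train]
    gW2 = h.T @ err / len(err); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)      # backprop through tanh
    gW1 = X[train].T @ dh / len(err); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse_after = np.mean((forward(X[train])[1] - y[train]) ** 2)
_, pred_test = forward(X[test])                # held-out 20% predictions
```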

  20. Separate effects tests for GOTHIC condensation and evaporative heat transfer models

    International Nuclear Information System (INIS)

    George, T.L.; Singh, A.

    1994-01-01

    The GOTHIC computer program, under development at EPRI/NAI, is a general purpose thermal hydraulics computer program for design, licensing, safety and operating analysis of nuclear containments and other confinement buildings. The code solves a nine-equation model for three-dimensional multiphase flow with separate mass, momentum and energy equations for the vapor, liquid and drop phases. The vapor phase can be a gas mixture of steam and non-condensing gases. The phase balance equations are coupled by mechanistic and empirical models for interface mass, energy and momentum transfer that cover the entire flow regime from bubbly flow to film/drop flow. A variety of heat transfer correlations are available to model the fluid coupling to active and passive solid conductors. This paper focuses on the application of GOTHIC to two separate effects tests: condensation heat transfer on a vertical flat plate with varying bulk velocity, steam concentration and temperature, and evaporative heat transfer from a hot pool to a dry (superheated) atmosphere. Comparisons with experimental data are included for both tests. Results show the validity of two condensation heat transfer correlations as incorporated into GOTHIC, and of the interfacial heat and mass transfer models, for the range of experimental test conditions. Comparisons are also made for lumped versus multidimensional modeling for buoyancy-controlled flow with evaporative heat transfer. (author). 13 refs., 1 tab., 10 figs

  1. Separate effects tests for GOTHIC condensation and evaporative heat transfer models

    International Nuclear Information System (INIS)

    George, T.L.; Singh, A.

    1996-01-01

    The GOTHIC computer program, under development at NAI for EPRI, is a general purpose thermal hydraulics computer program for design, licensing, safety and operating analysis of nuclear containments and other confinement buildings. The code solves a nine-equation model for three-dimensional multiphase flow with separate mass, momentum and energy equations for vapor, liquid and drop phases. The vapor phase can be a gas mixture of steam and non-condensing gases. The phase balance equations are coupled by mechanistic and empirical models for interface mass, energy and momentum transfer that cover the entire flow regime from bubbly flow to film-drop flow. A variety of heat transfer correlations are available to model the fluid coupling to active and passive solid conductors. This paper focuses on the application of GOTHIC to two separate effects tests: condensation heat transfer on a vertical flat plate with varying bulk velocity, steam concentration and temperature, and evaporative heat transfer from a hot pool to a dry (superheated) atmosphere. Comparisons with experimental data are included for both tests. Results show the validity of two condensation heat transfer correlations as incorporated into GOTHIC and the interfacial heat and mass transfer models for the range of the experimental test conditions. Comparisons are also made for lumped vs. multidimensional modeling for buoyancy-controlled flow with evaporative heat transfer. (orig.)

  2. [A multilevel model analysis of correlation between population characteristics and work ability of employees].

    Science.gov (United States)

    Zhang, Lei; Huang, Chunping; Lan, Yajia; Wang, Mianzhen

    2015-12-01

    To analyze the correlation between population characteristics and work ability of employees with a multilevel model, to investigate the important influencing factors for work ability, and to provide a basis for improving work ability. The work ability index (WAI) was applied to measure the work ability of 1686 subjects from 6 different companies. MLwiN 2.0 software was applied for two-level variance component model fitting. The WAI of employees showed differences between companies (χ2=3.3786, P=0.0660); working years were negatively correlated with WAI (χ2=38.2292, P=0.0001), and the WAI of employees with 20 or more working years was 1.63 lower than that of employees with fewer than 20 working years; the work ability of manual workers was lower than that of mental-manual workers (χ2=8.2726, P=0.0040), and work ability showed no significant difference between mental workers and mental-manual workers (χ2=2.0860, P=0.1487). From the perspective of probability, the multilevel model analysis reveals the differences in work ability of employees between different companies, and suggests that company, work type, and working years are important influencing factors for the work ability of employees. These factors should be improved and adjusted to protect or enhance the work ability of employees.

  3. Automated analysis of pumping tests; Analise automatizada de testes de bombeamento

    Energy Technology Data Exchange (ETDEWEB)

    Sugahara, Luiz Alberto Nozaki

    1996-01-01

    An automated procedure for the analysis of pumping test data from groundwater wells is described. Computer software was developed for the Windows operating system. The software offers a choice of three mathematical models for representing aquifer behavior: confined aquifer (Theis model); leaky aquifer (Hantush model); unconfined aquifer (Boulton model). Analysis of pumping test data with the proper aquifer model allows the determination of model parameters such as transmissivity, storage coefficient, leakage coefficient and delay index. The program can be used to analyze data from pumping tests with one or more pumping rates, as well as from recovery tests. In the multiple-rate case, a desuperposition procedure has been implemented to obtain the equivalent aquifer response for the first flow rate, which is used to obtain an initial estimate of the model parameters; such an initial estimate is required by the non-linear regression analysis method. The solutions to the partial differential equations describing aquifer behavior were obtained in Laplace space, followed by numerical inversion of the transformed solution using the Stehfest algorithm. The data analysis procedure is based on non-linear regression, matching the field data to the theoretical response of a selected aquifer model for a given type of test. A least-squares regression method was implemented using either Gauss-Newton or Levenberg-Marquardt procedures for minimization of an objective function. The software can also be applied to multiple-rate test data to determine the non-linear well coefficient, allowing computation of the well inflow performance curve. (author)
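
    The Theis branch of such an analysis can be sketched directly: drawdown s(t) = Q/(4πT)·W(u) with u = r²S/(4Tt), where W is the well function (the exponential integral), fitted by non-linear least squares. The sketch below uses SciPy's `curve_fit` (Levenberg-Marquardt by default) on synthetic data with hypothetical parameter values; the paper's own implementation inverts Laplace-space solutions with the Stehfest algorithm rather than using this closed form.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q = 0.02      # pumping rate, m^3/s (hypothetical)
r = 50.0      # distance to the observation well, m (hypothetical)

def theis_drawdown(t, T, S):
    """Theis (confined aquifer) drawdown; W(u) is the exponential integral."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic "field" data generated from known parameters plus 1% noise.
T_true, S_true = 5e-3, 2e-4
t = np.logspace(2, 5, 30)                       # 100 s .. ~28 h
rng = np.random.default_rng(1)
s_obs = theis_drawdown(t, T_true, S_true) * (1 + rng.normal(0, 0.01, t.size))

# Non-linear least squares fit for transmissivity T and storage coefficient S.
(T_fit, S_fit), _ = curve_fit(theis_drawdown, t, s_obs, p0=[1e-3, 1e-3])
```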

  4. Functional Multiple-Set Canonical Correlation Analysis

    Science.gov (United States)

    Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.

    2012-01-01

    We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…

  5. Sparse canonical correlation analysis: new formulation and algorithm.

    Science.gov (United States)

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-12-01

    In this paper, we study canonical correlation analysis (CCA), which is a powerful tool in multivariate data analysis for finding the correlation between two sets of multidimensional variables. The main contributions of the paper are: 1) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem, 2) to obtain the explicit characterization for all solutions of the multiple CCA problem even when the corresponding covariance matrices are singular, 3) to develop a new sparse CCA algorithm, and 4) to establish the equivalent relationship between the uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real-world datasets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms.
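
    For orientation, classical (dense) CCA itself reduces to an SVD of the cross-product of orthonormal bases for the two column-centered data blocks. A minimal NumPy sketch with synthetic data sharing one latent signal (this is the classical building block, not the paper's sparse algorithm):

```python
import numpy as np

def cca(X, Y):
    """Canonical correlations between data blocks X (n×p) and Y (n×q):
    center the columns, orthonormalize each block by QR, and take the
    singular values of the cross-product of the bases."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    corr = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(corr, 0.0, 1.0)

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                    # shared latent signal
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])       # first X column tracks z
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])       # first Y column tracks z

rho = cca(X, Y)   # sorted canonical correlations
```

Only the first canonical correlation should be large here, since the remaining columns are independent noise.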

  6. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and the positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of a Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.

  7. CROSS-CORRELATION MODELLING OF SURFACE WATER – GROUNDWATER INTERACTION USING THE EXCEL SPREADSHEET APPLICATION

    Directory of Open Access Journals (Sweden)

    Kristijan Posavec

    2017-01-01

    Modelling the responses of groundwater levels in aquifer systems to changes in boundary conditions, such as river or stream stages, is commonly studied using statistical methods, namely correlation, cross-correlation and regression. Although correlation and regression analysis tools are readily available in Microsoft Excel, a widely applied spreadsheet industry standard, a cross-correlation analysis tool is missing. As part of research on groundwater pressure propagation into the alluvial aquifer systems of the Sava and Drava/Danube River catchments following river stage rises, focused on estimating groundwater pressure travel times in aquifers, an Excel spreadsheet data analysis application for cross-correlation modelling has been designed and used in modelling surface water – groundwater interaction. Examples of field data from the Zagreb aquifer system and the Kopački rit Nature Park aquifer system illustrate the usefulness of the cross-correlation application.
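
    A minimal version of such a cross-correlation analysis is easy to sketch outside Excel as well: compute the Pearson correlation of the river-stage series against the groundwater series shifted by each candidate lag, and read the travel time off the lag of maximum correlation. The daily series below are synthetic stand-ins with a built-in 6-day delay, not the Sava/Drava field data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily records: a river stage with two flood pulses, and a
# groundwater level that responds with a 6-day delay plus measurement noise.
n, delay = 365, 6
t = np.arange(n)
river = (3.0 * np.exp(-((t - 100) / 8.0) ** 2)
         + 2.0 * np.exp(-((t - 250) / 12.0) ** 2))
gw = np.empty(n)
gw[delay:] = river[:-delay]
gw[:delay] = river[0]
gw += rng.normal(0, 0.05, n)

def ccf(x, y, max_lag):
    """Pearson correlation of x[i] against y[i + k], for k = 0..max_lag."""
    return np.array([np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
                     for k in range(max_lag + 1)])

lags = ccf(river, gw, 30)
travel_time = int(np.argmax(lags))   # lag of maximum correlation, in days
```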

  8. The electron antineutrino angular correlation coefficient a in free neutron decay. Testing the standard model with the aSPECT-spectrometer

    International Nuclear Information System (INIS)

    Borg, Michael

    2011-01-01

    The β-decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables, some of which are sensitive to physics beyond the SM; the correlation coefficients of the particles involved belong to them. The spectrometer aSPECT was designed to measure precisely the shape of the proton energy spectrum and to extract from it the electron antineutrino angular correlation coefficient a. A first test period (2005/2006) provided a proof of principle, but the limiting influence of uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient a (published in 2008). A second measurement cycle (2007/2008) aimed to improve on the relative accuracy of previous experiments, δa/a = 5%. I performed the analysis of the data taken there, which is the emphasis of this doctoral thesis. A central point is the background studies: the systematic impact of background on a was reduced to δa(syst.)/a = 0.61%, and the statistical accuracy of the analyzed measurements is δa(stat.)/a ≈ 1.4%. In addition, saturation effects of the detector electronics, observed at the outset, were investigated; these turned out not to be correctable at a sufficient level. An applicable idea for avoiding the saturation effects is discussed in the last chapter. (orig.)

  9. Interpreting canonical correlation analysis through biplots of structure correlations and weights

    NARCIS (Netherlands)

    Braak, ter C.J.F.

    1990-01-01

    This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of the one set and those of the second. The link between multivariate

  10. Methods and Models of Market Risk Stress-Testing of the Portfolio of Financial Instruments

    Directory of Open Access Journals (Sweden)

    Alexander M. Karminsky

    2015-01-01

    Amid the instability of financial markets and the macroeconomic situation, the necessity of improving bank risk-management instruments arises. The new economic reality defines the need to search for more advanced approaches to estimating banks' vulnerability to exceptional but plausible events. Stress-testing belongs to such instruments. The paper reviews and compares models of market risk stress-testing for portfolios of different financial instruments. The topic is highly relevant now that stress-testing is becoming an integral part of anti-crisis risk-management amid macroeconomic instability and the appearance of new risks, together with close interest in the problem of risk aggregation. The paper outlines the notion of stress-testing and covers the goals and functions of stress-tests and the main criteria for classifying market risk stress-tests. It also stresses special aspects of scenario analysis. The novelty of the research lies in elaborating a programme of aggregated complex multifactor stress-testing of portfolio risk based on scenario analysis. The paper highlights modern Russian and foreign models of stress-testing, both on a solo basis and complex. Emphasis is laid on the results of stress-testing and revaluations of positions for all three complex models: the Central Bank's methodology for stress-testing portfolio risk, a model relying on correlation analysis, and a copula model. The models of stress-testing on a solo basis differ for each financial instrument: a parametric StressVaR model is applicable to stress-testing shares and options; a model based on "Greeks" indicators is used for options; for Eurobonds, a regional factor model is used. Finally, some theoretical recommendations on managing the market risk of the portfolio are given.

  11. K-correlation power spectral density and surface scatter model

    Science.gov (United States)

    Dittman, Michael G.

    2006-08-01

    The K-Correlation or ABC model for surface power spectral density (PSD) and BRDF has been around for years. Eugene Church and John Stover, in particular, have published descriptions of its use in describing smooth surfaces. The model has, however, remained underused in the optical analysis community partially due to the lack of a clear summary tailored toward that application. This paper provides the K-Correlation PSD normalized to σ(λ) and BRDF normalized to TIS(σ,λ) in a format intended to be used by stray light analysts. It is hoped that this paper will promote use of the model by analysts and its incorporation as a standard tool into stray light modeling software.
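
    One commonly quoted 1-D form of the ABC/K-correlation PSD is S(f) = A / (1 + (B·f)²)^(C/2): a plateau of height A below the roll-off frequency 1/B, falling as f^(−C) well above it. A small sketch, with hypothetical parameter values and units:

```python
import numpy as np

def k_correlation_psd(f, A, B, C):
    """1-D ABC / K-correlation surface PSD profile: plateau of height A
    below the roll-off at 1/B, power-law tail of slope -C above it."""
    return A / (1.0 + (B * f) ** 2) ** (C / 2.0)

f = np.logspace(-3, 2, 200)        # spatial frequency, 1/mm (hypothetical units)
psd = k_correlation_psd(f, A=1e4, B=100.0, C=2.4)

# In the high-frequency tail, the log-log slope approaches -C.
tail_slope = ((np.log(psd[-1]) - np.log(psd[-10]))
              / (np.log(f[-1]) - np.log(f[-10])))
```

Integrating the PSD over the relevant frequency band (to recover σ, and from it the TIS normalization of the BRDF) is the step the paper describes for stray light work.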

  12. Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis

    International Nuclear Information System (INIS)

    Wang Shijun; Yao Jianhua; Liu Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.

    2009-01-01

    Purpose: In computed tomographic colonography (CTC), a patient is scanned twice, once supine and once prone, to improve the sensitivity of polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four salient anatomical points on the colon are first automatically identified. Correlation optimized warping is then applied to the segments defined by the anatomical landmarks to improve the global registration based on the local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in polyp location between supine and prone scans by 67.6%, from 46.27±52.97 mm to 14.98±11.41 mm, compared to the normalized-distance-along-the-colon-centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for colon centerline registration than the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error among the feature combinations used by COW. The proposed method is tolerant to centerline errors because the anatomical landmarks help prevent the propagation of errors across the entire colon centerline.

  13. Correlation models for waste tank sludges and slurries

    International Nuclear Information System (INIS)

    Mahoney, L.A.; Trent, D.S.

    1995-07-01

    This report presents the results of work conducted to support the TEMPEST computer modeling under the Flammable Gas Program (FGP) and to further the comprehension of the physical processes occurring in the Hanford waste tanks. The end products of this task are correlation models (sets of algorithms) that can be added to the TEMPEST computer code to improve the reliability of its simulation of the physical processes that occur in Hanford tanks. The correlation models can be used to augment, not only the TEMPEST code, but other computer codes that can simulate sludge motion and flammable gas retention. This report presents the correlation models, also termed submodels, that have been developed to date. The submodel-development process is an ongoing effort designed to increase our understanding of sludge behavior and improve our ability to realistically simulate the sludge fluid characteristics that have an impact on safety analysis. The effort has employed both literature searches and data correlation to provide an encyclopedia of tank waste properties in forms that are relatively easy to use in modeling waste behavior. These properties submodels will be used in other tasks to simulate waste behavior in the tanks. Density, viscosity, yield strength, surface tension, heat capacity, thermal conductivity, salt solubility, and ammonia and water vapor pressures were compiled for solutions and suspensions of sodium nitrate and other salts (where data were available), and the data were correlated by linear regression. In addition, data for simulated Hanford waste tank supernatant were correlated to provide density, solubility, surface tension, and vapor pressure submodels for multi-component solutions containing sodium hydroxide, sodium nitrate, sodium nitrite, and sodium aluminate

  14. Gray correlation analysis and prediction models of living refuse generation in Shanghai city.

    Science.gov (United States)

    Liu, Gousheng; Yu, Jianguo

    2007-01-01

    A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most design efforts have been based on a rough prediction of MLF without actual support. In this paper, based on published data on socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the gray correlation coefficient method. Several gray models, such as GM(1,1), GIM(1), GPPM(1) and GLPM(1), have been studied, and the predicted results are verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water and electricity are the three largest factors affecting MLF generation, and GLPM(1) is the optimal model for predicting MLF generation. With this model, the predicted MLF generation in 2010 in Shanghai is 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
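
    The baseline GM(1,1) model mentioned above is compact enough to sketch in full: accumulate the series, fit the gray differential equation dx1/dt + a·x1 = b by least squares on the mean-generated sequence, and difference the fitted exponential back to the original scale. The refuse series below is hypothetical, not Shanghai's data.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Classic GM(1,1) gray model: fit on series x0 and forecast
    `horizon` further steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])              # mean-generated sequence
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # x0[k] = -a*z[k] + b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

# Hypothetical near-exponential annual refuse series (million tons).
series = [4.50, 4.82, 5.17, 5.55, 5.95, 6.38]
fit = gm11_forecast(series, horizon=2)        # in-sample fit + 2-step forecast
```

A residual test, as in the paper, would then compare `fit[:len(series)]` against the observed series before trusting the forecast.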

  15. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    Science.gov (United States)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are now being put to new use with emerging technologies. Digital models of the fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways to describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. It calls an additional subroutine that graphically displays the intensities, and the resulting frequency of those intensities over a range of solar phase angles, in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve code concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  16. Photogrammetric analysis of rubble mound breakwaters scale model tests

    Directory of Open Access Journals (Sweden)

    João Rodrigues

    2016-09-01

    The main goal of this paper is to develop a photogrammetric method to obtain a robust tool for damage assessment and quantification of rubble-mound armour layers during physical scale model tests. An innovative approach based on a reduced number of digital photos is proposed to support the identification of affected areas. This work considers two simple digital photographs recording the instants before and after the completion of the physical test. Mathematical techniques were used in developing the procedures, enabling the tracking of image differences between the photos. The procedures were developed using an open-source application, Scilab; nevertheless, they are not platform dependent. The procedures developed enable the location and identification of eroded areas in the breakwater armour layer, as well as their quantification. This ability is confirmed through the calculation of correlation coefficients in each step of the search for the most damaged area. It is also possible to assess the movement of armour layer units.

  17. Sequential accelerated tests: Improving the correlation of accelerated tests to module performance in the field

    Science.gov (United States)

    Felder, Thomas; Gambogi, William; Stika, Katherine; Yu, Bao-Ling; Bradley, Alex; Hu, Hongjie; Garreau-Iles, Lucie; Trout, T. John

    2016-09-01

    DuPont has been working steadily to develop accelerated backsheet tests that correlate with solar panel observations in the field. This report updates efforts in sequential testing. Single-exposure tests are more commonly used and can be completed more quickly, and certain tests provide helpful predictions of particular backsheet failure modes. DuPont recommendations for single-exposure tests are based on 25-year exposure levels for UV and humidity/temperature, and form a good basis for sequential test development. We recommend a sequential exposure of damp heat followed by UV, then repetitions of thermal cycling and UVA. This sequence preserves 25-year exposure levels for humidity/temperature and UV, and correlates well with a large body of field observations. Measurements can be taken at intervals in the test, although the full test runs 10 months. A second, shorter sequential test based on damp heat and thermal cycling probes mechanical durability and correlates with the loss of mechanical properties seen in the field. Ongoing work is directed toward shorter sequential tests that preserve good correlation with field data.

  18. Wind tunnel test IA300 analysis and results, volume 1

    Science.gov (United States)

    Kelley, P. B.; Beaufait, W. B.; Kitchens, L. L.; Pace, J. P.

    1987-01-01

    The analysis and interpretation of wind tunnel pressure data from the Space Shuttle wind tunnel test IA300 are presented. The primary objective of the test was to determine the effects of the Space Shuttle Main Engine (SSME) and Solid Rocket Booster (SRB) plumes on the integrated vehicle forebody pressure distributions, the elevon hinge moments, and wing loads. The results of this test will be combined with flight test results to form a new data base to be employed in the IVBC-3 airloads analysis. A secondary objective was to obtain solid plume data for correlation with the results of gaseous plume tests. Data from the power level portion were used in conjunction with flight base pressures to evaluate nominal power levels to be used during the investigation of changes in model attitude, elevon deflection, and nozzle gimbal angle. The plume-induced aerodynamic loads were developed for the Space Shuttle bases and forebody areas. A computer code was developed to integrate the pressure data. Using simplified geometrical models of the Space Shuttle elements and components, the pressure data were integrated to develop plume-induced force and moment coefficients that can be combined with a power-off data base to develop a power-on data base.

  19. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
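
    The Mantel-Haenszel building block of the adaptive test can be sketched as follows: sum each stratum's deviation of the observed cell count from its null expectation, and divide by the root of the summed hypergeometric variances. The strata below are hypothetical, and the sketch stops short of the paper's joint-distribution critical value (implemented in the `sensitivity2x2xk` package), simply taking the larger of the focused and combined deviates.

```python
import numpy as np

def mh_z(tables):
    """Mantel-Haenszel deviate for a list of 2x2 tables [[a, b], [c, d]]
    (rows: treated/control; columns: event/no event)."""
    num, var = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n          # observed minus E[a] under H0
        var += ((a + b) * (c + d) * (a + c) * (b + d)
                / (n ** 2 * (n - 1)))             # hypergeometric variance of a
    return num / np.sqrt(var)

# Hypothetical strata; the first two form the "focused" subgroup expected
# to show the larger treatment effect.
strata = [np.array(t) for t in [
    ((30, 10), (15, 25)),
    ((28, 12), (14, 26)),
    ((22, 18), (20, 20)),
    ((21, 19), (19, 21)),
]]

z_combined = mh_z(strata)        # all of the data
z_focused = mh_z(strata[:2])     # subgroup only
z_adaptive = max(z_focused, z_combined)   # compared against a joint critical value
```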

  20. Spectral analysis by correlation

    International Nuclear Information System (INIS)

    Fauque, J.M.; Berthier, D.; Max, J.; Bonnet, G.

    1969-01-01

    The spectral density of a signal, which represents its power distribution along the frequency axis, is a function of great importance, finding many uses in all fields concerned with signal processing (process identification, vibrational analysis, etc.). Among the possible methods for calculating this function, the correlation method (correlation function calculation followed by Fourier transformation) is the most promising, mainly because of its simplicity and the results it yields. The study carried out here leads to the construction of an apparatus which, coupled with a correlator, constitutes a set of equipment for real-time spectral analysis covering the frequency range 0 to 5 MHz. (author) [fr
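
    The correlation method itself is two steps, which a short NumPy sketch makes concrete: estimate the autocorrelation function, then Fourier-transform it (the Wiener-Khinchin relation). The 50 Hz test tone below is synthetic.

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.normal(size=t.size)

# Step 1: autocorrelation of the mean-removed signal (the correlator's job).
x0 = x - x.mean()
acf = np.correlate(x0, x0, mode="full") / len(x0)

# Step 2: Fourier transform of the autocorrelation gives the power spectrum;
# the magnitude absorbs the linear phase from the two-sided lag axis.
psd = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(len(acf), d=1 / fs)

peak_freq = freqs[np.argmax(psd)]  # should sit at the 50 Hz tone
```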

  1. Testing the simplex assumption underlying the Sport Motivation Scale: a structural equation modeling analysis.

    Science.gov (United States)

    Li, F; Harmer, P

    1996-12-01

    Self-determination theory (Deci & Ryan, 1985) suggests that motivational orientation or regulatory styles with respect to various behaviors can be conceptualized along a continuum ranging from low (amotivation) to high (intrinsic motivation) levels of self-determination. This pattern is manifested in the rank order of correlations among these regulatory styles (i.e., adjacent correlations are expected to be higher than those more distant) and is known as a simplex structure. Using responses from the Sport Motivation Scale (Pelletier et al., 1995) obtained from a sample of 857 college students (442 men, 415 women), the present study tested the simplex structure underlying SMS subscales via structural equation modeling. Results confirmed the simplex model structure, indicating that the various motivational constructs are empirically organized from low to high self-determination. The simplex pattern was further found to be invariant across gender. Findings from this study support the construct validity of the SMS and have important implications for studies focusing on the influence of motivational orientation in sport.
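The simplex pattern is easy to state operationally. The following sketch (a hypothetical descriptive check, not the confirmatory structural equation modeling used in the study) tests whether the correlations in a matrix decay monotonically with distance from the diagonal:

```python
import numpy as np

def is_simplex(corr):
    """True if, in every row, correlations decrease monotonically as the
    column moves away from the diagonal (adjacent constructs correlate
    more strongly than distant ones)."""
    corr = np.asarray(corr)
    k = corr.shape[0]
    for i in range(k):
        right = [corr[i, j] for j in range(i + 1, k)]
        left = [corr[i, j] for j in range(i - 1, -1, -1)]
        for seq in (right, left):
            if any(seq[m] < seq[m + 1] for m in range(len(seq) - 1)):
                return False
    return True

# A toy matrix with rho_ij = 0.8 ** |i - j| decays with distance
# and therefore satisfies the simplex pattern.
toy = 0.8 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
```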

  2. New design procedure development of future reactor critical power estimation. (1) Practical design-by-analysis method for BWR critical power design correlation

    International Nuclear Information System (INIS)

    Yamamoto, Yasushi; Mitsutake, Toru

    2007-01-01

    For present BWR fuels, full mock-up thermal-hydraulic tests, such as critical power and pressure drop measurement tests, have been needed. However, full mock-up tests require high costs and a large-scale test facility, and only a few facilities in the world can perform them. Moreover, future BWR bundles tend to be larger, in order to reduce plant construction costs and minimize the routine check period. For instance, AB1600, an improved ABWR proposed by Toshiba, has a bundle size 1.2 times larger than the conventional BWR fuel size. Performing a full mock-up thermal-hydraulic test for such a large fuel bundle is too expensive and far from realistic, so a new design procedure is required for large-scale bundle design development, especially for future reactors. Therefore, a new design procedure, the Practical Design-by-Analysis (PDBA) method, has been developed. It consists of a partial mock-up test and numerical analysis; at present, a subchannel analysis method based on a three-fluid two-phase flow model is the only realistic choice. First, a partial mock-up test is performed, for instance with a 1/4 partial mock-up bundle, and the first-step critical power correlation coefficients are evaluated from the measured data. Input data for the subchannel analysis, such as the spacer effect model coefficient, are also estimated from the data. Next, the radial power effect on the critical power of the full-size bundle is estimated with the subchannel analysis. Finally, the critical power correlation is modified according to the subchannel analysis results. In the present study, the critical power correlation of the conventional 8x8 BWR fuel was developed with the PDBA method using 4x4 partial mock-up tests and the subchannel analysis code. The accuracy of the estimated critical power was 3.8%.

  3. Application of Multilevel Models to Morphometric Data. Part 2. Correlations

    Directory of Open Access Journals (Sweden)

    O. Tsybrovskyy

    2003-01-01

    Full Text Available Multilevel organization of morphometric data (cells are “nested” within patients requires special methods for studying correlations between karyometric features. The most distinct feature of these methods is that separate correlation (covariance matrices are produced for every level in the hierarchy. In karyometric research, the cell‐level (i.e., within‐tumor correlations seem to be of major interest. Besides their biological importance, these correlation coefficients (CC are compulsory when dimensionality reduction is required. Using MLwiN, a dedicated program for multilevel modeling, we show how to use multivariate multilevel models (MMM to obtain and interpret CC at each of the levels. A comparison with two usual, “single‐level” statistics shows that MMM represent the only way to obtain correct cell‐level correlation coefficients. The summary statistics method (take average values across each patient produces patient‐level CC only, and the “pooling” method (merge all cells together and ignore patients as units of analysis yields CC that are correct at neither level. We conclude that multilevel modeling is an indispensable tool for studying correlations between morphometric variables.
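A small simulation (hypothetical data, not MLwiN output) illustrates why the "pooling" method misleads: when patient-level means dominate, the pooled correlation can even take the opposite sign from the cell-level correlation, which centering within patients recovers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-level data: 20 "patients" with 50 "cells" each.
n_pat, n_cell = 20, 50
mu = rng.normal(0.0, 3.0, size=n_pat)        # patient-level effect
e = rng.normal(size=n_pat * n_cell)          # cell-level deviation
x = np.repeat(mu, n_cell) + e
# Within each patient, x and y are negatively related, while the
# shared patient means push the pooled correlation positive.
y = np.repeat(mu, n_cell) - 0.8 * e + 0.3 * rng.normal(size=n_pat * n_cell)

pooled_r = np.corrcoef(x, y)[0, 1]           # "pooling" method

# Centering within patients isolates the cell-level correlation.
xc = x - np.repeat(x.reshape(n_pat, n_cell).mean(axis=1), n_cell)
yc = y - np.repeat(y.reshape(n_pat, n_cell).mean(axis=1), n_cell)
within_r = np.corrcoef(xc, yc)[0, 1]
```

Here the pooled coefficient is positive while the within-patient coefficient is strongly negative, so the pooled value is correct at neither level.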

  4. CORRELATIONS BETWEEN FINDINGS OF OCCLUSAL AND MANUAL ANALYSIS IN TMD-PATIENTS

    Directory of Open Access Journals (Sweden)

    Mariana Dimova

    2016-08-01

    Full Text Available The aim of this study was to investigate and analyze possible correlations between findings from manual functional analysis and clinical occlusal analysis in TMD patients. Material and methods: The material of this study comprises 111 TMD patients selected after visual diagnostics, a brief functional examination according to Ahlers Jakstatt, intraoral examination, and recording of periodontal status. In the period September 2014 - March 2016, all patients were subjected to manual functional analysis and clinical occlusal analysis. 17 people (10 women and 7 men underwent imaging with cone-beam computed tomography. Results: Many statistically significant correlations were found between tests of the structural analysis, indicating the relationships between findings. Conclusion: The presence of statistically significant correlations between occlusal relationships, freedom in centric, and the condition of the muscle complex of the masticatory system and TMJ confirms the relationship between the state of the occlusal components and TMD.

  5. An efficient sensitivity analysis method for modified geometry of Macpherson suspension based on Pearson correlation coefficient

    Science.gov (United States)

    Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh

    2017-06-01

    The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of the Pearson correlation, which determines the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then applied to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model, and an experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration; the estimation of the Pearson correlation coefficient between variables is also analysed. The Pearson correlation coefficient proves to be an efficient means of analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
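The ranking step can be sketched as follows (the variable names and the linear toy response are illustrative assumptions, not the paper's suspension model): sample the design variables, evaluate the objective, and rank members by the absolute Pearson correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical samples of three suspension design variables and an
# objective (e.g., sprung-mass acceleration); the toy response below
# is an assumption made for illustration.
n = 500
spring_k = rng.uniform(20.0, 40.0, n)
damper_c = rng.uniform(1.0, 3.0, n)
arm_len = rng.uniform(0.3, 0.5, n)
accel = 0.9 * spring_k - 2.0 * damper_c + rng.normal(0.0, 1.0, n)

def pearson(a, b):
    """Pearson correlation coefficient of two samples."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Sensitivity measure: |r| between each member and the objective.
sensitivity = {name: abs(pearson(v, accel))
               for name, v in [("spring_k", spring_k),
                               ("damper_c", damper_c),
                               ("arm_len", arm_len)]}
ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
```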

  6. Digital speckle correlation for nondestructive testing of corrosion

    Science.gov (United States)

    Paiva, Raul D., Jr.; Soga, Diogo; Muramatsu, Mikiya; Hogert, Elsa N.; Landau, Monica R.; Ruiz Gale, Maria F.; Gaggioli, Nestor G.

    1999-07-01

    This paper describes the use of speckle pattern correlation to detect and analyze metallic corrosion phenomena, and shows the experimental set-up used. We present new results on the characterization of the corrosion process using a model based on electroerosion phenomena. We also provide valuable information about surface microrelief changes, which is useful in numerous engineering applications. The results obtained show that our technique opens new possibilities for the analysis of corrosion and oxidation processes, particularly in real time.

  7. Canonical correlation analysis of professional stress,social support,and professional burnout among low-rank army officers

    Directory of Open Access Journals (Sweden)

    Chuan-yun LI

    2011-12-01

    Full Text Available Objective: The present study investigates the influence of professional stress and social support on professional burnout among low-rank army officers. Methods: The professional stress, social support, and professional burnout scales for low-rank army officers were used as test tools, and officers of established units (battalion, company, and platoon were chosen as test subjects. Of the 260 scales sent, 226 effective scales were received. Descriptive statistics and canonical correlation analysis models were used to analyze the influence of each variable. Results: The scores of low-rank army officers on the professional stress, social support, and professional burnout scales were above average, except on two factors, namely interpersonal support and de-individualization. The canonical analysis identified three groups of canonical correlation factors, of which two reached a significant level (P < 0.001. After further eliminating the social support variable, the canonical correlation analysis of professional stress and burnout showed that the canonical correlation coefficients ρ1 and ρ2 were 0.62 and 0.36, respectively, both at a highly significant level (P < 0.001. Conclusion: Low-rank army officers experience higher professional stress and burnout levels, showing a lower sense of accomplishment, emotional exhaustion, and more serious depersonalization. However, social support can reduce the onset and seriousness of professional burnout among these officers by lessening pressure factors such as career development, work features, salary conditions, and other personal factors.

  8. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  9. QSPR modeling of octanol/water partition coefficient of antineoplastic agents by balance of correlations.

    Science.gov (United States)

    Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio

    2010-04-01

    Three different splits of 55 antineoplastic agents into a subtraining set (n = 22), a calibration set (n = 21), and a test set (n = 12) have been examined. By the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient were obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both maximal correlation coefficients for the subtraining and calibration sets and a minimal difference between these correlation coefficients. Thus, the calibration set serves as a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
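The target function can be sketched in one line (the equal weighting is an assumption; the paper's exact functional form may differ): reward high correlations on both sets while penalizing their difference, so that the calibration set acts as a preliminary test set:

```python
def balance_target(r_sub, r_cal, penalty=1.0):
    """Correlation-balance target: reward high correlation coefficients
    on both the subtraining and calibration sets while penalizing the
    gap between them (the penalty weight is an assumed choice)."""
    return r_sub + r_cal - penalty * abs(r_sub - r_cal)

# A balanced split beats an unbalanced one with the same mean r,
# steering the optimization away from overfitting the subtraining set.
balanced = balance_target(0.85, 0.84)
unbalanced = balance_target(0.99, 0.70)
```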

  10. Correlation between liver function tests and metabolic syndrome in hepatitis-free elderly

    Directory of Open Access Journals (Sweden)

    Hung-Sheng Shang

    2015-01-01

    Full Text Available Background: We aimed to investigate the relationship between liver function tests (LFTs and metabolic syndrome (MetS, as several studies have shown positive correlations between some of the LFTs, including alanine aminotransferase (ALT and γ-glutamyl transpeptidase (γ-GT, and MetS but have not fully explored the same in the elderly. Owing to progress in public health, the aging of the general population has become a major issue. Design: We enrolled subjects aged over 60 years who underwent routine health checkups in a Health Screening Center, after excluding subjects with a history of hepatitis B or C infection, excessive alcohol consumption, liver fibrosis, cirrhosis, acute hepatitis, diabetes, hypertension, dyslipidemia, cardiovascular disease, or receiving medications for these diseases. Finally, 9,282 participants were eligible for analysis. Statistical Analysis: All data were tested for normal distribution with the Kolmogorov-Smirnov test and for homogeneity of variances with Levene's test. A t-test was used to evaluate differences between the two groups. Univariate and multivariate regressions were used to observe correlations between parameters. Receiver operating characteristic curves of each LFT were used to predict MetS; areas under the curves and 95% confidence intervals were also estimated and compared. Results: With the exception of aspartate aminotransferase and α-fetoprotein, the results of LFTs, including total and direct bilirubin, alkaline phosphatase (ALP, ALT, and γ-GT, were altered in the group with MetS. Furthermore, the levels of γ-GT in men and ALP in women were independently associated with all MetS components and had the largest areas under the receiver operating characteristic curves. Conclusion: Abnormal LFTs are highly correlated with MetS in the hepatitis-free elderly, with levels of γ-GT in men and ALP in women being the most important factors. LFTs may represent an auxiliary tool for the

  11. Dynamic analysis of the PEC fast reactor vessel: On-site tests and mathematical models

    International Nuclear Information System (INIS)

    Zola, M.; Martelli, A.; Masoni, P.; Scandola, G.

    1988-01-01

    This paper presents the main features and results of the on-site dynamic tests and the related numerical analyses carried out for the PEC reactor vessel. The purpose is to provide an example of on-site testing of large components, stressing the problems encountered during the experiments, as well as in the processing phase of the test results and for the comparisons between calculations and measurements. Tests, performed by ISMES on behalf of ENEA, allowed the dynamic response of the empty vessel to be measured, thus providing data for the verification of the numerical models of the vessel supporting structure adopted in the PEC reactor-block seismic analysis. An axisymmetric model of the vessel, implemented in the NOVAX code, had been developed in the framework of the detailed numerical studies performed by NOVATOME (again on behalf of ENEA), to check the beam schematization with fluid added mass model adopted by ANSALDO in SAP-IV and ANSYS for the reactor-block design calculations. Furthermore, a numerical model, describing vessel supporting structure in detail, was also developed by ANSALDO and implemented in the SAP-IV code. The test conditions were analysed by use of these and the design models. Comparisons between calculations and measurements showed particularly good agreement with regard to first natural frequency of the vessel and rocking stiffness of the vessel supporting structure, i.e. those parameters on which vessel seismic amplification mainly depends: this demonstrated the adequacy of the design analysis to correctly calculate the seismic motion at the PEC core diagrid. (author). 5 refs, 23 figs, 4 tabs

  12. A Test of the Need Hierarchy Concept by a Markov Model of Change in Need Strength.

    Science.gov (United States)

    Rauschenberger, John; And Others

    1980-01-01

    In this study of 547 high school graduates, Alderfer's and Maslow's need hierarchy theories were expressed in Markov chain form and were subjected to empirical test. Both models were disconfirmed. Corroborative multiwave correlational analysis also failed to support the need hierarchy concept. (Author/IRT)

  13. A model for C-14 tracer evaporative rate analysis (ERA)

    International Nuclear Information System (INIS)

    Gardner, R.P.; Verghese, K.

    1993-01-01

    A simple model has been derived and tested for the C-14 tracer evaporative rate analysis (ERA) method. It allows the accurate determination of the evaporative rate coefficient of the C-14 tracer detector in the presence of variable evaporation rates of the detector solvent and variable background counting rates. The evaporation rate coefficient should be the most fundamental parameter available in this analysis method and, therefore, its measurement with the proposed model should allow the most direct correlations to be made with the system properties of interest, such as surface cleanliness. (author)

  14. Testing Group Mean Differences of Latent Variables in Multilevel Data Using Multiple-Group Multilevel CFA and Multilevel MIMIC Modeling.

    Science.gov (United States)

    Kim, Eun Sook; Cao, Chunhua

    2015-01-01

    Considering that group comparisons are common in social science, we examined two latent group mean testing methods for cases where the groups of interest are at either the between or the within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intra-class group correlation (i.e., the correlation between random-effect factors for groups within a cluster) was modeled. The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.

  15. Correlation analysis between ceramic insulator pollution and acoustic emissions

    Directory of Open Access Journals (Sweden)

    Benjamín Álvarez-Nasrallah

    2015-01-01

    Full Text Available Most studies related to insulator pollution are based on individual analyses of leakage current, relative humidity, and equivalent salt deposit density (ESDD. This paper presents a correlation analysis between the leakage current and the acoustic emissions measured in a 230 kV electrical substation in the city of Barranquilla, Colombia. Furthermore, atmospheric variables were considered to develop a characterization model of the insulator contamination process. This model was used to demonstrate that noise emission levels are a reliable indicator for detecting and characterizing pollution on high voltage insulators. The correlation found among the atmospheric, electrical, and sound variables made it possible to determine the relations needed for the maintenance of ceramic insulators in highly polluted areas. In this article, results on the behavior of the leakage current in ceramic insulators and the sound produced under different atmospheric conditions are shown, which allow evaluating the best time to clean the insulators at the substation. Furthermore, by experimentation on site and using statistical models, the correlation between ambient variables and the leakage current of insulators in an electrical substation was obtained. Some of the problems caused by external noise were overcome using multiple microphones and specialized software that enabled proper filtering of the sound and better measurement of the variables.

  16. Psychological Correlates of University Students' Academic Performance: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Richardson, Michelle; Abraham, Charles; Bond, Rod

    2012-01-01

    A review of 13 years of research into antecedents of university students' grade point average (GPA) scores generated the following: a comprehensive, conceptual map of known correlates of tertiary GPA; assessment of the magnitude of average, weighted correlations with GPA; and tests of multivariate models of GPA correlates within and across…

  17. Correlates of emotional congruence with children in sexual offenders against children: a test of theoretical models in an incarcerated sample.

    Science.gov (United States)

    McPhail, Ian V; Hermann, Chantal A; Fernandez, Yolanda M

    2014-02-01

    Emotional congruence with children is a psychological construct theoretically involved in the etiology and maintenance of sexual offending against children. Research conducted to date has not examined the relationship between emotional congruence with children and other psychologically meaningful risk factors for sexual offending against children. The current study derived potential correlates of emotional congruence with children from the published literature and proposed three models of emotional congruence with children that contain relatively unique sets of correlates: the blockage, sexual deviance, and psychological immaturity models. Using Area under the Curve analysis, we assessed the relationship between emotional congruence with children and offense characteristics, victim demographics, and psychologically meaningful risk factors in a sample of incarcerated sexual offenders against children (n=221). The sexual deviance model received the most support: emotional congruence with children was significantly associated with deviant sexual interests, sexual self-regulation problems, and cognition that condones and supports child molestation. The blockage model received partial support, and the immaturity model received the least support. Based on the results, we propose a set of further predictions regarding the relationships between emotional congruence with children and other psychologically meaningful risk factors to be examined in future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    Science.gov (United States)

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

  19. Pile foundation response in liquefiable soil deposit during strong earthquakes. ; Centrifugal test for pile foundation model and correlation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, Y.; Miura, K. (Kajima Corp., Tokyo (Japan)); Scott, R.; Hushmand, B. (California Inst. of Technology, California, CA (United States))

    1992-09-30

    For the purpose of studying the pile foundation response in liquefiable soil deposits during earthquakes, a centrifugal loading system that can reproduce the stress conditions of the soil in the actual ground was employed, and earthquake wave vibration tests were performed on dry and saturated sand layers using a pile foundation model equipped with 4 piles. In addition, the test results were analyzed by simulation using an analytical method that takes effective stress into consideration, to investigate the effectiveness of this analytical model. The experiments clarified that, in saturated sand, the bending moment of the piles and the response characteristics of the foundation are greatly affected by the lengthening period of the ground acceleration wave form and the increase in ground displacement due to excess pore water pressure buildup. It is shown that the analytical model of the pile foundation/ground system is appropriate, and that this analytical method is effective in evaluating the seismic response of pile foundations in nonlinear liquefiable soil. 23 refs., 21 figs., 3 tabs.

  20. Data analytics using canonical correlation analysis and Monte Carlo simulation

    Science.gov (United States)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

    A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
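A minimal sketch of the idea (the QR/SVD-based CCA and the random search over power transforms are illustrative choices, not the authors' exact procedure): when the input/output link is purely quadratic, linear CCA finds little, while a Monte-Carlo search over simple transforms of the inputs recovers a strong correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

def first_canonical_corr(X, Y):
    """Largest canonical correlation between column blocks X and Y:
    the top singular value of Qx.T @ Qy, where Qx and Qy are
    orthonormal bases of the centered blocks."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

# Hypothetical data with a purely quadratic input/output link.
n = 400
X = rng.normal(size=(n, 2))
Y = (X[:, 0] ** 2 + 0.1 * rng.normal(size=n)).reshape(-1, 1)

linear_r = first_canonical_corr(X, Y)           # linear CCA sees little

# Monte-Carlo search over simple power transforms of the inputs.
best_r = linear_r
for _ in range(200):
    p = rng.choice([1, 2, 3], size=X.shape[1])  # random exponents
    best_r = max(best_r, first_canonical_corr(np.abs(X) ** p, Y))
```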

  1. Group sparse canonical correlation analysis for genomic data integration.

    Science.gov (United States)

    Lin, Dongdong; Zhang, Jigang; Li, Jingyao; Calhoun, Vince D; Deng, Hong-Wen; Wang, Yu-Ping

    2013-08-12

    The emergence of high-throughput genomic datasets from different sources and platforms (e.g., gene expression, single nucleotide polymorphisms (SNP), and copy number variation (CNV)) has greatly enhanced our understanding of the interplay of these genomic factors as well as their influence on complex diseases. It is challenging to explore the relationships between these different types of genomic data sets. In this paper, we focus on a multivariate statistical method, the canonical correlation analysis (CCA) method, for this problem. The conventional CCA method does not work effectively if the number of data samples is significantly less than the number of biomarkers, which is a typical case for genomic data (e.g., SNPs). Sparse CCA (sCCA) methods were introduced to overcome this difficulty, mostly using penalizations with the l-1 norm (CCA-l1) or a combination of the l-1 and l-2 norms (CCA-elastic net). However, they overlook the structural or group effects within genomic data, which often exist and are important (e.g., SNPs spanning a gene interact and work together as a group). We propose a new group sparse CCA method (CCA-sparse group) along with an effective numerical algorithm to study the mutual relationship between two different types of genomic data (i.e., SNP and gene expression). We then extend the model to a more general formulation that can include the existing sCCA models. We apply the model to feature/variable selection from two data sets and compare our group sparse CCA method with existing sCCA methods on both simulation and two real datasets (human gliomas data and NCI60 data). We use a graphical representation of the samples with a pair of canonical variates to demonstrate the discriminating characteristic of the selected features. Pathway analysis is further performed for biological interpretation of those features. The CCA-sparse group method incorporates group effects of features into the correlation analysis while performs individual feature

  2. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part II. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article can serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies

  3. Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis

    Science.gov (United States)

    Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony

    2009-01-01

    Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated is automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours are used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.

  4. A modeling study of dynamic characteristic analysis of isolated structure for seismic exciting tests

    International Nuclear Information System (INIS)

    Lee, Jae Han; Koo, G. H.; Yoo, Bong

    1998-04-01

    The fundamental frequency of the isolated superstructure for seismic exciting tests was calculated to be 16 Hz with an initial modal analysis model, but the actual modal test yielded 5 Hz. This large difference resulted from uncertainties in the analysis modeling of several connection parts between the columns and the upper beam, and of the cross bars on each face of the isolated superstructure. In all the analyses, when the stiffness of the cross bars is above a certain level, the joint stiffness between the main slab and the columns does not affect the fundamental frequency, so the fundamental frequency of the isolated superstructure is governed by the cross-bar stiffness. In the actual tests the first and second frequencies show little difference regardless of the cross-section characteristics (moments of inertia) of the four columns, because the joint stiffness between each column and the main slab is less than 10^8 lbf-in/radian. The mounting device of each column to the main slab and the bolting device of each column to the upper beam were fabricated with lower stiffness compared to the design values. The bolting of the cross bars and the fit of the bolts in the bolt holes loosened during the modal tests. In the future, tight connections and precise assembly of the isolated superstructure are required to reduce the difference between the fundamental frequencies obtained from the modal analysis and the actual test. (author). 4 refs

  5. Signal correlations in biomass combustion. An information theoretic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruusunen, M.

    2013-09-01

    Increasing environmental and economic awareness are driving the development of combustion technologies to efficient biomass use and clean burning. To accomplish these goals, quantitative information about combustion variables is needed. However, for small-scale combustion units the existing monitoring methods are often expensive or complex. This study aimed to quantify correlations between flue gas temperatures and combustion variables, namely typical emission components, heat output, and efficiency. For this, data acquired from four small-scale combustion units and a large circulating fluidised bed boiler was studied. The fuel range varied from wood logs, wood chips, and wood pellets to biomass residue. Original signals and a defined set of their mathematical transformations were applied to data analysis. In order to evaluate the strength of the correlations, a multivariate distance measure based on information theory was derived. The analysis further assessed time-varying signal correlations and relative time delays. Ranking of the analysis results was based on the distance measure. The uniformity of the correlations in the different data sets was studied by comparing the 10-quantiles of the measured signal. The method was validated with two benchmark data sets. The flue gas temperatures and the combustion variables measured carried similar information. The strongest correlations were mainly linear with the transformed signal combinations and explicable by the combustion theory. Remarkably, the results showed uniformity of the correlations across the data sets with several signal transformations. This was also indicated by simulations using a linear model with constant structure to monitor carbon dioxide in flue gas. Acceptable performance was observed according to three validation criteria used to quantify modelling error in each data set. 
In general, the findings demonstrate that the presented signal transformations enable real-time approximation of the studied
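    A minimal multivariate distance of the information-theoretic kind described above can be sketched as d = 1 - I(X;Y)/H(X,Y), which is 0 when two binned signals carry identical information and approaches 1 when they are independent. The flue-gas temperature and CO2 series below are hypothetical illustrations, not data from the study, and the specific normalisation is one common choice rather than the measure derived in the work.

```python
import math
from collections import Counter

def bin_signal(xs, nbins=4):
    # Discretise a signal into equal-width bins.
    lo, hi = min(xs), max(xs)
    w = (hi - lo) / nbins or 1.0
    return [min(int((x - lo) / w), nbins - 1) for x in xs]

def entropy(counts, n):
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def info_distance(xs, ys):
    # d = 1 - I(X;Y)/H(X,Y): 0 for identical binned signals, ~1 if independent.
    bx, by = bin_signal(xs), bin_signal(ys)
    n = len(bx)
    hx = entropy(Counter(bx), n)
    hy = entropy(Counter(by), n)
    hxy = entropy(Counter(zip(bx, by)), n)
    mi = hx + hy - hxy
    return 1.0 - mi / hxy if hxy > 0 else 0.0

temp = [310, 320, 335, 350, 360, 372, 381, 395, 402, 410]   # hypothetical K
co2  = [4.1, 4.3, 4.8, 5.2, 5.5, 5.9, 6.1, 6.6, 6.8, 7.0]  # hypothetical %
print(round(info_distance(temp, co2), 3))
```

A small distance for the temperature-CO2 pair is the kind of evidence the abstract summarises as the flue gas temperatures "carrying similar information" to the combustion variables.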

  6. Cross-correlation analysis of Ge(Li) spectra

    International Nuclear Information System (INIS)

    MacDonald, R.; Robertson, A.; Kennett, T.J.; Prestwich, W.V.

    1974-01-01

    A sensitive technique is proposed for activation analysis using cross-correlation and improved spectral orthogonality achieved through use of a rectangular zero-area digital filter. To test the accuracy and reliability of the cross-correlation procedure, five spectra obtained with a Ge(Li) detector were combined in different proportions. Gaussian distributed statistics were then added to the composite spectra by means of a pseudo-random number generator. The basis spectra used were 76As, 82Br, 72Ga, 77Ge, and room background. In general, when the basis spectra were combined in roughly comparable proportions, the accuracy of the technique proved to be excellent (>1%). However, of primary importance was the ability of the correlation technique to identify low-intensity components in the presence of high-intensity components. It was found that the detection threshold for Ge, for example, was not reached until the Ge content in the unfiltered spectrum was <0.16%. (T.G.)
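    The two steps named above, a rectangular zero-area digital filter followed by cross-correlation, can be sketched on a toy spectrum. The filter shape (a positive central lobe flanked by negative lobes whose coefficients sum to zero) and the spectra below are illustrative assumptions, not the paper's actual filter parameters or measured data.

```python
def zero_area_filter(width):
    # Positive lobe of `width` channels flanked by negative lobes of
    # width//2 channels each; for even widths the coefficients sum to zero.
    half = width // 2
    return [-1.0] * half + [1.0] * width + [-1.0] * half

def convolve(spectrum, kernel):
    # Valid-mode convolution of a spectrum with the filter kernel.
    k, n = len(kernel), len(spectrum)
    return [sum(spectrum[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def cross_correlation(a, b):
    # Zero-lag normalised cross-correlation of two filtered spectra.
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

# Toy photopeak on a flat background, and the same peak shifted in channel.
peak = [10, 10, 12, 20, 45, 80, 45, 20, 12, 10, 10, 10, 10, 10]
shifted = peak[-3:] + peak[:-3]

f = zero_area_filter(4)
r_same = cross_correlation(convolve(peak, f), convolve(peak, f))
r_shift = cross_correlation(convolve(peak, f), convolve(shifted, f))
print(round(r_same, 3), round(r_shift, 3))
```

The zero-area property suppresses the flat background before correlation, so a matching peak position gives correlation 1 while a displaced peak scores much lower, which is what makes the method sensitive to minor components.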

  7. Testing quantum mechanics using third-order correlations

    International Nuclear Information System (INIS)

    Kinsler, P.

    1996-01-01

    Semiclassical theories similar to stochastic electrodynamics are widely used in optics. The distinguishing feature of such theories is that the quantum uncertainty is represented by random statistical fluctuations. They can successfully predict some quantum-mechanical phenomena; for example, the squeezing of the quantum uncertainty in the parametric oscillator. However, since such theories are not equivalent to quantum mechanics, they will not always be useful. Complex number representations can be used to exactly model the quantum uncertainty, but care has to be taken that approximations do not reduce the description to a hidden variable one. This paper helps show the limitations of "semiclassical theories," and helps show where a true quantum-mechanical treatment needs to be used. Third-order correlations are a test that provides a clear distinction between quantum and hidden variable theories, in a way analogous to that provided by the "all or nothing" Greenberger-Horne-Zeilinger test of local hidden variable theories. copyright 1996 The American Physical Society

  8. Cohesive Zone Model Based Numerical Analysis of Steel-Concrete Composite Structure Push-Out Tests

    Directory of Open Access Journals (Sweden)

    J. P. Lin

    2014-01-01

    Full Text Available Push-out tests were widely used to determine the shear bearing capacity and shear stiffness of shear connectors in steel-concrete composite structures. The finite element method was one efficient alternative to push-out testing. This paper focused on a simulation analysis of the interface between concrete slabs and steel girder flanges as well as the interface of the shear connectors and the surrounding concrete. A cohesive zone model was used to simulate the tangential sliding and normal separation of the interfaces. Then, a zero-thickness cohesive element was implemented via the user-defined element subroutine UEL in the software ABAQUS, and a multiple broken line mode was used to define the constitutive relations of the cohesive zone. A three-dimensional numerical analysis model was established for push-out testing to analyze the load-displacement curves of the push-out test process, interface relative displacement, and interface stress distribution. This method was found to accurately calculate the shear capacity and shear stiffness of shear connectors. The numerical results showed that the multiple broken lines mode cohesive zone model could describe the nonlinear mechanical behavior of the interface between steel and concrete and that a discontinuous deformation numerical simulation could be implemented.

  9. Two-dimensional horizontal model seismic test and analysis for HTGR core

    International Nuclear Information System (INIS)

    Ikushima, Takeshi; Honma, Toshiaki.

    1988-05-01

    The resistance against earthquakes of a high-temperature gas-cooled reactor (HTGR) core with block-type fuels is not yet fully ascertained. Seismic studies must be made if such a reactor plant is to be installed in areas with frequent earthquakes. The paper presents the test results for the seismic behavior of a half-scale two-dimensional horizontal slice core model, together with analysis. The following is a summary of the more important results. (1) When the core is subjected to single-axis excitation or simultaneous two-axis excitation across-corners, it shows elliptical motion. The core stays in lumped motion at low excitation frequencies. (2) When load is applied to the fixed side reflector blocks from outside toward the core center, the core displacement and reflector impact reaction force decrease. (3) The maximum displacement occurs under simultaneous two-axis excitation; under single-axis excitation, the maximum displacement occurs for excitation to the core across-flats. (4) The results of the two-dimensional horizontal slice core model were compared with those of the two-dimensional vertical one. It is clarified that the seismic response of the actual core can be predicted from the results of the two-dimensional vertical slice core model. (5) The maximum reflector impact reaction force for seismic waves was below 60 percent of that for sinusoidal waves. (6) Vibration behavior and impact response are in good agreement between test and analysis. (author)

  10. Copula-based modeling of degree-correlated networks

    International Nuclear Information System (INIS)

    Raschke, Mathias; Schläpfer, Markus; Trantopoulos, Konstantinos

    2014-01-01

    Dynamical processes on complex networks such as information exchange, innovation diffusion, cascades in financial networks or epidemic spreading are highly affected by their underlying topologies as characterized by, for instance, degree–degree correlations. Here, we introduce the concept of copulas in order to generate random networks with an arbitrary degree distribution and a rich a priori degree–degree correlation (or ‘association’) structure. The accuracy of the proposed formalism and corresponding algorithm is numerically confirmed, while the method is tested on a real-world network of yeast protein–protein interactions. The derived network ensembles can be systematically deployed as proper null models, in order to unfold the complex interplay between the topology of real-world networks and the dynamics on top of them. (paper)
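    The copula idea above, which preserves the marginal degree distribution while controlling the degree-degree association, can be sketched by drawing edge-end degree pairs through a copula. The paper's formalism admits arbitrary copulas; the Gaussian copula, the truncated power-law marginal, and all parameter values below are illustrative choices of this sketch, not the study's specification.

```python
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

def degree_from_uniform(u, kmin=1, kmax=50, gamma=2.5):
    # Inverse-CDF sampling of a truncated power-law degree distribution.
    a = kmin ** (1 - gamma)
    b = kmax ** (1 - gamma)
    return int((a + u * (b - a)) ** (1 / (1 - gamma)))

def sample_degree_pair(rho):
    # Gaussian copula: correlated normals -> correlated uniforms -> degrees.
    z1 = nd.inv_cdf(random.random())
    z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * nd.inv_cdf(random.random())
    return degree_from_uniform(nd.cdf(z1)), degree_from_uniform(nd.cdf(z2))

pairs = [sample_degree_pair(rho=0.8) for _ in range(5000)]
xs, ys = zip(*pairs)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

print(round(pearson(xs, ys), 2))  # positive degree-degree association
```

Both margins are the same power law regardless of rho; only the association between the two end degrees changes, which is exactly the separation of marginal and dependence structure that the copula construction provides.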

  11. A femtoscopic correlation analysis tool using the Schrödinger equation (CATS)

    Science.gov (United States)

    Mihaylov, D. L.; Mantovani Sarti, V.; Arnold, O. W.; Fabbietti, L.; Hohlweger, B.; Mathis, A. M.

    2018-05-01

    We present a new analysis framework called "Correlation Analysis Tool using the Schrödinger equation" (CATS) which computes the two-particle femtoscopy correlation function C(k), with k being the relative momentum for the particle pair. Any local interaction potential and emission source function can be used as an input and the wave function is evaluated exactly. In this paper we present a study on the sensitivity of C(k) to the interaction potential for different particle pairs: p-p, p-Λ, K⁻-p, K⁺-p, p-Ξ⁻ and Λ-Λ. For p-p, the Argonne v18 and Reid Soft-Core potentials have been tested. For the other pair systems we present results based on strong potentials obtained from effective Lagrangians, such as χEFT for p-Λ, Jülich models for K(K̄)-N and Nijmegen models for Λ-Λ. For the p-Ξ⁻ pairs we employ the latest lattice results from the HAL QCD collaboration. Our detailed study of different interacting particle pairs as a function of the source size and different potentials shows that femtoscopic measurements can be exploited in order to constrain the final-state interactions among hadrons. In particular, small collision systems of the order of 1 fm, as produced in pp collisions at the LHC, seem to provide a suitable environment for quantitative studies of this kind.

  12. Conditional Correlation Models of Autoregressive Conditional Heteroskedasticity with Nonstationary GARCH Equations

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique to determine the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time independent or to vary over time. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange.
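    The decomposition in which the conditional variance is the product of a smooth nonstationary long-run component and a short-run GARCH component can be sketched numerically. The logistic shape of the long-run component g(t) and all parameter values below are illustrative assumptions of this sketch, not the authors' estimates or their test-based specification procedure.

```python
import math
import random

random.seed(42)

T = 2000
omega, alpha, beta = 0.05, 0.08, 0.90          # hypothetical GARCH(1,1) parameters

def g(t):
    # Smooth logistic shift in the long-run (unconditional) variance level.
    return 1.0 + 1.0 / (1.0 + math.exp(-10.0 * (t / T - 0.5)))

# Total conditional variance at time t is g(t) * h_t; the short-run GARCH
# recursion is driven by returns rescaled by the long-run component.
h, returns, variances = 1.0, [], []
for t in range(T):
    eps = random.gauss(0.0, 1.0)
    r = math.sqrt(h * g(t)) * eps
    returns.append(r)
    variances.append(h * g(t))
    h = omega + alpha * (r * r) / g(t) + beta * h

early = sum(variances[:200]) / 200
late = sum(variances[-200:]) / 200
print(round(early, 2), round(late, 2))  # long-run variance level roughly doubles
```

Because the squared return is divided by g(t) inside the recursion, the short-run component h stays stationary while the total variance tracks the smooth long-run shift, which is the multiplicative structure the abstract describes.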

  13. Analysis of Piscirickettsia salmonis Metabolism Using Genome-Scale Reconstruction, Modeling, and Testing

    Directory of Open Access Journals (Sweden)

    María P. Cortés

    2017-12-01

    Full Text Available Piscirickettsia salmonis is an intracellular bacterial fish pathogen that causes piscirickettsiosis, a disease with a highly adverse impact on the Chilean salmon farming industry. The development of effective treatment and control methods for piscirickettsiosis is still a challenge. To meet this challenge, the number of studies on P. salmonis has grown in the last couple of years, but many aspects of the pathogen's biology are still poorly understood. Studies on its metabolism are scarce, and only recently was a metabolic model for reference strain LF-89 developed. We present a new genome-scale model for P. salmonis LF-89 with more than twice as many genes as the previous model, incorporating specific elements of the fish pathogen's metabolism. Comparative analysis with models of different bacterial pathogens revealed a lower flexibility in the P. salmonis metabolic network. Through constraint-based analysis, we determined essential metabolites required for its growth and showed that it can benefit from different carbon sources tested experimentally in new defined media. We also built an additional model for strain A1-15972, and together with an analysis of the P. salmonis pangenome, we identified metabolic features that differentiate the two main species clades. Both models constitute a knowledge base for P. salmonis metabolism and can be used to guide the efficient culture of the pathogen and the identification of specific drug targets.

  14. Reliability Worth Analysis of Distribution Systems Using Cascade Correlation Neural Networks

    DEFF Research Database (Denmark)

    Heidari, Alireza; Agelidis, Vassilios; Pou, Josep

    2018-01-01

    Reliability worth analysis is of great importance in the area of distribution network planning and operation. The precision of the reliability worth can be affected greatly by the customer interruption cost model used. The choice of cost model can change system and load point reliability indices. In this study, a cascade correlation neural network is adopted to further develop two cost models, comprising a probabilistic distribution model and an average or aggregate model. A contingency-based analytical technique is adopted to conduct the reliability worth analysis. Furthermore, the possible effects...

  15. Correlation Analysis of Water Demand and Predictive Variables for Short-Term Forecasting Models

    Directory of Open Access Journals (Sweden)

    B. M. Brentan

    2017-01-01

    Full Text Available Operational and economic aspects of water distribution make water demand forecasting paramount for water distribution systems (WDSs) management. However, water demand introduces high levels of uncertainty in WDS hydraulic models. As a result, there is growing interest in developing accurate methodologies for water demand forecasting. Several mathematical models can serve this purpose. One crucial aspect is the use of suitable predictive variables. The most used predictive variables involve weather and social aspects. To improve knowledge of the interrelations between water demand and various predictive variables, this study applies three algorithms, namely, classical Principal Component Analysis (PCA) and powerful machine-learning algorithms such as Self-Organizing Maps (SOMs) and Random Forest (RF). We show that these last algorithms help corroborate the results found by PCA, while also unveiling features hidden from PCA, due to their ability to cope with nonlinearities. This paper presents a correlation study of three district metered areas (DMAs) from Franca, a Brazilian city, exploring weather and social variables to improve the knowledge of residential demand for water. For the three DMAs, temperature, relative humidity, and hour of the day appear to be the most important predictive variables to build an accurate regression model.

  16. Natural Frequency Testing and Model Correlation of Rocket Engine Structures in Liquid Hydrogen - Phase I, Cantilever Beam

    Science.gov (United States)

    Brown, Andrew M.; DeLessio, Jennifer L.; Jacobs, Preston W.

    2018-01-01

    Many structures in the launch vehicle industry operate in liquid hydrogen (LH2), from the hydrogen fuel tanks through the ducts and valves and into the pump sides of the turbopumps. Calculating the structural dynamic response of these structures is critical for successful qualification of this hardware, but accurate knowledge of the natural frequencies is based entirely on numerical or analytical predictions of frequency reduction due to the added-fluid-mass effect because testing in LH2 has always been considered too difficult and dangerous. This fluid effect is predicted to be approximately 4-5% using analytical formulations for simple cantilever beams. As part of a comprehensive test/analysis program to more accurately assess pump inducers operating in LH2, a series of frequency tests in LH2 were performed at NASA/Marshall Space Flight Center's unique cryogenic test facility. These frequency tests are coupled with modal tests in air and water to provide critical information not only on the mass effect of LH2, but also the cryogenic temperature effect on Young's Modulus for which the data is not extensive. The authors are unaware of any other reported natural frequency testing in this media. In addition to the inducer, a simple cantilever beam was also tested in the tank to provide a more easily modeled geometry as well as one that has an analytical solution for the mass effect. This data will prove critical for accurate structural dynamic analysis of these structures, which operate in a highly-dynamic environment.

  17. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five-year data from 1998-2002 on average humidity, rainfall, and maximum and minimum temperatures, respectively. Relationships for regression analysis time series (RATS) were developed for determining the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination, as a measure of goodness of fit, for our polynomial regression analysis time series (PRATS). The correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on five-year data of rainfall and humidity, respectively, which showed that the variances in the rainfall data were not homogeneous, while those for humidity were. Our results on regression and regression analysis time series show the best fit for prediction modeling on climatic data of Quetta, Pakistan. (author)
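    The polynomial-regression-with-R² step described above can be sketched with the standard-library tools alone: fit a low-order polynomial by least squares (normal equations) and report the coefficient of determination as goodness of fit. The monthly temperature figures below are hypothetical, not the Quetta records.

```python
def polyfit(xs, ys, degree):
    # Least-squares polynomial fit via normal equations and Gaussian
    # elimination with partial pivoting.
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                        # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):                # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def r_squared(xs, ys, coef):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

months = list(range(1, 13))
temps = [4, 6, 12, 18, 24, 29, 31, 30, 25, 18, 10, 5]   # hypothetical deg C
coef = polyfit(months, temps, degree=2)
print(round(r_squared(months, temps, coef), 3))
```

A quadratic captures the seasonal hump well here; in practice the degree would be chosen by comparing R² (or an adjusted criterion) across candidate orders.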

  18. Correlation Functions in Holographic Minimal Models

    CERN Document Server

    Papadodimas, Kyriakos

    2012-01-01

    We compute exact three and four point functions in the W_N minimal models that were recently conjectured to be dual to a higher spin theory in AdS_3. The boundary theory has a large number of light operators that are not only invisible in the bulk but grow exponentially with N even at small conformal dimensions. Nevertheless, we provide evidence that this theory can be understood in a 1/N expansion since our correlators look like free-field correlators corrected by a power series in 1/N . However, on examining these corrections we find that the four point function of the two bulk scalar fields is corrected at leading order in 1/N through the contribution of one of the additional light operators in an OPE channel. This suggests that, to correctly reproduce even tree-level correlators on the boundary, the bulk theory needs to be modified by the inclusion of additional fields. As a technical by-product of our analysis, we describe two separate methods -- including a Coulomb gas type free-field formalism -- that ...

  19. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    Science.gov (United States)

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  20. Correlation between audiovestibular function tests and hearing outcomes in severe to profound sudden sensorineural hearing loss.

    Science.gov (United States)

    Wang, Chi-Te; Huang, Tsung-Wei; Kuo, Shih-Wei; Cheng, Po-Wen

    2009-02-01

    This study investigated whether audiovestibular function tests, namely auditory brain stem response (ABR) and vestibular-evoked myogenic potential (VEMP) tests, were correlated with hearing outcomes after controlling for the effects of other potential confounding factors in severe to profound sudden sensorineural hearing loss (SSHL). Eighty-eight patients with severe to profound SSHL were enrolled in this study. Pretreatment hearing levels, results of audiovestibular function tests, and final hearing outcomes were recorded from retrospective chart reviews. Other factors, including age, gender, delay of treatment, vertigo, diabetes mellitus, and hypertension, were collected as well. Comparative analysis between multiple variables and hearing outcomes was conducted using the cumulative logits model in the overall subjects. Further, multivariate analysis of prognostic factors was conducted in the stratified groups of severe (70 to 90 dB HL) and profound (>90 dB HL) SSHL. Multivariate analysis showed that pretreatment hearing levels, presence of vertigo, and results of ABR and VEMP testing were significant outcome predictors in the overall subjects. Stratification analysis demonstrated that the presence of both ABR and VEMP waveforms was significantly correlated with better hearing outcomes in the group of severe SSHL [ABR: adjusted odds ratio (aOR) = 14.7, 95% confidence interval (CI) = 1.78 to 122, p = 0.01; VEMP: aOR = 5.91, 95% CI = 1.18 to 29.5, p = 0.03], whereas the presence of vertigo was the only significant negative prognostic factor in the group of profound SSHL (aOR = 0.24, 95% CI = 0.06 to 0.95, p = 0.04). Other variables, including age, gender, diabetes mellitus, hypertension, and delay of treatment, were not significantly related to hearing outcomes in either group (p > 0.05). A predictive hearing recovery table with the combined ABR and VEMP results was proposed for the group of severe SSHL. ABR and VEMP tests should be included in the battery of neurootological examinations in

  1. Analysis of the Correlation between GDP and the Final Consumption

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-09-01

    Full Text Available This paper presents the results of the author's research regarding the evolution of Gross Domestic Product. One of the main aspects of GDP analysis is its correlation with final consumption, an important macroeconomic indicator. The evolution of the Gross Domestic Product is highly influenced by the evolution of final consumption. To analyze the correlation, the paper proposes the use of the linear regression model, one of the most appropriate instruments for such a scientific approach. The regression model described in the article uses GDP as the resultant variable and final consumption as the factorial variable.
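    The simple linear regression described, with GDP as the resultant variable and final consumption as the factorial variable, can be sketched with ordinary least squares. All figures below are hypothetical illustrations, not official statistics.

```python
# Hypothetical annual series (currency units are arbitrary).
consumption = [310.0, 325.0, 342.0, 360.0, 381.0, 400.0]
gdp         = [500.0, 521.0, 548.0, 574.0, 606.0, 633.0]

n = len(gdp)
mx = sum(consumption) / n
my = sum(gdp) / n

# OLS slope and intercept for GDP = a + b * consumption.
b = (sum((x - mx) * (y - my) for x, y in zip(consumption, gdp))
     / sum((x - mx) ** 2 for x in consumption))
a = my - b * mx

print(round(a, 2), round(b, 3))  # intercept and marginal effect of consumption
```

The slope b estimates how much GDP changes per unit change in final consumption; a fuller analysis would also report R² and test the slope's significance.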

  2. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
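    The within-cluster resampling procedure evaluated above can be illustrated with a toy calculation: repeatedly draw one observation per cluster, compute the estimate on each resampled data set, and average over replicates, so that large clusters do not dominate when cluster size is informative. The litter data below are hypothetical, loosely echoing the developmental toxicity setting, and the estimand is a simple mean rather than a mixed-model covariate effect.

```python
import random

random.seed(7)

# Hypothetical litters whose response rate is related to litter size
# (an informative-cluster-size situation).
clusters = {
    "litter_a": [1, 1, 0, 1, 1, 1, 1, 1],   # large litter, high response
    "litter_b": [1, 0, 1, 1, 1, 0],
    "litter_c": [0, 1, 0],
    "litter_d": [0, 0],                     # small litter, low response
}

all_obs = [y for ys in clusters.values() for y in ys]
naive_mean = sum(all_obs) / len(all_obs)    # weights big litters more heavily

# Within-cluster resampling: one observation per cluster, many replicates.
Q = 2000
wcr_estimates = []
for _ in range(Q):
    resample = [random.choice(ys) for ys in clusters.values()]
    wcr_estimates.append(sum(resample) / len(resample))
wcr_mean = sum(wcr_estimates) / Q           # each litter contributes equally

print(round(naive_mean, 3), round(wcr_mean, 3))
```

Because response here rises with litter size, the naive pooled mean exceeds the within-cluster resampling estimate, which targets the cluster-level (equal-weight) mean; this is the discrepancy that makes informative cluster size a problem for naive analyses.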

  3. Timescale Correlation between Marine Atmospheric Exposure and Accelerated Corrosion Testing

    Science.gov (United States)

    Montgomery, Eliza L.; Calle, Luz Marina; Curran, Jerone C.; Kolody, Mark R.

    2011-01-01

    Evaluation of metal-based structures has long relied on atmospheric exposure test sites to determine corrosion resistance in marine environments. Traditional accelerated corrosion testing relies on mimicking the exposure conditions, often incorporating salt spray and ultraviolet (UV) radiation, and exposing the metal to continuous or cyclic conditions of the corrosive environment. Their success for correlation to atmospheric exposure is often a concern when determining the timescale to which the accelerated tests can be related. Accelerated laboratory testing, which often focuses on the electrochemical reactions that occur during corrosion conditions, has yet to be universally accepted as a useful tool in predicting the long term service life of a metal despite its ability to rapidly induce corrosion. Although visual and mass loss methods of evaluating corrosion are the standard and their use is imperative, a method that correlates timescales from atmospheric exposure to accelerated testing would be very valuable. This work uses surface chemistry to interpret the chemical changes occurring on low carbon steel during atmospheric and accelerated corrosion conditions with the objective of finding a correlation between its accelerated and long-term corrosion performance. The current results of correlating data from marine atmospheric exposure conditions at the Kennedy Space Center beachside corrosion test site, alternating seawater spray, and immersion in typical electrochemical laboratory conditions, will be presented. Key words: atmospheric exposure, accelerated corrosion testing, alternating seawater spray, marine, correlation, seawater, carbon steel, long-term corrosion performance prediction, X-ray photoelectron spectroscopy.

  4. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The tests are very similar to the series of tests made for the COVA experimental program, but the vessel here is the prestressed concrete container. (orig.)

  5. Correlation of same-visit HbA1c test with laboratory-based measurements: A MetroNet study

    Directory of Open Access Journals (Sweden)

    West Patricia A

    2005-07-01

    Full Text Available Abstract. Background: Glycated hemoglobin (HbA1c) results vary by analytical method. Use of a same-visit HbA1c testing methodology holds the promise of more efficient patient care and improved diabetes management. Our objective was to test the feasibility of introducing a same-visit HbA1c methodology into busy family practice centers (FPC) and to calculate the correlation between the same-visit HbA1c test and the laboratory method that the clinical site was currently using for HbA1c testing. Methods: Consecutive diabetic patients 18 years of age and older having blood samples drawn for routine laboratory analysis of HbA1c were asked to provide a capillary blood sample for same-visit testing with the BIO-RAD Micromat II. We compared the results of the same-visit test to three different laboratory methods (one FPC used two different laboratories). Results: 147 paired samples were available for analysis (73 from one FPC; 74 from the other). The Pearson correlation of Micromat II and ion-exchange HPLC was 0.713 (p ...). Conclusion: For each of the laboratory methods, the correlation coefficient was lower than the 0.96 reported by the manufacturer. This might be due to variability introduced by the multiple users of the Micromat II machine. The mean HbA1c results were also consistently lower than those obtained from laboratory analysis. Additionally, the amount of dedicated time required to perform the assay may limit its usefulness in a busy clinical practice. Before introducing a same-visit HbA1c methodology, clinicians should compare the rapid results to their current method of analysis.

  6. Correlation between HRCT and pulmonary functional tests in cystic fibrosis

    International Nuclear Information System (INIS)

    Mastellari, Paola; Biggi, Simona; Lombardi, Alfonsa; Zompatori, Maurizio; Grzincich, Gianluigi; Pisi, Giovanna; Spaggiari, Cinzia

    2005-01-01

    Purpose. To compare the HRCT score by Oikonottlou and air trapping in expiratory scans with pulmonary function tests, and to evaluate which radiological criteria are most useful for predicting clinical impairment. Materials and methods. From January to September 2003, pulmonary HRCT was performed in 37 patients (23 males), aged between 7 and 41 years, with cystic fibrosis. On the same day as the CT examination they also received a complete functional evaluation. HRCT studies were evaluated by three radiologists blinded to the clinical data and were correlated with the lung function tests. Results. We obtained a high correlation (p=0.01) between two of the HRCT signs (extent of mucus plugging and mosaic perfusion pattern) and all function tests. Discussion. Previous studies have demonstrated good correlation between lung function tests, in particular FEV1, and HRCT signs. Our study differed from previous ones in that we analysed the correlation between lung function tests and both single and combined CT criteria. Conclusion. Our results suggest that a simplified HRCT score could be useful for evaluating patients with cystic fibrosis.

  7. Hypervapotron flow testing with rapid prototype models

    International Nuclear Information System (INIS)

    Driemeyer, D.; Hellwig, T.; Kubik, D.; Langenderfer, E.; Mantz, H.; McSmith, M.; Jones, B.; Butler, J.

    1995-01-01

    A flow test model of the inlet section of a three-channel hypervapotron plate, proposed as a heat sink in the ITER divertor, was prepared using a rapid-prototyping stereolithography process that is widely used for component development in US industry. An existing water flow loop at the University of Illinois is being used for isothermal flow tests to collect pressure drop data for comparison with proposed vapotron friction factor correlations. Differential pressure measurements are taken across the test section inlet manifold, the vapotron channel (about a seven-inch length), the outlet manifold, and from inlet to outlet. The differential pressures are currently measured with manometers. Tests were conducted at flow velocities from 1-10 m/s to cover the full range of ITER interest. A tap was also added for a small hypodermic needle to inject dye into the flow channel at several positions to examine the nature of the developing flow field at the entrance to the vapotron section. Follow-on flow tests are planned using a model with adjustable flow channel dimensions to permit more extensive pressure drop data to be collected. This information will be used to update vapotron design correlations for ITER.

  8. Generalized canonical correlation analysis with missing values

    NARCIS (Netherlands)

    M. van de Velden (Michel); Y. Takane

    2012-01-01

    Generalized canonical correlation analysis is a versatile technique that allows the joint analysis of several sets of data matrices. The generalized canonical correlation analysis solution can be obtained through an eigenequation, and distributional assumptions are not required. When

  9. Nonlinear Analysis and Preliminary Testing Results of a Hybrid Wing Body Center Section Test Article

    Science.gov (United States)

    Przekop, Adam; Jegley, Dawn C.; Rouse, Marshall; Lovejoy, Andrew E.; Wu, Hsi-Yung T.

    2015-01-01

    A large test article was recently designed, analyzed, fabricated, and successfully tested up to the representative design ultimate loads to demonstrate that stiffened composite panels with through-the-thickness reinforcement are a viable option for the next generation large transport category aircraft, including non-conventional configurations such as the hybrid wing body. This paper focuses on finite element analysis and test data correlation of the hybrid wing body center section test article under mechanical, pressure and combined load conditions. Good agreement between predictive nonlinear finite element analysis and test data is found. Results indicate that a geometrically nonlinear analysis is needed to accurately capture the behavior of the non-circular pressurized and highly-stressed structure when the design approach permits local buckling.

  10. Correlates of HIV testing refusal among emergency department patients in the opt-out testing era.

    Science.gov (United States)

    Setse, Rosanna W; Maxwell, Celia J

    2014-05-01

    Opt-out HIV screening is recommended by the CDC for patients in all healthcare settings. We examined correlates of HIV testing refusal among urban emergency department (ED) patients. Confidential free HIV screening was offered to 32,633 ED patients in an urban tertiary care facility in Washington, DC, during May 2007-December 2011. Demographic differences in testing refusals were examined using χ² tests and generalized linear models. HIV testing refusal rates were 47.7 % (95 % CI 46.7-48.7), 11.7 % (11.0-12.4), 10.7 % (10.0-11.4), 16.9 % (15.9-17.9) and 26.9 % (25.6-28.2) in 2007, 2008, 2009, 2010 and 2011 respectively. Persons 33-54 years of age [adjusted prevalence ratio (APR) 1.42 (1.36-1.48)] and those ≥ 55 years [APR 1.39 (1.31-1.47)], versus younger patients; and females versus males [APR 1.07 (1.02-1.11)] were more likely to refuse testing. Opt-out HIV testing is feasible and sustainable in urban ED settings. Efforts are needed to encourage testing among older patients and women.
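    The adjusted prevalence ratios above come from generalized linear models; the crude (unadjusted) version with a Wald confidence interval on the log scale can be sketched directly. The counts below are hypothetical, not the study's data:

```python
import numpy as np

def prevalence_ratio(a, n1, b, n0):
    """Crude prevalence ratio (p1/p0) with a 95% Wald CI on the log scale.

    a refused out of n1 in the index group; b refused out of n0 in the reference group.
    """
    pr = (a / n1) / (b / n0)
    se = np.sqrt(1.0 / a - 1.0 / n1 + 1.0 / b - 1.0 / n0)
    return pr, pr * np.exp(-1.96 * se), pr * np.exp(1.96 * se)

# Hypothetical counts: 400/1000 older patients refused vs. 300/1000 younger patients
pr, lo, hi = prevalence_ratio(400, 1000, 300, 1000)
```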

  11. Finite Element Analysis and Test Results Comparison for the Hybrid Wing Body Center Section Test Article

    Science.gov (United States)

    Przekop, Adam; Jegley, Dawn C.; Rouse, Marshall; Lovejoy, Andrew E.

    2016-01-01

    This report documents the comparison of test measurements and predictive finite element analysis results for a hybrid wing body center section test article. The testing and analysis efforts were part of the Airframe Technology subproject within the NASA Environmentally Responsible Aviation project. Test results include full field displacement measurements obtained from digital image correlation systems and discrete strain measurements obtained using both unidirectional and rosette resistive gauges. Most significant results are presented for the critical five load cases exercised during the test. Final test to failure after inflicting severe damage to the test article is also documented. Overall, good comparison between predicted and actual behavior of the test article is found.

  12. Estimation of Genetic Parameters for First Lactation Monthly Test-day Milk Yields using Random Regression Test Day Model in Karan Fries Cattle

    Directory of Open Access Journals (Sweden)

    Ajay Singh

    2016-06-01

    A single-trait linear mixed random regression test-day model was applied for the first time to analyze first-lactation monthly test-day milk yield records in Karan Fries cattle. The test-day milk yield data were modeled using a random regression model (RRM) considering different orders of Legendre polynomial for the additive genetic effect (4th order) and the permanent environmental effect (5th order). Data pertaining to 1,583 lactation records spread over a period of 30 years were recorded and analyzed in the study. The variance components, heritability and genetic correlations among test-day milk yields were estimated using the RRM. RRM heritability estimates of test-day milk yield varied from 0.11 to 0.22 across test-day records. The estimates of genetic correlations between different test-day milk yields ranged from 0.01 (between test-day 1 [TD-1] and TD-11) to 0.99 (between TD-4 and TD-5). The magnitude of the genetic correlations between test-day milk yields decreased as the interval between test-days increased, and adjacent test-days had higher correlations. Additive genetic and permanent environment variances were higher for test-day milk yields at both ends of lactation. The residual variance was observed to be lower than the permanent environment variance for all the test-day milk yields.
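    In a random regression test-day model, each animal's lactation curve is modeled with Legendre polynomial covariates of days in milk. A minimal sketch of building such a basis in Python; the test-day spacing and standardization range are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(days, order, dmin=5, dmax=305):
    """Legendre polynomial covariates on days-in-milk standardized to [-1, 1]."""
    t = -1.0 + 2.0 * (np.asarray(days, float) - dmin) / (dmax - dmin)
    return legendre.legvander(t, order)  # column j holds P_j(t)

test_days = np.arange(5, 306, 30)       # 11 roughly monthly test days
Z = legendre_basis(test_days, order=4)  # 4th-order basis (additive genetic effect)
```

    Column j of Z would multiply the j-th random regression coefficient of an animal; a 5th-order basis would serve the permanent environmental effect in the same way.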

  13. The effects of common risk factors on stock returns: A detrended cross-correlation analysis

    Science.gov (United States)

    Ruan, Qingsong; Yang, Bingchan

    2017-10-01

    In this paper, we investigate the cross-correlations between the Fama and French three factors and the returns of American industries on the basis of a cross-correlation statistical test and multifractal detrended cross-correlation analysis (MF-DCCA). Qualitatively, we find that the return series of the Fama and French three factors and American industries are overall significantly cross-correlated based on the test statistic. Quantitatively, we find that the cross-correlations between the three factors and the returns of American industries are strongly multifractal, and applying MF-DCCA we also investigate the cross-correlation of industry returns and residuals. We find that there exists multifractality in industry returns and residuals. From the correlation coefficients we can verify that factors other than the Fama three factors influence industry returns.
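    The DCCA family of methods integrates each series, detrends it window by window, and correlates the residuals. A minimal single-scale detrended cross-correlation coefficient (ρ_DCCA), a simplified cousin of the multifractal version used in the paper, might look like this:

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation coefficient rho_DCCA at one window size."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # integrated profiles
    n = len(X)
    covs, varx, vary = [], [], []
    for start in range(0, n - scale + 1, scale):
        t = np.arange(scale)
        xs, ys = X[start:start + scale], Y[start:start + scale]
        # linear detrend within the window
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(rx * ry))
        varx.append(np.mean(rx * rx))
        vary.append(np.mean(ry * ry))
    return float(np.mean(covs) / np.sqrt(np.mean(varx) * np.mean(vary)))

# Synthetic, strongly coupled series (illustrative only)
rng = np.random.default_rng(7)
x = rng.normal(size=1000)
y = x + 0.2 * rng.normal(size=1000)
rho = dcca_coefficient(x, y, scale=25)
```

    The multifractal extension replaces the plain average of window covariances with q-th order moments over a range of scales.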

  14. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  15. Analysis of in-R12 CHF data: influence of hydraulic diameter and heating length; test of Weisman boiling crisis model

    International Nuclear Information System (INIS)

    Czop, V.; Herer, C.; Souyri, A.; Garnier, J.

    1993-09-01

    In order to progress on the comprehensive modelling of the boiling crisis phenomenon, Electricite de France (EDF), Commissariat a l'Energie Atomique (CEA) and FRAMATOME have set up experimental programs involving in-R12 tests: the EDF APHRODITE program and the CEA-EDF-FRAMATOME DEBORA program. The first phase of these programs aims to acquire critical heat flux (CHF) data banks, within large thermal-hydraulic parameter ranges, in both cylindrical and annular configurations, and with different hydraulic diameters and heating lengths. In all, three data banks were considered in the analysis, all of them concerning in-R12 round tube tests: - the APHRODITE data bank, obtained at EDF with a 13 mm inside diameter; - the DEBORA data bank, obtained at CEA with a 19.2 mm inside diameter; - the KRISTA data bank, obtained at KfK with an 8 mm inside diameter. The analysis was conducted using CHF correlations and with the help of an advanced mathematical tool using pseudo-cubic thin-plate spline functions. Two conclusions were drawn: - no influence of the heating length on our CHF results; - the influence of the diameter on the CHF cannot be simply expressed by an exponential function of this parameter, as thermal-hydraulic parameters also have an influence. Some calculations with the Weisman and Pei theoretical boiling crisis model have been compared to experimental values: fairly good agreement was obtained, but further study must focus on improving the modelling of the influence of pressure and mass velocity. (authors). 12 figs., 4 tabs., 21 refs

  16. Intermittency analysis of correlated data

    International Nuclear Information System (INIS)

    Wosiek, B.

    1992-01-01

    We describe a method for analyzing the dependence of the factorial moments on the bin size in which the correlations between moments computed for different bin sizes are taken into account. For large-multiplicity nucleus-nucleus data, inclusion of the correlations does not change the values of the slope parameter, but gives errors significantly reduced compared to the case of fits with no correlations. (author)

  17. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    In this study, a series of systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) is analyzed in order to determine the uncertainties. Experiments, measurements and calculations conducted in the framework of mathematical and physical rules for the solution of engineering problems include uncertainty. To assess the reliability of the obtained values, the existing uncertainties should be expressed as quantities. If the uncertainty of a measurement system is not known, the results do not carry a universal value. On the other hand, resistance is one of the most important parameters that should be considered in the process of ship design. Ship resistance during the design phase cannot be determined precisely and reliably because of the uncertainty sources involved in determining the resistance value. This may make it harder to meet the required specifications in later design steps. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 regulations on uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.
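    At the core of both ITTC uncertainty procedures is the GUM-style root-sum-square combination of elemental standard uncertainty components, expanded by a coverage factor. A minimal sketch; the component names and values below are illustrative, not from the laboratory:

```python
import numpy as np

def expanded_uncertainty(components, k=2.0):
    """Combine standard uncertainty components by root-sum-square, then expand.

    Generic GUM/ITTC pattern: u_c = sqrt(sum u_i^2), U = k * u_c.
    """
    u = np.asarray(components, dtype=float)
    uc = float(np.sqrt(np.sum(u ** 2)))
    return uc, k * uc

# Hypothetical standard uncertainties in a resistance measurement (% of total resistance):
# load-cell calibration, carriage speed, water temperature/density, model geometry
uc, U = expanded_uncertainty([0.30, 0.25, 0.10, 0.15], k=2.0)
```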

  18. Active earth pressure model tests versus finite element analysis

    Science.gov (United States)

    Pietrzak, Magdalena

    2017-06-01

    The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction away from the soil (active earth pressure state). The simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.

  19. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    Science.gov (United States)

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test (LRT) statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why different covariates may be identified for the same drug in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
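    The selection pitfall described above can be reproduced with ordinary least squares in place of a full population pharmacokinetic model: generate clearance from weight only, then compare univariate models by AIC against a highly correlated surrogate covariate. All names and values below are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
weight = rng.normal(70, 12, n)                 # true covariate
bsa = 0.02 * weight + rng.normal(0, 0.05, n)   # highly correlated surrogate
cl = 0.1 * weight + rng.normal(0, 2.0, n)      # clearance generated from weight only

def ols_aic(y, x):
    """AIC of a univariate OLS model y = b0 + b1*x (Gaussian likelihood, 3 parameters)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)               # MLE of the residual variance
    m = len(y)
    loglik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * 3 - 2 * loglik

aic_weight = ols_aic(cl, weight)
aic_bsa = ols_aic(cl, bsa)
```

    Over repeated simulated data sets, the surrogate occasionally yields the smaller AIC purely through its correlation with the true covariate, which is the phenomenon the paper quantifies.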

  20. Dynamics analysis of SIR epidemic model with correlation coefficients and clustering coefficient in networks.

    Science.gov (United States)

    Zhang, Juping; Yang, Chan; Jin, Zhen; Li, Jia

    2018-07-14

    In this paper, the correlation coefficients between nodes in states are used as dynamic variables, and we construct SIR epidemic dynamic models with correlation coefficients by using the pair approximation method in static networks and dynamic networks, respectively. Considering the clustering coefficient of the network, we analytically investigate the existence and the local asymptotic stability of each equilibrium of these models and derive threshold values for the prevalence of diseases. Additionally, we obtain two equivalent epidemic thresholds in dynamic networks, which are compared with the results of the mean field equations. Copyright © 2018 Elsevier Ltd. All rights reserved.
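    The pair-approximation models in the paper refine the classical mean-field SIR equations, which already exhibit the threshold behavior the authors analyze. A minimal forward-Euler sketch of the mean-field baseline; parameter values are illustrative:

```python
def sir_final_state(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Forward-Euler integration of the mean-field SIR equations."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(steps):
        new_inf = beta * s * i   # transmission term
        rec = gamma * i          # recovery term
        s -= dt * new_inf
        i += dt * (new_inf - rec)
        r += dt * rec
    return s, i, r

# Above threshold (R0 = beta/gamma = 2.5): a large outbreak
s_hi, i_hi, r_hi = sir_final_state(beta=0.5, gamma=0.2)
# Below threshold (R0 = 0.5): the epidemic dies out
s_lo, i_lo, r_lo = sir_final_state(beta=0.1, gamma=0.2)
```

    The pair approximation replaces the product s*i with explicitly tracked pair (edge) densities, which is where the correlation and clustering coefficients of the network enter.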

  1. Analysis of oligonucleotide array experiments with repeated measures using mixed models

    Directory of Open Access Journals (Sweden)

    Getchell Thomas V

    2004-12-01

    Abstract. Background: Two-or-more-factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (patients with Alzheimer's disease) or absence (control) of the disease, and brain region, either olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results: In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure for controlling the false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference (LSD) test procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions: A linear mixed model is appropriate for the analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER.
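    The Benjamini-Hochberg step-up procedure referenced above is short enough to sketch directly; the p-values below are illustrative:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q (BH step-up)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # largest k with p_(k) <= (k/m) * q; reject that p-value and all smaller ones
    thresh = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
mask = benjamini_hochberg(pvals, q=0.05)
```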

  2. Application of Computational Fluid Dynamics (CFD) in transonic wind-tunnel/flight-test correlation

    Science.gov (United States)

    Murman, E. M.

    1982-01-01

    The capability for calculating transonic flows for realistic configurations and conditions is discussed. Various phenomena that were modeled are shown to have influences of the same order of magnitude on the predicted results. It is concluded that CFD can make the following contributions to the task of correlating wind tunnel and flight test data: some effects of geometry differences and aeroelastic distortion can be predicted; tunnel wall effects can be assessed and corrected for; and the effects of model support systems and free-stream nonuniformities can be modeled.

  3. Tensile properties of modified 9Cr-1Mo steel by shear punch testing and correlation with microstructures

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, V., E-mail: karthik@igcar.gov.in [Metallurgy and Materials Group, Indira Gandhi Centre for Atomic Research, Kalpakkam, Tamil Nadu 603102 (India); Laha, K.; Parameswaran, P.; Chandravathi, K.S.; Kasiviswanathan, K.V.; Jayakumar, T.; Raj, Baldev [Metallurgy and Materials Group, Indira Gandhi Centre for Atomic Research, Kalpakkam, Tamil Nadu 603102 (India)

    2011-10-15

    Modified 9Cr-1Mo ferritic steel (P91) is subjected to a series of heat treatments consisting of soaking for 5 min at selected temperatures in the range 973 K-1623 K (below Ac1 to above Ac4), followed by oil quenching and tempering at 1033 K for 1 h, to obtain different microstructural conditions. The tensile properties of the different microstructural conditions are evaluated from small volumes of material by the shear punch test technique. A new methodology for evaluating yield strength, ultimate tensile strength and strain hardening exponent from the shear punch test by using correlation equations without employing empirical constants is presented and validated. The changes in the tensile properties are related to the microstructural changes of the steel investigated by electron microscopic studies. The steel exhibits minimum strength and hardness when soaked between Ac1 and Ac3 (intercritical range) temperatures due to the replacement of the original lath martensitic structure with subgrains. The finer martensitic microstructure produced in the steel after soaking at temperatures above Ac3 leads to a monotonic increase in hardness and strength with decreasing strain hardening exponent. For soaking temperatures above Ac4, the hardness and strength of the steel increase marginally due to the formation of soft δ ferrite. - Highlights: > A methodology presented for computing tensile properties from shear punch test. > UTS and strain hardening estimated using extended analysis of blanking models. > The analysis methodology validated for different heat treated 9Cr-1Mo steel. > Changes in tensile properties of steel correlated with microstructures.

  4. A meta-analysis of perceptual and cognitive functions involved in useful-field-of-view test performance.

    Science.gov (United States)

    Woutersen, Karlijn; Guadron, Leslie; van den Berg, Albert V; Boonstra, F Nienke; Theelen, Thomas; Goossens, Jeroen

    2017-12-01

    The useful-field-of-view (UFOV) test measures the amount of information someone can extract from a visual scene in one glance. Its scores show relatively strong relationships with everyday activities. The UFOV test consists of three computer tests, suggested to measure processing speed and central vision, divided attention, and selective attention. However, other functions seem to be involved as well. In order to investigate the contribution of these suggested and other perceptual and cognitive functions, we performed a meta-analysis of 116 Pearson's correlation coefficients between UFOV scores and other test scores reported in 18 peer-reviewed articles. We divided these correlations into nine domains: attention, executive functioning, general cognition, memory, spatial ability, visual closure, contrast sensitivity, visual processing speed, and visual acuity. A multivariate mixed-effects model analysis revealed that each domain correlated significantly with each of the UFOV subtest scores. These correlations were stronger for Subtests 2 and 3 than for Subtest 1. Furthermore, some domains were more strongly correlated to the UFOV than others across subtests. We did not find interaction effects between subtest and domain, indicating that none of the UFOV subtests is more selectively sensitive to a particular domain than the others. Thus, none of the three UFOV subtests seem to measure one clear construct. Instead, a range of visual and cognitive functions is involved. Perhaps this is the reason for the UFOV's high ecological validity, as it involves many functions at once, making it harder to compensate if one of them fails.

  5. Infrared imaging systems: Design, analysis, modeling, and testing; Proceedings of the Meeting, Orlando, FL, Apr. 16-18, 1990

    Science.gov (United States)

    Holst, Gerald C.

    1990-10-01

    Recent experimental and theoretical investigations in IR system design, analysis, and modeling are examined in reports and reviews. Topics discussed are modeling second-generation thermal imaging systems, performance improvement of an IR imaging system using subsystem MTF analysis, human recognition of IR images, spatial frequency performance of SPRITE detectors, and optimization of Schottky-barrier IR detectors for solar blindness. IR system testing is also considered, focusing on such problems as tolerancing methodology for an IR optical telescope, system response function approach to minimize IR testing errors, and portable MRTD collimator system for fast in situ testing of FLIRs and other thermal imagers.

  6. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
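    The permutation approach described above (simulating the null distribution of the largest-magnitude statistic) can be sketched as follows. The data are synthetic, with a shared latent factor inducing the correlation among test statistics that the paper is concerned with:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples, g correlated variables, two-group labels
n, g = 40, 50
labels = np.array([0] * 20 + [1] * 20)
latent = rng.normal(size=(n, 1))                   # shared factor -> correlated statistics
data = 0.8 * latent + 0.6 * rng.normal(size=(n, g))

def max_abs_stat(y, x):
    """Largest absolute two-sample mean difference across columns (a simple t proxy)."""
    return np.abs(x[y == 1].mean(axis=0) - x[y == 0].mean(axis=0)).max()

observed = max_abs_stat(labels, data)
# Null distribution of the maximum statistic via label permutation
perm = np.array([max_abs_stat(rng.permutation(labels), data) for _ in range(500)])
p_global = (1 + np.sum(perm >= observed)) / (1 + perm.size)
```

    The paper's point is that this unconditional p-value can mislead when statistics are correlated; their remedy conditions it on a measure of the spread of the observed histogram.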

  7. Generalization of proposed tendon friction correlation and its application to PCCV structural analysis

    International Nuclear Information System (INIS)

    Kashiwase, Takako; Nagasaka, Hideo

    2000-01-01

    The present paper dealt with the extension of the tendon friction coefficient correlation, proposed in the former paper, as a function of loading end load and circumferential angle. The extended correlation further included the effects of the number of strands in contact with the sheath, tendon diameter, politicization of the tendon and tendon local curvature. The validity of the correlation was confirmed against several sets of published measured data. A structural analysis of the middle cylinder part of a 1/4 PCCV (Prestressed Concrete Containment Vessel) model was conducted using the present friction coefficient correlation. The results were compared with an analysis using a constant friction coefficient, focusing on the tendon tension force distribution. (author)

  8. Experimental testing and constitutive modeling of the mechanical properties of the swine skin tissue.

    Science.gov (United States)

    Łagan, Sylwia D; Liber-Kneć, Aneta

    2017-01-01

    The aim of the study was to estimate the possibility of using hyperelastic material models to fit experimental data obtained in tensile tests of swine skin tissue. Uniaxial tensile tests of samples taken from the abdomen and back of a pig were carried out. The mechanical properties of the skin, such as the mean Young's modulus, the mean maximum stress and the mean maximum elongation, were calculated. The experimental data have been used to identify the parameters in the specific strain-energy functions of seven constitutive models of hyperelastic materials: neo-Hookean, Mooney-Rivlin, Ogden, Yeoh, Martins, Humphrey and Veronda-Westmann. An analysis of errors in the fitting of theoretical and experimental data was done. Comparison of load-displacement curves for samples from the back and abdomen regions showed different values of both the mean maximum loading force and the mean maximum elongation. Samples prepared from the abdominal area had lower values of the mean maximum load compared to samples from the spine area. The reverse trend was observed in the analysis of the values of elongation. An analysis of the accuracy of model fitting to the experimental data showed that the least accurate were the neo-Hookean model, the Mooney-Rivlin model for the abdominal region and the Veronda-Westmann model for the spine region. An analysis of the seven hyperelastic material models showed good correlations between the experimental and theoretical data for five models.
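    For the simplest of the models listed, the incompressible neo-Hookean solid, the uniaxial nominal stress is P = 2·C1·(λ − λ⁻²), which is linear in the single constant C1 and can therefore be fit in closed form. A sketch with hypothetical, skin-like data (not the study's measurements):

```python
import numpy as np

def neo_hookean_stress(lam, c1):
    """Nominal (first Piola-Kirchhoff) stress for incompressible uniaxial tension."""
    lam = np.asarray(lam, dtype=float)
    return 2.0 * c1 * (lam - lam ** -2)

def fit_c1(lam, stress):
    """Closed-form least-squares fit of the single neo-Hookean constant C1."""
    f = np.asarray(lam, float) - np.asarray(lam, float) ** -2
    return float((np.asarray(stress, float) @ f) / (2.0 * (f @ f)))

# Hypothetical skin-like data: stretches and nominal stresses (MPa)
lam = np.array([1.05, 1.10, 1.20, 1.30, 1.40])
stress = np.array([0.030, 0.061, 0.118, 0.180, 0.240])
c1 = fit_c1(lam, stress)
```

    The multi-parameter models in the study (Ogden, Yeoh, Martins, etc.) are nonlinear in their constants and would require an iterative least-squares solver instead.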

  9. Experimental analysis of a nuclear reactor prestressed concrete pressure vessels model

    International Nuclear Information System (INIS)

    Vallin, C.

    1980-01-01

    A comprehensive analysis was made of the performance of each set of sensors used to measure the strain and displacement of a 1/20-scale Prestressed Concrete Pressure Vessel (PCPV) model tested at the Instituto de Pesquisas Energeticas e Nucleares (IPEN). Among the three kinds of sensors used (strain gages, displacement transducers and load cells), the displacement transducers showed the best behavior. The displacement transducer data were statistically analysed and a linear behavior of the model was observed during the first pressurization tests. By means of a linear statistical correlation between experimental and expected theoretical data it was found that the model loses linearity at a pressure between 110-125 atm. (Author)

  10. SPACE Code Assessment for FLECHT Test

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Hyoung Kyoun; Min, Ji Hong; Park, Chan Eok; Park, Seok Jeong; Kim, Shin Whan [KEPCO E and C, Daejeon (Korea, Republic of)

    2015-10-15

    According to 10 CFR 50 Appendix K, the Emergency Core Cooling System (ECCS) performance evaluation model for LBLOCA should be based on the data of the FLECHT tests. The heat transfer coefficient (HTC) and carryout rate fraction (CRF) during the reflood period of a LBLOCA should be conservative. To develop a Mass and Energy Release (MER) methodology using the Safety and Performance Analysis CodE (SPACE), FLECHT test results were compared to the results calculated by SPACE. The FLECHT test facility was modeled to compare the reflood HTC and CRF using SPACE. Sensitivity analysis was performed with various options for the HTC correlation. Based on these results, it is concluded that the reflood HTC and CRF calculated with the COBRA-TF correlation during LBLOCA meet the requirements of 10 CFR 50 Appendix K. In this study, the analysis results using SPACE predict the heat transfer phenomena of the FLECHT tests reasonably and conservatively. The reflood HTCs for test numbers 0690, 3541 and 4225 are conservative in the reference case. In the case of test 6948, the HTC calculated using COBRA-TF is conservative in the film boiling region. All of the analysis results for CRF have sufficient conservatism. Based on these results, it is possible to apply the COBRA-TF correlation to develop a MER methodology for LBLOCA analysis using SPACE.

  11. Phenomenological analysis of angular correlations in 7 TeV proton-proton collisions from the CMS experiment

    International Nuclear Information System (INIS)

    Ray, R. L.

    2011-01-01

    A phenomenological analysis is presented of recent two-particle angular correlation data on relative pseudorapidity (η) and azimuth reported by the Compact Muon Solenoid (CMS) Collaboration for √(s)=7 TeV proton-proton collisions. The data are described with an empirical jetlike model developed for similar angular correlation measurements obtained from heavy-ion collisions at the Relativistic Heavy-Ion Collider (RHIC). The same-side (small relative azimuth), η-extended correlation structure, referred to as the ridge, is compared with three phenomenological correlation structures suggested by theoretical analysis. These include additional angular correlations due to soft gluon radiation in 2→3 partonic processes, a one-dimensional same-side correlation ridge on azimuth motivated, for example, by color-glass condensate models, and an azimuth quadrupole similar to that required to describe heavy-ion angular correlations. The quadrupole model provides the best overall description of the CMS data, including the ridge, based on χ² minimization, in agreement with previous studies. Implications of these results with respect to possible mechanisms for producing the CMS same-side correlation ridge are discussed.

  12. Buckling Test Results and Preliminary Test and Analysis Correlation from the 8-Foot-Diameter Orthogrid-Stiffened Cylinder Test Article TA02

    Science.gov (United States)

    Hilburger, Mark W.; Waters, W. Allen, Jr.; Haynie, Waddy T.; Thornburgh, Robert P.

    2017-01-01

    Results from the testing of cylinder test article SBKF-P2-CYL-TA02 (referred to herein as TA02) are presented. TA02 is an 8-foot-diameter (96-inch), 78.0-inch-long, aluminum-lithium (Al-Li), orthogrid-stiffened cylindrical shell similar to those used in current state-of-the-art launch-vehicle structures and was designed to exhibit global buckling when subjected to combined compression and bending loads. The testing was conducted at the Marshall Space Flight Center (MSFC), February 3-6, 2009, in support of the Shell Buckling Knockdown Factor (SBKF) Project. The test was used to verify the performance of a newly constructed buckling test facility at MSFC and to verify the test article design and analysis approach used by the SBKF researchers.

  13. Optical nonclassicality test based on third-order intensity correlations

    Science.gov (United States)

    Rigovacca, L.; Kolthammer, W. S.; Di Franco, C.; Kim, M. S.

    2018-03-01

    We develop a nonclassicality criterion for the interference of three delayed, but otherwise identical, light fields in a three-mode Bell interferometer. We do so by comparing the predictions of quantum mechanics with those of a classical framework in which independent sources emit electric fields with random phases. In particular, we evaluate third-order correlations among output intensities as a function of the delays, and show how the presence of a correlation revival for small delays cannot be explained by the classical model of light. The observation of a revival is thus a nonclassicality signature, which can be achieved only by sources with highly sub-Poissonian photon-number statistics. Our analysis provides strong evidence for the nonclassicality of the experiment discussed by Menssen et al. [Phys. Rev. Lett. 118, 153603 (2017), 10.1103/PhysRevLett.118.153603], and shows how a collective "triad" phase affects the interference of any three or more light fields, irrespective of their quantum or classical character.

  14. Multiparticle correlations and identical particle effects in the independent cluster emission model

    International Nuclear Information System (INIS)

    Ranft, J.

    1977-01-01

    In the nucleon approach to phenomenological applications, the model is compared with many different kinds of experimental data. The comparison indicates that the model is qualitatively consistent with all available data. Analysis indicates that identical-particle effects due to Bose statistics are present in data on joint rapidity-azimuthal correlations near Δy = Δφ = 0. A new approach to this problem is the uncorrelated jet model with Bose statistics, which confirms the previous results. Furthermore, taking isospin conservation into account, Bose correlations are predicted in π⁺π⁻ channels, which should be most easily detectable in the decays of heavy resonances such as the J/ψ

  15. Statistical model of exotic rotational correlations in emergent space-time

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan

    2017-06-06

    A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.

  16. Analysis of the irradiation data for A302B and A533B correlation monitor materials

    International Nuclear Information System (INIS)

    Wang, J.A.

    1996-04-01

    The results of Charpy V-notch impact tests for the A302B and A533B-1 Correlation Monitor Materials (CMM) listed in the surveillance power reactor data base (PR-EDB) and material test reactor data base (TR-EDB) are analyzed. The shift of the transition temperature at 30 ft-lb (T30) is considered the primary measure of radiation embrittlement in this report. The hyperbolic tangent fitting model and the uncertainty of the fitting parameters for Charpy impact tests are presented in this report. For the surveillance CMM data, the transition temperature shifts at 30 ft-lb (ΔT30) generally follow the predictions provided by Revision 2 of Regulatory Guide 1.99 (R.G. 1.99). Differences in capsule temperature are a likely explanation for large deviations from the R.G. 1.99 predictions. Deviations from the R.G. 1.99 predictions are correlated with similar deviations for the accompanying materials in the same capsules, but large random fluctuations prevent a precise quantitative determination. Significant scatter is noted in the surveillance data, some of which may be attributed to variations from one specimen set to another or to scatter inherent in Charpy V-notch testing. The major contributions to the uncertainty of the R.G. 1.99 prediction model and to the overall data scatter come from mechanical test results, chemical analysis, irradiation environments, fluence evaluation, and inhomogeneous material properties. Thus, in order to improve the prediction model, control of the above-mentioned error sources needs to be improved. In general, the embrittlement behavior of the A302B and A533B-1 plate materials is similar. There is evidence for a fluence-rate effect in the CMM data irradiated in test reactors; its implication for power reactor surveillance programs therefore deserves special attention
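
    The hyperbolic tangent fitting model referred to above can be sketched as follows. This is an illustrative reconstruction: the functional form E(T) = A + B·tanh((T − T0)/C) is the standard Charpy transition-curve model, but the parameter values, synthetic data and the 30 ft-lb index level in the example are assumptions, not numbers from the report.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def charpy_tanh(T, A, B, T0, C):
        """Hyperbolic tangent model for Charpy impact energy vs. temperature.
        A: mid-shelf energy, B: half the lower-to-upper shelf span,
        T0: transition midpoint, C: transition width (illustrative units)."""
        return A + B * np.tanh((T - T0) / C)

    def index_temperature(A, B, T0, C, level=30.0):
        """Temperature at which the fitted curve crosses `level` (e.g. 30 ft-lb),
        obtained by inverting the tanh model."""
        return T0 + C * np.arctanh((level - A) / B)

    # Synthetic, noise-free data standing in for a Charpy data set
    true_params = dict(A=70.0, B=60.0, T0=20.0, C=35.0)
    T = np.linspace(-120.0, 200.0, 25)
    E = charpy_tanh(T, **true_params)

    popt, pcov = curve_fit(charpy_tanh, T, E, p0=[60.0, 50.0, 0.0, 30.0])
    perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties of the fit parameters
    T30 = index_temperature(*popt)  # index temperature at 30 ft-lb
    ```

    A transition-temperature shift ΔT30 would then be the difference between the T30 values fitted to irradiated and unirradiated data sets.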

  17. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  18. Analysis of results obtained from field tracing test under natural rain condition

    International Nuclear Information System (INIS)

    Mukai, M.; Kamiyama, H.; Tanaka, T.; Wang Zhiming; Zhao Yingjie; Li Zhengtang

    1993-01-01

    As one of the tests arranged by the cooperative research between CIRP and JAERI, field tracing tests using ³H, ⁶⁰Co, ⁸⁵Sr and ¹³⁴Cs were conducted in pits at the CIRP's field test site, located on a loess tableland, under natural rain conditions. Precipitation amounts and evaporation rates were measured to study the complicated spatial-temporal behavior of soil water movement under these conditions. The evaporation rate was obtained through an analysis of the measured data by a combined method of heat balance and eddy correlation. A numerical model based on the piston-flow assumption of soil water movement was developed and applied to determine the behavior of the soil water movement in the pits. Using the determined water movement, ³H migration was evaluated by numerical simulation. The change of the ³H distribution as a function of elapsed time was well explained by the careful evaluation of soil water movement carried out before the analysis. (5 figs.)

  19. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.
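
    The general N-DOF lumped parameter idea can be sketched as an undamped spring-mass chain whose natural frequencies follow from the generalized eigenproblem K v = ω² M v. The chain topology, fixed base and unit values below are illustrative assumptions, not the actual unit/board model from the presentation.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def chain_matrices(masses, stiffs):
        """Assemble mass (M) and stiffness (K) matrices for a fixed-base
        lumped-mass chain: stiffs[0] ties mass 0 to the base, and stiffs[i]
        couples mass i to mass i-1 (illustrative unit system)."""
        n = len(masses)
        M = np.diag(np.asarray(masses, float))
        K = np.zeros((n, n))
        for i, k in enumerate(stiffs):
            K[i, i] += k
            if i > 0:
                K[i - 1, i - 1] += k
                K[i, i - 1] -= k
                K[i - 1, i] -= k
        return M, K

    def natural_frequencies(masses, stiffs):
        """Undamped natural frequencies (rad/s) from K v = w^2 M v."""
        M, K = chain_matrices(masses, stiffs)
        w2 = eigh(K, M, eigvals_only=True)  # ascending eigenvalues w^2
        return np.sqrt(w2)

    # 2-DOF example (e.g. chassis + board as coupled lumped masses): unit values
    omega = natural_frequencies([1.0, 1.0], [1.0, 1.0])
    ```

    Sweeping the mass and stiffness ratios of such a model is one cheap way to see how coupled chassis/board dynamics shift the system's modes before any detailed FEM exists.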

  20. Genetic Analysis of Milk Yield Using Random Regression Test Day Model in Tehran Province Holstein Dairy Cow

    Directory of Open Access Journals (Sweden)

    A. Seyeddokht

    2012-09-01

    Full Text Available In this research a random regression test-day model was used to estimate heritability values and genetic correlations between test-day milk records. A total of 140,357 monthly test-day milk records belonging to 28,292 first-lactation Holstein cattle (milked three times a day), distributed over 165 herds and calved from 2001 to 2010, from the herds of Tehran province, were used. The fixed effects of herd-year-month of calving as contemporary group, and age at calving and Holstein gene percentage as covariates, were fitted. A 4th-order orthogonal Legendre polynomial was implemented to take account of the genetic and environmental aspects of milk production over the course of lactation. RRMs using Legendre polynomials as base functions appear to be the most adequate to describe the covariance structure of the data. The results showed that the average heritability for the second half of the lactation period was higher than that of the first half. The heritability value was lowest for the first month (0.117) and highest for the eighth month of lactation (0.230) compared to the other months of lactation. Because genetic variation increased gradually and residual variance was high in the first months of lactation, heritabilities differed over the course of lactation. The RRMs with a higher number of parameters were more useful for describing the genetic variation of test-day milk yield throughout the lactation. In this research genetic parameters and genetic correlations were estimated with a random regression test-day model, which accounts for these parameters more precisely than the alternative approaches.
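
    The Legendre base functions of such a test-day model can be sketched as below: days in milk are standardized to [−1, 1] and a 4th-order normalized Legendre basis (five covariates) is evaluated per record. The 5-305 day standardization interval is a common convention assumed here for illustration, not a value taken from the paper.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    def legendre_covariates(dim, order=4, t_min=5.0, t_max=305.0):
        """Normalized Legendre covariates phi_j(x) = sqrt((2j+1)/2) P_j(x) for
        days in milk (DIM), the usual base functions of a random regression
        test-day model. t_min/t_max/order are illustrative assumptions."""
        x = 2.0 * (np.asarray(dim, float) - t_min) / (t_max - t_min) - 1.0
        cols = []
        for j in range(order + 1):
            coef = np.zeros(j + 1)
            coef[j] = 1.0  # select the j-th Legendre polynomial
            cols.append(np.sqrt((2 * j + 1) / 2.0) * leg.legval(x, coef))
        return np.column_stack(cols)

    # Covariate matrix for five test-day records across the lactation
    Z = legendre_covariates([5, 65, 155, 245, 305])
    ```

    In the mixed-model equations, each animal's random regression coefficients multiply these columns, so the same small basis describes both the additive-genetic and permanent-environment curves.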

  1. Tensile properties of cooked meat sausages and their correlation with texture profile analysis (TPA) parameters and physico-chemical characteristics.

    Science.gov (United States)

    Herrero, A M; de la Hoz, L; Ordóñez, J A; Herranz, B; Romero de Ávila, M D; Cambero, M I

    2008-11-01

    The possibilities of using breaking strength (BS) and energy to fracture (EF) for monitoring textural properties of some cooked meat sausages (chopped, mortadella and galantines) were studied. Texture profile analysis (TPA), folding test and physico-chemical measurements were also performed. Principal component analysis enabled these meat products to be grouped into three textural profiles which showed significant (p<0.05) differences mainly for BS, hardness, adhesiveness and cohesiveness. Multivariate analysis indicated that BS, EF and TPA parameters were correlated (p<0.05) for every individual meat product (chopped, mortadella and galantines) and all products together. On the basis of these results, TPA parameters could be used for constructing regression models to predict BS. The resulting regression model for all cooked meat products was BS = -0.160 + 6.600 × cohesiveness - 1.255 × adhesiveness + 0.048 × hardness - 506.31 × springiness (R² = 0.745, p<0.00005). Simple linear regression analysis showed significant coefficients of determination between BS and folding test grade (FG) (R² = 0.586, p<0.0001) and between EF and FG (R² = 0.564, p<0.0001).
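
    The reported regression model can be applied directly as a worked example. The sketch below only encodes the published coefficients; the input TPA values are made-up illustrations, and units follow the original study.

    ```python
    def predict_breaking_strength(cohesiveness, adhesiveness, hardness, springiness):
        """Breaking-strength estimate from the regression model reported in the
        abstract (R^2 = 0.745). Inputs are TPA parameters in the study's units."""
        return (-0.160
                + 6.600 * cohesiveness
                - 1.255 * adhesiveness
                + 0.048 * hardness
                - 506.31 * springiness)

    # Hypothetical TPA readings for one sausage sample (illustrative values only)
    bs = predict_breaking_strength(cohesiveness=0.5, adhesiveness=0.1,
                                   hardness=40.0, springiness=0.004)
    ```

    The large springiness coefficient shows why even small springiness differences dominate the predicted BS in this model.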

  2. Statistical analysis of angular correlation measurements

    International Nuclear Information System (INIS)

    Oliveira, R.A.A.M. de.

    1986-01-01

    Obtaining the multipole mixing ratio, δ, of γ transitions from angular correlation measurements is a statistical problem characterized by the small number of angles at which observations are made and by the limited counting statistics, α. The nonexistence of a sufficient statistic for the estimator of δ is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed on experimental results obtained in γ-γ directional correlation measurements. (Author) [pt

  3. The Cross-Correlation and Reshuffling Tests in Discerning Induced Seismicity

    Science.gov (United States)

    Schultz, Ryan; Telesca, Luciano

    2018-05-01

    In recent years, cases of newly emergent induced clusters have increased seismic hazard and risk in locations with social, environmental, and economic consequence. Thus, the need for a quantitative and robust means to discern induced seismicity has become a critical concern. This paper reviews a Matlab-based algorithm designed to quantify the statistical confidence between two time-series datasets. Similar to prior approaches, our method utilizes the cross-correlation to delineate the strength and lag of correlated signals. In addition, use of surrogate reshuffling tests allows for dynamic testing against statistical confidence intervals of anticipated spurious correlations. We demonstrate the robust nature of our algorithm in a suite of synthetic tests to determine the limits of accurate signal detection in the presence of noise and sub-sampling. Overall, this routine has considerable merit in delineating the strength of correlated signals, including the discernment of induced seismicity from natural seismicity.
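
    The two ingredients described, lagged cross-correlation plus a reshuffling surrogate test, can be sketched as follows (in Python rather than Matlab). The synthetic series, lag range, surrogate count and percentile level are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def xcorr(a, b, max_lag):
        """Normalized cross-correlation of two equal-length series for lags
        -max_lag..max_lag (positive lag means b trails a)."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        n = len(a)
        lags = np.arange(-max_lag, max_lag + 1)
        c = np.empty(lags.size)
        for i, lg in enumerate(lags):
            if lg >= 0:
                c[i] = np.dot(a[:n - lg], b[lg:]) / n
            else:
                c[i] = np.dot(a[-lg:], b[:n + lg]) / n
        return lags, c

    def reshuffle_threshold(a, b, max_lag, n_surr=500, q=99.0, seed=0):
        """Percentile envelope of peak |xcorr| under random reshuffling of b:
        an observed peak above this level is unlikely to be spurious."""
        rng = np.random.default_rng(seed)
        peaks = [np.abs(xcorr(a, rng.permutation(b), max_lag)[1]).max()
                 for _ in range(n_surr)]
        return np.percentile(peaks, q)

    # Synthetic demo: b is a 5-sample lagged copy of a, plus noise
    rng = np.random.default_rng(1)
    a = rng.normal(size=400)                      # e.g. injection-rate proxy
    b = np.roll(a, 5) + 0.3 * rng.normal(size=400)  # e.g. seismicity-rate proxy
    lags, c = xcorr(a, b, max_lag=10)
    threshold = reshuffle_threshold(a, b, max_lag=10)
    best_lag = lags[np.argmax(c)]                 # recovered delay
    ```

    Comparing `c.max()` against `threshold` is the reshuffling test: the surrogate ensemble encodes how large a correlation peak pure chance produces for series of this length.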

  4. Analysis of Phenix End-of-Life asymmetry test with multi-dimensional pool modeling of MARS-LMR code

    International Nuclear Information System (INIS)

    Jeong, H.-Y.; Ha, K.-S.; Choi, C.-W.; Park, M.-G.

    2015-01-01

    Highlights: • Pool behaviors under asymmetrical conditions in an SFR were evaluated with MARS-LMR. • The Phenix asymmetry test was analyzed one-dimensionally and multi-dimensionally. • One-dimensional modeling has limitations in predicting the cold pool temperature. • Multi-dimensional modeling shows improved prediction of stratification and mixing. - Abstract: The understanding of complicated pool behaviors and their modeling is essential for the design and safety analysis of a pool-type Sodium-cooled Fast Reactor. One of the remarkable recent efforts in the study of pool thermal-hydraulic behaviors is the asymmetry test performed as part of the Phenix End-of-Life tests by the CEA. To evaluate the performance of the MARS-LMR code, a key system analysis tool for the design of an SFR in Korea, in predicting thermal-hydraulic behaviors under an asymmetrical condition, the Phenix asymmetry test is analyzed with MARS-LMR in the present study. The pool regions are modeled with two different approaches, one-dimensional and multi-dimensional, and the prediction results are analyzed to identify the appropriateness of each modeling method. The prediction with one-dimensional pool modeling shows a large deviation from the measured data at the early stage of the test, which reveals its limitations in describing the complicated thermal-hydraulic phenomena. When the pool regions are modeled multi-dimensionally, the prediction gives considerably improved results. This improvement is explained by the enhanced modeling of pool mixing in the multi-dimensional approach. On the basis of the results from the present study, it is concluded that an accurate modeling of pool thermal-hydraulics is a prerequisite for the evaluation of design performance and safety margin quantification in future SFR developments

  5. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and general model for statistical power analysis.
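
    The unified treatment of traditional nil-null and minimum-effect tests can be illustrated with a normal-approximation power calculation. The function below is a generic sketch, not code from the book, and the effect sizes and sample size are arbitrary.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def power_two_sided(effect_size, n, alpha=0.05, null_effect=0.0):
        """Approximate power of a one-sample two-sided z/t test (normal
        approximation). Setting null_effect > 0 turns the usual nil-null test
        into a minimum-effect test: the null becomes 'the effect is at most
        null_effect' rather than 'the effect is exactly zero'."""
        z = NormalDist()
        crit = z.inv_cdf(1 - alpha / 2)              # two-sided critical value
        delta = (effect_size - null_effect) * sqrt(n)  # noncentrality parameter
        return (1 - z.cdf(crit - delta)) + z.cdf(-crit - delta)

    p_nil = power_two_sided(0.5, n=32)                    # traditional nil-null test
    p_min = power_two_sided(0.5, n=32, null_effect=0.2)   # minimum-effect test
    ```

    The same routine covers both cases because only the noncentrality shifts; power against a minimum-effect null is always lower than against the nil null for the same data.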

  6. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

    OpenAIRE

    Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

    1996-01-01

    We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.
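
    A minimal illustration of how pseudo-random-number correlations can enter a simulation is given below, using the low-order bits of a classic linear congruential generator as a deliberately flawed example. The generator constants are the standard Numerical Recipes values; the diagnostic is a simple lag-1 autocorrelation, not the cluster-algorithm analysis of the paper.

    ```python
    import numpy as np

    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        """Classic Numerical Recipes LCG with full period mod 2^32."""
        out = np.empty(n, dtype=np.uint64)
        x = seed
        for i in range(n):
            x = (a * x + c) % m
            out[i] = x
        return out

    def lag1_autocorr(v):
        """Sample lag-1 autocorrelation of a sequence."""
        v = v - v.mean()
        return float(np.dot(v[:-1], v[1:]) / np.dot(v, v) * len(v) / (len(v) - 1))

    x = lcg(seed=12345, n=10000)
    low_bit = (x & np.uint64(1)).astype(float)       # lowest-order bit of each draw
    r_low = lag1_autocorr(low_bit)                   # alternates 0,1,0,1 -> r = -1
    top_bits = (x >> np.uint64(20)).astype(float)    # keep only high-order bits
    r_top = lag1_autocorr(top_bits)                  # near zero
    ```

    Any simulation that consumed only the low bit of this generator would inherit a perfectly anti-correlated input stream, which is exactly the kind of hidden structure that biases cluster Monte Carlo estimates.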

  7. Whole-tumour diffusion kurtosis MR imaging histogram analysis of rectal adenocarcinoma: Correlation with clinical pathologic prognostic factors.

    Science.gov (United States)

    Cui, Yanfen; Yang, Xiaotang; Du, Xiaosong; Zhuo, Zhizheng; Xin, Lei; Cheng, Xintao

    2018-04-01

    To investigate potential relationships between diffusion kurtosis imaging (DKI)-derived parameters using whole-tumour volume histogram analysis and clinicopathological prognostic factors in patients with rectal adenocarcinoma. 79 consecutive patients with rectal adenocarcinoma who underwent MRI examination were retrospectively evaluated. The parameters D, K and conventional ADC were measured using whole-tumour volume histogram analysis. Student's t-test or the Mann-Whitney U-test, receiver operating characteristic curves and Spearman's correlation were used for statistical analysis. Almost all the percentile metrics of K correlated positively with nodal involvement, higher histological grades, the presence of lymphangiovascular invasion (LVI) and circumferential margin (CRM) involvement (p<0.05). DKI-derived parameters obtained with whole-tumour volume histogram analysis, especially the K parameters, were associated with important prognostic factors of rectal cancer. • K correlated positively with some important prognostic factors of rectal cancer. • K mean showed higher AUC and specificity for differentiation of nodal involvement. • DKI metrics with whole-tumour volume histogram analysis depicted tumour heterogeneity.

  8. 40 CFR 86.163-00 - Spot check correlation procedures for vehicles tested using a simulation of the environmental...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Spot check correlation procedures for... Complete Heavy-Duty Vehicles; Test Procedures § 86.163-00 Spot check correlation procedures for vehicles... running change approval, each model year for any manufacturer undergoing the spot checking procedures of...

  9. An Econometric Analysis of Modulated Realised Covariance, Regression and Correlation in Noisy Diffusion Models

    DEFF Research Database (Denmark)

    Kinnebrock, Silja; Podolskij, Mark

    This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression, correlation analysis...... process can be relaxed and how our method can be applied to non-synchronous observations. We also present an empirical study of how high-frequency correlations, regressions and covariances change through time....

  10. Post-test analysis of PANDA test P4

    International Nuclear Information System (INIS)

    Hart, J.; Woudstra, A.; Koning, H.

    1999-01-01

    The results of a post-test analysis of the integral system test P4, which was executed in the PANDA facility at PSI in Switzerland within the framework of Work Package 2 of the TEPSS project, are presented. The post-test analysis comprises an evaluation of the PANDA test P4 and a comparison of the test results with the results of simulations using the RELAP5/MOD3.2, TRAC-BF1, and MELCOR 1.8.4 codes. The PANDA test P4 has provided adequate data on how trapped air released from the drywell later in the transient affects PCCS performance. The well-defined measurements can serve as an important database for the assessment of thermal-hydraulic system analysis codes, especially for conditions that could be met in passively operated advanced reactors, i.e. low pressure and small driving forces. Based on the analysis of the test data, the test acceptance criteria have been met. The test P4 has been successfully completed and the instrument readings were within the permitted ranges. The PCCs showed a favorable and robust performance and a wide margin for decay heat removal from the containment. The PANDA P4 test demonstrated that trapped air, released from the drywell later in the transient, only temporarily and only slightly affected the performance of the passive containment cooling system. The analysis of the results of the RELAP5 code showed that the overall behaviour of the test has been calculated quite well with regard to pressure, mass flow rates, and pool boil-down. This holds for both the pre-test and the post-test simulations. However, due to the one-dimensional, stacked-volume modeling of the PANDA DW, WW, and GDCS vessels, 3D effects such as in-vessel mixing and recirculation could not be calculated. The post-test MELCOR simulation showed an overall behaviour that is comparable to RELAP5. However, MELCOR calculated almost no air trapping in the PCC tubes that could hinder the steam condensation rate. This resulted in lower calculated

  11. FASTSAT-HSV01 Thermal Math Model Correlation

    Science.gov (United States)

    McKelvey, Callie

    2011-01-01

    This paper summarizes the thermal math model correlation effort for the Fast Affordable Science and Technology SATellite (FASTSAT-HSV01), which was designed, built and tested by NASA's Marshall Space Flight Center (MSFC) and multiple partners. The satellite launched in November 2010 on a Minotaur IV rocket from the Kodiak Launch Complex in Kodiak, Alaska. It carried three Earth science experiments and two technology demonstrations into a low Earth circular orbit with an inclination of 72° and an altitude of 650 kilometers. The mission has been successful to date with science experiment activities still taking place daily. The thermal control system on this spacecraft was a passive design relying on thermo-optical properties and six heaters placed on specific components. Flight temperature data is being recorded every minute from the 48 Resistance Temperature Devices (RTDs) onboard the satellite structure and many of its avionics boxes. An effort has been made to correlate the thermal math model to the flight temperature data using Cullimore and Ring's Thermal Desktop and by obtaining Earth and Sun vector data from the Attitude Control System (ACS) team to create an "as-flown" orbit. Several model parameters were studied during this task to understand the spacecraft's sensitivity to these changes. Many "lessons learned" have been noted from this activity that will be directly applicable to future small satellite programs.

  12. Scale model test results for an inverted U-tube steam generator with comparisons to heat transfer correlations

    International Nuclear Information System (INIS)

    Boucher, T.J.

    1987-01-01

    To provide data for the assessment and development of thermal-hydraulic computer codes, bottom main feedwater-line-break transient simulations were performed in a scale model (Semiscale Mod-2C) of a pressurized water reactor (PWR) with conditions typical of a PWR (15.0 MPa primary pressure, 600 K steam generator inlet plenum fluid temperatures, 6.2 MPa secondary pressure). The state-of-the-art measurements in the scale model (Type III) steam generator allow for the determination of U-tube steam generator secondary component interactions, tube bundle local radial heat transfer, and tube bundle and riser vapor void fractions for steady-state and transient operations. To enhance the understanding of the observed phenomena, the component interactions, local heat fluxes, local secondary convective heat transfer coefficients and local vapor void fractions are discussed for steady-state, full-power and transient operations. Comparisons between the measurement-derived secondary convective heat transfer coefficients and those predicted by a number of correlations, including the Chen correlation currently used in thermal-hydraulic computer codes, show that none of the correlations adequately predict the data, which points out the need for the formulation of a new correlation based on these experimental data. The unique information presented herein should be of interest to anyone involved in modeling inverted U-tube steam generator thermal-hydraulics for forced convection boiling/vaporization heat transfer. 5 refs., 13 figs., 1 tab

  13. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
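
    A toy version of this idea, modeling the test statistics directly with a two-component mixture and controlling a Bayesian FDR, can be sketched as follows. The EM mixture fit, simulated z-scores and 10% level are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def norm_pdf(z, mu=0.0, sd=1.0):
        return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    def fit_null_alt_mixture(z, iters=200):
        """EM fit of z ~ (1-pi1)*N(0,1) + pi1*N(mu, sd^2): a minimal stand-in
        for modeling test statistics under null and alternative hypotheses."""
        pi1, mu, sd = 0.2, 2.0, 1.0
        for _ in range(iters):
            f0 = (1 - pi1) * norm_pdf(z)
            f1 = pi1 * norm_pdf(z, mu, sd)
            g = f1 / (f0 + f1)                  # P(alternative | z_i)
            pi1 = g.mean()
            mu = np.sum(g * z) / np.sum(g)
            sd = np.sqrt(np.sum(g * (z - mu) ** 2) / np.sum(g))
        return pi1, mu, sd

    def bayesian_fdr_reject(z, level=0.10):
        """Reject the largest set whose average posterior null probability
        (local fdr) stays at or below `level`."""
        pi1, mu, sd = fit_null_alt_mixture(z)
        lfdr = (1 - pi1) * norm_pdf(z) / ((1 - pi1) * norm_pdf(z)
                                          + pi1 * norm_pdf(z, mu, sd))
        order = np.argsort(lfdr)
        ok = np.cumsum(lfdr[order]) / np.arange(1, len(z) + 1) <= level
        reject = np.zeros(len(z), dtype=bool)
        idx = np.nonzero(ok)[0]
        if idx.size:
            reject[order[: idx.max() + 1]] = True
        return reject, lfdr

    # Simulated screen: 1000 null z-scores, 200 shifted alternatives
    rng = np.random.default_rng(0)
    truth = np.r_[np.zeros(1000, bool), np.ones(200, bool)]  # known labels (demo only)
    z = np.where(truth, rng.normal(3.0, 1.0, truth.size),
                 rng.normal(0.0, 1.0, truth.size))
    reject, lfdr = bayesian_fdr_reject(z)
    ```

    Working on the statistics rather than the full data is what keeps this cheap: the whole posterior computation is one mixture fit over a single vector of z-scores.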

  14. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients

    DEFF Research Database (Denmark)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte

    2017-01-01

    -derived food waste amounted to 2.21 ± 3.12% with a confidence interval of (−4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson’s correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste...... and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data......, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients....
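
    The transformation step can be sketched with a centered log-ratio (clr) transform, one standard way to open closed compositional data before computing correlations. The toy waste fractions and the zero-replacement constant below are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def clr(comp, eps=1e-9):
        """Centered log-ratio transform of compositions (rows sum to 1).
        Removes the unit-sum constraint that forces spurious negative
        correlations between raw percentage values. eps guards against
        zero fractions (illustrative choice)."""
        x = np.asarray(comp, float) + eps
        x = x / x.sum(axis=1, keepdims=True)   # re-close after the zero guard
        logx = np.log(x)
        return logx - logx.mean(axis=1, keepdims=True)

    # Toy compositions: three waste fractions per sample, rows sum to 1
    comp = np.array([[0.50, 0.30, 0.20],
                     [0.55, 0.25, 0.20],
                     [0.45, 0.35, 0.20],
                     [0.60, 0.20, 0.20]])
    z = clr(comp)
    r_raw = np.corrcoef(comp, rowvar=False)[0, 1]  # closure forces this negative
    r_clr = np.corrcoef(z, rowvar=False)[0, 1]     # correlation in clr coordinates
    ```

    Each clr row sums to zero by construction, so subsequent means, standard deviations and correlation coefficients are computed in an unconstrained space.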

  15. Charge correlations as definitive tests of QCD

    International Nuclear Information System (INIS)

    Maxwell, C.J.

    1981-07-01

    Certain weighted charge correlations are defined and it is shown how they can be used to measure properties of the gluon jet in the e⁺e⁻ 3-jet final state. Properties are suggested which are indicative of the form of the QCD matrix element, the running coupling constant and the value of Λ, and hence constitute definitive tests of QCD. The recent near tenfold increase in luminosity at PETRA should make such experimental tests possible in the near future. (author)

  16. Correlations of 3T DCE-MRI Quantitative Parameters with Microvessel Density in a Human-Colorectal-Cancer Xenograft Mouse Model

    International Nuclear Information System (INIS)

    Ahn, Sung Jun; An, Chan Sik; Koom, Woong Sub; Song, Ho Taek; Suh, Jin Suck

    2011-01-01

    To investigate the correlation between quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) parameters and microvascular density (MVD) in a human-colon-cancer xenograft mouse model using 3 Tesla MRI. A human-colon-cancer xenograft model was produced by subcutaneously inoculating 1 × 10⁶ DLD-1 human-colon-cancer cells into the right hind limbs of 10 mice. The tumors were allowed to grow for two weeks and then assessed using MRI. DCE-MRI was performed by tail vein injection of 0.3 mmol/kg of gadolinium. A region of interest (ROI) was drawn at the midpoints along the z-axes of the tumors, and a Tofts model analysis was performed. The quantitative parameters (Ktrans, Kep and Ve) from the whole transverse ROI and the hotspot ROI of the tumor were calculated. Immunohistochemical microvessel staining was performed and analyzed according to Weidner's criteria at the corresponding MRI sections. Additional Hematoxylin and Eosin staining was performed to evaluate tumor necrosis. The Mann-Whitney test and Spearman's rho correlation analysis were performed to test for correlations between the quantitative parameters, necrosis, and MVD. The whole transverse ROI of the tumor showed no significant relationship between the MVD values and the quantitative DCE-MRI parameters. In the hotspot ROI, the difference in MVD between the low and high groups of Ktrans and Kep had marginal statistical significance (p = 0.06 and 0.07, respectively). Also, Ktrans and Kep were found to have an inverse relationship with MVD (r = -0.61, p = 0.06 for Ktrans; r = -0.60, p = 0.07 for Kep). Quantitative analysis of T1-weighted DCE-MRI using a hotspot ROI may provide a better histologic match than a whole transverse section ROI. Within the hotspots, Ktrans and Kep tend to have an inverse correlation with MVD in this colon cancer mouse model.

  17. Correlations of 3T DCE-MRI Quantitative Parameters with Microvessel Density in a Human-Colorectal-Cancer Xenograft Mouse Model

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sung Jun; An, Chan Sik; Koom, Woong Sub; Song, Ho Taek; Suh, Jin Suck [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2011-11-15

    To investigate the correlation between quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) parameters and microvascular density (MVD) in a human-colon-cancer xenograft mouse model using 3 Tesla MRI. A human-colon-cancer xenograft model was produced by subcutaneously inoculating 1 × 10⁶ DLD-1 human-colon-cancer cells into the right hind limbs of 10 mice. The tumors were allowed to grow for two weeks and then assessed using MRI. DCE-MRI was performed by tail vein injection of 0.3 mmol/kg of gadolinium. A region of interest (ROI) was drawn at the midpoints along the z-axes of the tumors, and a Tofts model analysis was performed. The quantitative parameters (Ktrans, Kep and Ve) from the whole transverse ROI and the hotspot ROI of the tumor were calculated. Immunohistochemical microvessel staining was performed and analyzed according to Weidner's criteria at the corresponding MRI sections. Additional Hematoxylin and Eosin staining was performed to evaluate tumor necrosis. The Mann-Whitney test and Spearman's rho correlation analysis were performed to test for correlations between the quantitative parameters, necrosis, and MVD. The whole transverse ROI of the tumor showed no significant relationship between the MVD values and the quantitative DCE-MRI parameters. In the hotspot ROI, the difference in MVD between the low and high groups of Ktrans and Kep had marginal statistical significance (p = 0.06 and 0.07, respectively). Also, Ktrans and Kep were found to have an inverse relationship with MVD (r = -0.61, p = 0.06 for Ktrans; r = -0.60, p = 0.07 for Kep). Quantitative analysis of T1-weighted DCE-MRI using a hotspot ROI may provide a better histologic match than a whole transverse section ROI. Within the hotspots, Ktrans and Kep tend to have an inverse correlation with MVD in this colon cancer mouse model.

  18. Ontological Analysis of Integrated Process Models: testing hypotheses

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    2001-11-01

Full Text Available Integrated process modeling is achieving prominence in helping to document and manage business administration and IT processes in organizations. The ARIS framework is a popular example of a framework for integrated process modeling, not least because it underlies the 800 or more reference models embedded in the world's most popular ERP package, SAP R/3. This paper demonstrates the usefulness of the Bunge-Wand-Weber (BWW) representation model for evaluating modeling grammars such as those constituting ARIS. It reports some initial insights gained from pilot testing Green and Rosemann's (2000) evaluative propositions. Even when considering all five views of ARIS, modelers have problems representing business rules, the scope and boundary of systems, and decomposing models. However, even though it is completely ontologically redundant, users still find the function view useful in modeling.

  19. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

Full Text Available Abstract Background Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used, but are rarely validated. Methods The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These factors were highly similar to the first three components from the principal components analysis: antagonism, neuroticism and introversion. Conclusion The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.

  20. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model with stochastic correlations. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
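The extension described above can be illustrated with a minimal Euler-Maruyama sketch (not the paper's algorithm): the variance follows a CIR process, and the spot-variance correlation is driven by its own mean-reverting SDE, mapped through tanh so that it stays in (-1, 1). All parameter values and function names here are illustrative assumptions.

```python
import numpy as np

def simulate_extended_heston(S0=100.0, v0=0.04, rho0=-0.5,
                             kappa=2.0, theta=0.04, sigma_v=0.3,
                             kappa_r=1.0, theta_r=-0.5, sigma_r=0.2,
                             r=0.0, T=1.0, n_steps=252, n_paths=10000,
                             seed=0):
    """Euler-Maruyama sketch of a Heston model whose spot-variance
    correlation follows its own mean-reverting SDE (illustrative only)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    x = np.full(n_paths, np.arctanh(rho0))  # latent correlation driver
    for _ in range(n_steps):
        z1, z2, z3 = rng.standard_normal((3, n_paths))
        rho = np.tanh(x)                              # correlation in (-1, 1)
        dw_s = z1
        dw_v = rho * z1 + np.sqrt(1.0 - rho ** 2) * z2  # correlated with dw_s
        v_pos = np.maximum(v, 0.0)                    # full truncation scheme
        sqrt_v = np.sqrt(v_pos)
        S *= np.exp((r - 0.5 * v_pos) * dt + sqrt_v * np.sqrt(dt) * dw_s)
        v += kappa * (theta - v) * dt + sigma_v * sqrt_v * np.sqrt(dt) * dw_v
        x += kappa_r * (theta_r - x) * dt + sigma_r * np.sqrt(dt) * z3
    return S

# Price an at-the-money vanilla call by Monte Carlo (r = 0, so no discounting)
paths = simulate_extended_heston(n_paths=20000)
call = np.maximum(paths - 100.0, 0.0).mean()
```

With these parameters the long-run volatility is about 20%, so the Monte Carlo call price lands near the corresponding Black-Scholes value.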

  1. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    Science.gov (United States)

    Han, Fang; Liu, Han

    2017-02-01

The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
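The transformed Kendall's tau estimator referred to above is simple to compute: estimate tau pairwise and apply the mapping sin(pi*tau/2), which recovers the latent Pearson correlation under an elliptical copula. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np
from scipy.stats import kendalltau

def latent_correlation_matrix(X):
    """Rank-based estimator of the latent Pearson correlation matrix
    under an elliptical copula: Rhat[j, k] = sin(pi/2 * tau[j, k])."""
    _, d = X.shape
    R = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
    return R

# Kendall's tau is invariant under monotone marginal transformations,
# so the estimator recovers the latent correlation despite heavy tails.
rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
X = np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3])  # monotone transforms
R = latent_correlation_matrix(X)
```

Even though the marginals are lognormal and cubed-Gaussian, the estimate of the off-diagonal entry stays close to the latent value 0.6.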

  2. MODELING STRATEGIES FOR THE ANALYSIS OF EXPERIMENTS IN AUGMENTED BLOCK DESIGN IN CLONAL TESTS OF Eucalyptus spp.

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo Rodrigues Prado

    2013-08-01

Full Text Available http://dx.doi.org/10.5902/1980509810546 The objective of this work was to compare analysis strategies for experiments with a large number of clones and a reduced number of seedlings to be evaluated. Data on girth at breast height from two seasons of evaluation, 30 and 90 months, from a clonal test of Eucalyptus were analyzed in three locations. The experiments were carried out in the augmented block design with 400 regular clones distributed in 20 blocks and with four common clones (controls). Each plot consisted of five plants spaced 3 x 3 meters. The individual statistical analyses were carried out by season and location, a combined analysis by location at each season, and a combined analysis involving the three locations and the two seasons. Each analysis was carried out according to two models: augmented design (AD) and one-way classification (OWC). The variance components, the heritability, the Spearman's rank correlation and the coincidence indexes in the clone selection under the two models were estimated. It was found that the augmented block design and the one-way classification provide similar results in Eucalyptus clone evaluation. The coincidence indexes between the two models in clone selection were, in general, high, reaching 100% in the combined analyses by location at 90 months. The Spearman's rank

  3. A powerful nonparametric method for detecting differentially co-expressed genes: distance correlation screening and edge-count test.

    Science.gov (United States)

    Zhang, Qingyang

    2018-05-16

Differential co-expression analysis, as a complement of differential expression analysis, offers significant insights into the changes in molecular mechanism of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression, which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs in different phenotypes from large-scale data. Our computational pipeline consists of two main steps: a screening step and a testing step. The screening step reduces the search space by filtering out all the independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening can greatly improve computational efficiency, facilitating its application to large data sets.
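The screening step relies on the sample distance correlation, which is zero if and only if the two variables are independent. A minimal numpy implementation of the pairwise statistic is shown below (the edge-count test of the second step is not reproduced here):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation (Szekely et al.): double-center the
    pairwise distance matrices, then normalize the distance covariance."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)                       # pairwise distances for x
    b = np.abs(y - y.T)                       # pairwise distances for y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

# A purely nonlinear dependence that Pearson correlation misses entirely:
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
nonlinear = distance_correlation(u, u ** 2)              # dependent pair
noise = distance_correlation(u, rng.uniform(-1, 1, 500))  # independent pair
```

For u versus u squared the Pearson coefficient is near zero, while the distance correlation is clearly bounded away from zero, which is exactly the property the screening step exploits.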

  4. Efficient statistical tests to compare Youden index: accounting for contingency correlation.

    Science.gov (United States)

    Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan

    2015-04-30

The Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one-sample and two independent sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Moreover, a paired sample test on the Youden index is currently unavailable. This article develops efficient statistical inference procedures for one sample, independent, and paired sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and between paired samples typically represented in contingency tables. For the one and two independent sample tests, the variances are estimated by the Delta method, and the statistical inference is based on the central limit theorem; these are then verified by bootstrap estimates. For the paired sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than the original Youden's approach. Therefore, the simple explicit large-sample solution performs very well. Because we can readily implement the asymptotic and exact bootstrap computation with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
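For reference, the Youden index itself and its large-sample variance in the simplest one-sample case (treating sensitivity and specificity as independent binomials, i.e. without the paper's contingency-correlation correction) can be sketched as:

```python
import math

def youden_index(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1, with a Delta-method
    (large-sample) standard error. In a single 2x2 table Se and Sp are
    estimated from disjoint subjects, so their variances simply add."""
    se = tp / (tp + fn)                       # sensitivity
    sp = tn / (tn + fp)                       # specificity
    j = se + sp - 1.0
    var = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
    return j, math.sqrt(var)

j, se_j = youden_index(tp=90, fn=10, tn=80, fp=20)
ci = (j - 1.96 * se_j, j + 1.96 * se_j)   # Wald 95% confidence interval
```

With Se = 0.9 and Sp = 0.8 this gives J = 0.7 with standard error 0.05; the paper's contribution is precisely the covariance terms this simple version omits for paired designs.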

  5. A hybrid correlation analysis with application to imaging genetics

    Science.gov (United States)

    Hu, Wenxing; Fang, Jian; Calhoun, Vince D.; Wang, Yu-Ping

    2018-03-01

Investigating the association between brain regions and genes continues to be a challenging topic in imaging genetics. Current brain region of interest (ROI)-gene association studies normally reduce data dimension by averaging the value of voxels in each ROI. This averaging may lead to a loss of information due to the existence of functional sub-regions. Pearson correlation is widely used for association analysis. However, it only detects linear correlation whereas nonlinear correlation may exist among ROIs. In this work, we introduced distance correlation to ROI-gene association analysis, which can detect both linear and nonlinear correlations and overcome the limitation of averaging operations by taking advantage of the information at each voxel. Nevertheless, distance correlation usually has a much lower value than Pearson correlation. To address this problem, we proposed a hybrid correlation analysis approach, by applying canonical correlation analysis (CCA) to the distance covariance matrix instead of directly computing distance correlation. Incorporating CCA into the distance correlation approach may be more suitable for complex disease study because it can detect highly associated pairs of ROI and gene groups, and may improve the distance correlation level and statistical power. In addition, we developed a novel nonlinear CCA, called distance kernel CCA, which seeks the optimal combination of features with the most significant dependence. This approach was applied to imaging genetic data from the Philadelphia Neurodevelopmental Cohort (PNC). Experiments showed that our hybrid approach produced more consistent results than conventional CCA across resampling and both the correlation and statistical significance were increased compared to distance correlation analysis. Further gene enrichment analysis and region of interest (ROI) analysis confirmed the associations of the identified genes with brain ROIs. Therefore, our approach provides a powerful tool for finding

  6. [Simulation and data analysis of stereological modeling based on virtual slices].

    Science.gov (United States)

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyze the data derived from the model. The linearity of the fitting of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high scores (>94.5% and 92%) in homogeneity and independence tests. The density, shape and size data of the sections were tested and conformed to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described can be used for evaluating the stereological parameters of the structure of tissue slices.

  7. Correlation between static radiographic measurements and intersegmental angular measurements during gait using a multisegment foot model.

    Science.gov (United States)

    Lee, Dong Yeon; Seo, Sang Gyo; Kim, Eo Jin; Kim, Sung Ju; Lee, Kyoung Min; Farber, Daniel C; Chung, Chin Youb; Choi, In Ho

    2015-01-01

Radiographic examination is a widely used evaluation method in the orthopedic clinic. However, conventional radiography alone does not reflect the dynamic changes between foot and ankle segments during gait. Multiple 3-dimensional multisegment foot models (3D MFMs) have been introduced to evaluate intersegmental motion of the foot. In this study, we evaluated the correlation between static radiographic indices and intersegmental foot motion indices. One hundred twenty-five females were tested. Static radiographs of full-leg and anteroposterior (AP) and lateral foot views were obtained. For hindfoot evaluation, we measured the AP tibiotalar angle (TiTA), talar tilt (TT), calcaneal pitch, lateral tibiocalcaneal angle, and lateral talocalcaneal angle. For the midfoot segment, naviculocuboid overlap and talonavicular coverage angle were calculated. AP and lateral talo-first metatarsal angles and metatarsal stacking angle (MSA) were measured to assess the forefoot. The hallux valgus angle (HVA) and hallux interphalangeal angle were also measured. In gait analysis by 3D MFM, intersegmental angle (ISA) measurements of each segment (hallux, forefoot, hindfoot, arch) were recorded. ISAs at midstance phase were most highly correlated with radiography. Significant correlations were observed between ISA measurements using MFM and static radiographic measurements in the same segment. In the hindfoot, the coronal plane ISA was significantly correlated with AP TiTA. Foot motion indices at midstance phase during gait measured by 3D MFM gait analysis were correlated with the conventional radiographic indices. The observed correlation between MFM measurements at midstance phase during gait and static radiographic measurements supports the fundamental basis for the use of MFM in the analysis of dynamic motion of foot segments during gait. © The Author(s) 2014.

  8. Correlation between Colon Transit Time Test Value and Initial Maintenance Dose of Laxative in Children with Chronic Functional Constipation

    Science.gov (United States)

    Kim, Mock Ryeon; Park, Hye Won; Son, Jae Sung; Lee, Ran

    2016-01-01

Purpose To evaluate the correlation between colon transit time (CTT) test value and the initial maintenance dose of polyethylene glycol (PEG) 4000 or lactulose. Methods Of 415 children with chronic functional constipation, 190 were enrolled based on exclusion criteria using the CTT test, a defecation diary, and the clinical chart. The CTT test was performed with prior disimpaction. The laxative dose for maintenance was determined on the basis of the defecation diary and clinical chart. The Shapiro-Wilk test and Pearson's and Spearman's correlations were used for statistical analysis. Results The overall group median value and interquartile range of the CTT test was 43.8 (31.8) hours. The average PEG 4000 dose for maintenance in the overall group was 0.68±0.18 g/kg/d; according to age, the dose was 0.73±0.16 g/kg/d (encopresis, abnormal CTT test subtype) for either laxative. Even in the largest group (overall, n=109, younger than 8 years and on PEG 4000), the correlation was weak (Pearson's correlation coefficient [R]=0.268, p=0.005). Within the abnormal transit group, the subgroup (n=73, younger than 8 years and on PEG 4000) correlation was also weak (R=0.267, p=0.022). Conclusion The CTT test value cannot predict the initial maintenance dose of PEG 4000 or lactulose with a linear correlation. PMID:27738600

  9. Statistical analysis of correlated experimental data and neutron cross section evaluation

    International Nuclear Information System (INIS)

    Badikov, S.A.

    1998-01-01

The technique for the evaluation of neutron cross sections on the basis of statistical analysis of correlated experimental data is presented. The most important stages of the evaluation, from the compilation of the correlation matrix of measurement uncertainties to the representation of the analysis results in the ENDF-6 format, are described in detail. Special attention is paid to the restrictions (positive definiteness) on the covariance matrix of the approximated parameters' uncertainties generated within the least-squares fit method, which are derived from physical reasons. The requirements on the source experimental data that ensure satisfaction of the restrictions mentioned above are formulated. In particular, correlation matrices of measurement uncertainties should also be positive definite. Variants of modelling positive definite correlation matrices of measurement uncertainties, in situations when their direct calculation on the basis of experimental information is impossible, are discussed. The technique described was used for creating a new generation of estimates of dosimetric reaction cross sections for the first version of the Russian dosimetric file (including nontrivial covariance information).
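A common practical repair when a compiled correlation matrix of measurement uncertainties fails the positive definiteness requirement is eigenvalue clipping followed by rescaling to unit diagonal. The sketch below is a simple heuristic of that kind (not the technique used in the evaluation above, and not Higham's full nearest-correlation algorithm):

```python
import numpy as np

def make_positive_definite(C, eps=1e-6):
    """Clip non-positive eigenvalues and rescale to unit diagonal so the
    result is again a valid (positive definite) correlation matrix."""
    C = 0.5 * (C + C.T)                       # enforce exact symmetry
    w, V = np.linalg.eigh(C)
    C_pd = (V * np.maximum(w, eps)) @ V.T     # clip the spectrum
    d = np.sqrt(np.diag(C_pd))
    C_pd = C_pd / np.outer(d, d)              # restore unit diagonal
    np.fill_diagonal(C_pd, 1.0)
    return C_pd

# An internally inconsistent "correlation" matrix (negative determinant):
C = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])
C_fixed = make_positive_definite(C)
min_eig = np.linalg.eigvalsh(C_fixed).min()
```

The clipped-and-rescaled matrix remains close to the input where the input was consistent, while satisfying the positive definiteness restriction discussed above.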

  10. Phenomenological analysis of quantum level correlations and classical repulsion effects in SU(3) model

    International Nuclear Information System (INIS)

    Fujiwara, Shigeyasu; Sakata, Fumihiko

    2003-01-01

The quantum level fluctuations in various systems have been shown to be characterized by random matrix theory, and to be related to a regular-to-chaos transition in the corresponding classical system. We present a new qualitative analysis of quantum and classical fluctuation properties by exploiting correlation coefficients and variances. It is shown that the correlation coefficient of the quantum level density is inversely proportional to the variance of consecutive phase-space point spacings on the Poincare section plane. (author)

  11. Model to Test Electric Field Comparisons in a Composite Fairing Cavity

    Science.gov (United States)

    Trout, Dawn H.; Burford, Janessa

    2013-01-01

Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.

  12. Hepatobiliary magnetic resonance imaging in patients with liver disease: correlation of liver enhancement with biochemical liver function tests

    Energy Technology Data Exchange (ETDEWEB)

    Kukuk, Guido M.; Schaefer, Stephanie G.; Hadizadeh, Dariusch R.; Schild, Hans H.; Willinek, Winfried A. [University of Bonn, Department of Radiology, Bonn (Germany); Fimmers, Rolf [University of Bonn, Department of Medical Biometry, Informatics and Epidemiology, Bonn (Germany); Ezziddin, Samer [Department of Nuclear Medicine, Bonn (Germany); Spengler, Ulrich [Department of Internal Medicine I, Bonn (Germany)

    2014-10-15

    To evaluate hepatobiliary magnetic resonance imaging (MRI) using Gd-EOB-DTPA in relation to various liver function tests in patients with liver disorders. Fifty-one patients with liver disease underwent Gd-EOB-DTPA-enhanced liver MRI. Based on region-of-interest (ROI) analysis, liver signal intensity was calculated using the spleen as reference tissue. Liver-spleen contrast ratio (LSCR) and relative liver enhancement (RLE) were calculated. Serum levels of total bilirubin, gamma glutamyl transpeptidase (GGT), aspartate aminotransferase (AST), alanine aminotransferase (ALT), glutamate dehydrogenase (GLDH), lactate dehydrogenase (LDH), serum albumin level (AL), prothrombin time (PT), creatinine (CR) as well as international normalised ratio (INR) and model for end-stage liver disease (MELD) score were tested for correlation with LSCR and RLE. Pre-contrast LSCR values correlated with total bilirubin (r = -0.39; p = 0.005), GGT (r = -0.37; p = 0.009), AST (r = -0.38; p = 0.013), ALT (r = -0.29; p = 0.046), PT (r = 0.52; p < 0.001), GLDH (r = -0.55; p = 0.044), INR (r = -0.42; p = 0.003), and MELD Score (r = -0.53; p < 0.001). After administration of Gd-EOB-DTPA bilirubin (r = -0.45; p = 0.001), GGT (r = -0.40; p = 0.004), PT (r = 0.54; p < 0.001), AST (r = -0.46; p = 0.002), ALT (r = -0.31; p = 0.030), INR (r = -0.45; p = 0.001) and MELD Score (r = -0.56; p < 0.001) significantly correlated with LSCR. RLE correlated with bilirubin (r = -0.40; p = 0.004), AST (r = -0.38; p = 0.013), PT (r = 0.42; p = 0.003), GGT (r = -0.33; p = 0.020), INR (r = -0.36; p = 0.011) and MELD Score (r = -0.43; p = 0.003). Liver-spleen contrast ratio and relative liver enhancement using Gd-EOB-DTPA correlate with a number of routinely used biochemical liver function tests, suggesting that hepatobiliary MRI may serve as a valuable biomarker for liver function. The strongest correlation with liver enhancement was found for the MELD Score. (orig.)
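The two enhancement indices can be written down directly from the ROI signal intensities. The formulas below are the standard ROI definitions with the spleen as reference tissue; the study's exact definitions may differ:

```python
def liver_enhancement_indices(si_liver_pre, si_spleen_pre,
                              si_liver_post, si_spleen_post):
    """Liver-spleen contrast ratio (LSCR) before and after contrast, and
    relative liver enhancement (RLE), from ROI mean signal intensities.
    Standard ROI definitions; the study's exact formulas may differ."""
    lscr_pre = si_liver_pre / si_spleen_pre
    lscr_post = si_liver_post / si_spleen_post
    rle = (si_liver_post - si_liver_pre) / si_liver_pre
    return lscr_pre, lscr_post, rle

# Hypothetical ROI readings (arbitrary units) for illustration only:
pre, post, rle = liver_enhancement_indices(100.0, 125.0, 180.0, 120.0)
```

In this illustrative case the liver is hypointense relative to the spleen before contrast (LSCR 0.8) and hyperintense afterwards (LSCR 1.5), with an 80% relative enhancement; it is these quantities that were correlated with the biochemical liver function tests.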

  13. Correlation between detrended fluctuation analysis and the Lempel-Ziv complexity in nonlinear time series analysis

    International Nuclear Information System (INIS)

    Tang You-Fu; Liu Shu-Lin; Jiang Rui-Hong; Liu Ying-Hui

    2013-01-01

We study the correlation between detrended fluctuation analysis (DFA) and the Lempel-Ziv complexity (LZC) in nonlinear time series analysis in this paper. Typical dynamic systems including a logistic map and a Duffing model are investigated. Moreover, the influence of Gaussian random noise on both the DFA and LZC is analyzed. The results show a high correlation between the DFA and LZC, which can quantify the non-stationarity and the nonlinearity of the time series, respectively. With the enhancement of the random component, the exponent α and the normalized complexity index C show increasing trends. In addition, C is found to be more sensitive to fluctuations in the nonlinear time series than α. Finally, the correlation between the DFA and LZC is applied to the extraction of vibration signals for a reciprocating compressor gas valve, and an effective fault diagnosis result is obtained.
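A minimal DFA implementation illustrates how the exponent α is obtained: integrate the mean-subtracted series, detrend it linearly in windows of size n, and fit the slope of log F(n) versus log n. (Illustrative sketch, not the authors' code.)

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """First-order detrended fluctuation analysis: returns the scaling
    exponent alpha from the slope of log F(n) versus log n."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())          # integrated profile
    n_len = len(x)
    if scales is None:
        scales = np.unique(np.floor(
            np.logspace(np.log10(8), np.log10(n_len // 4), 12)).astype(int))
    F = []
    for n in scales:
        rms = []
        t = np.arange(n)
        for i in range(n_len // n):      # non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            coef = np.polyfit(t, seg, 1)             # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))              # fluctuation F(n)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return alpha

rng = np.random.default_rng(2)
alpha_wn = dfa_exponent(rng.standard_normal(4096))  # white noise
```

For uncorrelated white noise the exponent is close to 0.5, rising toward 1.5 for strongly persistent (integrated) signals; the abstract's observation is that α tracks the non-stationarity that the noise adds.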

  14. Non-Normality and Testing that a Correlation Equals Zero

    Science.gov (United States)

    Levy, Kenneth J.

    1977-01-01

    The importance of the assumption of normality for testing that a bivariate normal correlation equals zero is examined. Both empirical and theoretical evidence suggest that such tests are robust with respect to violation of the normality assumption. (Author/JKS)

  15. Neural Network-Based Coronary Heart Disease Risk Prediction Using Feature Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Jae Kwon Kim

    2017-01-01

Full Text Available Background. Of the machine learning techniques used in predicting coronary heart disease (CHD), the neural network (NN) is popularly used to improve performance accuracy. Objective. Even though NN-based systems provide meaningful results based on clinical experiments, medical experts are not satisfied with their predictive performance because an NN is trained in a "black-box" style. Method. We sought to devise an NN-based prediction of CHD risk using feature correlation analysis (NN-FCA) with two stages. In the first stage, feature selection, the features are ranked according to their importance in predicting CHD risk; in the second stage, feature correlation analysis, the correlations between the feature relations and the output of each NN predictor are determined. Result. Of the 4146 individuals in the Korean dataset evaluated, 3031 had low CHD risk and 1115 had high CHD risk. The area under the receiver operating characteristic (ROC) curve of the proposed model (0.749 ± 0.010) was larger than that of the Framingham risk score (FRS) (0.393 ± 0.010). Conclusions. The proposed NN-FCA, which utilizes feature correlation analysis, was found to be better than the FRS in terms of CHD risk prediction. Furthermore, the proposed model resulted in a larger area under the ROC curve and more accurate predictions of CHD risk in the Korean population than the FRS.

  16. Summary of CPAS EDU Testing Analysis Results

    Science.gov (United States)

    Romero, Leah M.; Bledsoe, Kristin J.; Davidson, John.; Engert, Meagan E.; Fraire, Usbaldo, Jr.; Galaviz, Fernando S.; Galvin, Patrick J.; Ray, Eric S.; Varela, Jose

    2015-01-01

The Orion program's Capsule Parachute Assembly System (CPAS) project is currently conducting its third generation of testing, the Engineering Development Unit (EDU) series. This series utilizes two test articles, a dart-shaped Parachute Compartment Drop Test Vehicle (PCDTV) and a capsule-shaped Parachute Test Vehicle (PTV), both of which include a full-size, flight-like parachute system and require a pallet delivery system for aircraft extraction. To date, 15 tests have been completed, including six with PCDTVs and nine with PTVs. Two of the PTV tests included the Forward Bay Cover (FBC) provided by Lockheed Martin. Advancements in modeling techniques applicable to parachute fly-out, vehicle rate of descent, torque, and load train also occurred during the EDU testing series. An upgrade from a composite to an independent parachute simulation allowed parachute modeling at a higher level of fidelity than during previous generations. The complexity of separating the test vehicles from their pallet delivery systems necessitated the use of the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulator for modeling mated-vehicle aircraft extraction and separation. This paper gives an overview of each EDU test and summarizes the development of CPAS analysis tools and techniques during EDU testing.

  17. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  18. Boundary correlators in supergroup WZNW models

    Energy Technology Data Exchange (ETDEWEB)

    Creutzig, T.; Schomerus, V.

    2008-04-15

We investigate correlation functions for maximally symmetric boundary conditions in the WZNW model on GL(1|1). Special attention is paid to volume-filling branes. Generalizing earlier ideas for the bulk sector, we set up a Kac-Wakimoto-like formalism for the boundary model. This first-order formalism is then used to calculate bulk-boundary 2-point functions and the boundary 3-point functions of the model. The note ends with a few comments on correlation functions of atypical fields, point-like branes and generalizations to other supergroups. (orig.)

  19. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
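The core idea, predicting each analyte from the rest of the panel and accumulating an error score with a CUSUM, can be sketched as follows. This simplified stand-in uses standardized regression residuals directly instead of the paper's logistic-regression score (and omits the day/time covariates), and all data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated panel: 200 patients x 5 correlated analytes (stand-in for Chem-14)
L = np.linalg.cholesky(0.6 * np.ones((5, 5)) + 0.4 * np.eye(5))
train = rng.standard_normal((200, 5)) @ L.T

# Step 1: predict analyte 0 from the remaining analytes by multiple regression
X = np.column_stack([np.ones(len(train)), train[:, 1:]])
beta, *_ = np.linalg.lstsq(X, train[:, 0], rcond=None)
resid_sd = np.std(train[:, 0] - X @ beta)

def cusum_flag(stream, beta, resid_sd, k=1.5, h=5.0):
    """One-sided CUSUM on standardized prediction residuals; returns the
    index at which the chart signals, or None. (Simplified stand-in for
    the paper's CUSUM-Logistic Regression error score.)"""
    s = 0.0
    for i, row in enumerate(stream):
        pred = beta[0] + beta[1:] @ row[1:]
        z = abs(row[0] - pred) / resid_sd    # standardized residual
        s = max(0.0, s + z - k)              # accumulate excess error
        if s > h:
            return i
    return None

# A 2-SD shift in analyte 0 (e.g., calibration drift) should signal quickly
shifted = rng.standard_normal((100, 5)) @ L.T
shifted[:, 0] += 2.0
signal_at = cusum_flag(shifted, beta, resid_sd)
```

Because the other analytes carry information about the expected value of each result, the chart detects a systematic shift within a handful of patient results, which is the short average run length the abstract reports.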

  20. Detrended cross-correlation analysis on RMB exchange rate and Hang Seng China Enterprises Index

    Science.gov (United States)

    Ruan, Qingsong; Yang, Bingchan; Ma, Guofeng

    2017-02-01

    In this paper, we investigate the cross-correlations between the Hang Seng China Enterprises Index and RMB exchange markets on the basis of a cross-correlation statistic test and multifractal detrended cross-correlation analysis (MF-DCCA). MF-DCCA has, at best, serious limitations for most of the signals describing complex natural processes and often indicates multifractal cross-correlations when there are none. In order to prevent these false multifractal cross-correlations, we apply MFCCA to verify the cross-correlations. Qualitatively, we find that the return series of the Hang Seng China Enterprises Index and RMB exchange markets were, overall, significantly cross-correlated based on the statistical analysis. Quantitatively, we find that the cross-correlations between the stock index and RMB exchange markets were strongly multifractal, and the multifractal degree of the onshore RMB exchange markets was somewhat larger than the offshore RMB exchange markets. Moreover, we use the absolute return series to investigate and confirm the fact of multifractality. The results from the rolling windows show that the short-term cross-correlations between volatility series remain high.
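The building block of such analyses is the detrended cross-correlation coefficient: the DCCA covariance of two integrated, window-detrended series, normalized by their DFA fluctuations. A minimal single-scale sketch is shown below (MF-DCCA and MFCCA add a moment order q and multiple scales, which are omitted here):

```python
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient rho_DCCA at one window
    size n: DCCA covariance over the DFA fluctuations of each series."""
    X = np.cumsum(x - np.mean(x))            # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n)
    f2xy = f2xx = f2yy = 0.0
    for i in range(len(x) // n):             # non-overlapping windows
        xs = X[i * n:(i + 1) * n]
        ys = Y[i * n:(i + 1) * n]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # detrended residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2xy += np.mean(rx * ry)
        f2xx += np.mean(rx * rx)
        f2yy += np.mean(ry * ry)
    return f2xy / np.sqrt(f2xx * f2yy)

# Two series sharing a common component (true correlation 0.8) versus
# two independent series, at window size 32:
rng = np.random.default_rng(4)
common = rng.standard_normal(4000)
x = common + 0.5 * rng.standard_normal(4000)
y = common + 0.5 * rng.standard_normal(4000)
r_dep = rho_dcca(x, y, 32)
r_ind = rho_dcca(rng.standard_normal(4000), rng.standard_normal(4000), 32)
```

The coefficient lies in [-1, 1] by the Cauchy-Schwarz inequality, staying near the underlying correlation for the dependent pair and near zero for the independent one.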

  1. Correlation functions of the Ising model and the eight-vertex model

    International Nuclear Information System (INIS)

    Ko, L.F.

    1986-01-01

Calculations for the two-point correlation functions in the scaling limit for two statistical models are presented. In Part I, the Ising model with a linear defect is studied for T < T/sub c/ and T > T/sub c/. The transfer matrix method of Onsager and Kaufman is used. The energy-density correlation is given by functions related to the modified Bessel functions. The dispersion expansions for the spin-spin correlation functions are derived. The dominant behavior for large separations at T not equal to T/sub c/ is extracted. It is shown that these expansions lead to systems of Fredholm integral equations. In Part II, the electric correlation function of the eight-vertex model for T < T/sub c/ is studied. The eight-vertex model decouples to two independent Ising models when the four-spin coupling vanishes. To first order in the four-spin coupling, the electric correlation function is related to a three-point function of the Ising model. This relation is systematically investigated and the full dispersion expansion (to first order in the four-spin coupling) is obtained. The result is a new kind of structure which, unlike those of many solvable models, is apparently not expressible in terms of linear integral equations

  2. Exploring inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video

    Science.gov (United States)

    Li, Jia; Tian, Yonghong; Gao, Wen

    2008-01-01

In recent years, the amount of streaming video has grown rapidly on the Web. Often, retrieving these streaming videos offers the challenge of indexing and analyzing the media in real time, because the streams must be treated as effectively infinite in length, thus precluding offline processing. Generally speaking, captions are important semantic clues for video indexing and retrieval. However, existing caption detection methods often have difficulty performing real-time detection for streaming video, and few of them address the differentiation of captions from scene texts and scrolling texts. In general, these texts play different roles in streaming video retrieval. To overcome these difficulties, this paper proposes a novel approach which explores inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video. In our approach, the inter-frame correlation information is used to distinguish caption texts from scene texts and scrolling texts. Moreover, wavelet-domain Generalized Gaussian Models (GGMs) are utilized to automatically remove non-text regions from each frame and keep only caption regions for further processing. Experiment results show that our approach is able to offer real-time caption detection with high recall and a low false alarm rate, and can also effectively discern caption texts from the other texts, even at low resolutions.

  3. Correlators in tensor models from character calculus

    Directory of Open Access Journals (Sweden)

    A. Mironov

    2017-11-01

Full Text Available We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank-r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.

  4. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and the root mean square error (RMSE).
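The quantity in question, corr(Y, E[Y|X]) for a Poisson GLM, can be estimated on simulated data as follows; the data-generating coefficients and the plain IRLS fit are illustrative, and this computes the traditional coefficient rather than the authors' modified one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated log-linear Poisson data (data-generating coefficients assumed).
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.4])
y = rng.poisson(np.exp(X @ beta_true))

# Fit the Poisson GLM with log link by IRLS (Fisher scoring).
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                  # working weights for the log link
    z = X @ beta + (y - mu) / mu            # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

mu = np.exp(X @ beta)
# Regression correlation coefficient: corr(Y, E[Y|X]), as defined in the abstract.
r = np.corrcoef(y, mu)[0, 1]
print("corr(Y, E[Y|X]) =", round(float(r), 3))
```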

  5. System Reliability Analysis Considering Correlation of Performances

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)

    2017-04-15

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.
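The gap between per-performance reliability and true system reliability can be seen in a small Monte Carlo sketch with a Gaussian copula. The correlation, margins, and sample size below are assumptions for illustration; the paper's copula family and performance functions may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two performance margins g1, g2 (safe when > 0) with normal marginals
# shifted to mean 2.0, linked by a Gaussian copula with assumed rho.
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
g = rng.multivariate_normal([2.0, 2.0], cov, size=200_000)

safe = (g > 0.0).all(axis=1)                 # system safe iff every performance is safe
r_system = safe.mean()                       # joint (system) reliability
r_indep = (g[:, 0] > 0).mean() * (g[:, 1] > 0).mean()  # independence assumption

print(round(float(r_system), 4), round(float(r_indep), 4))
```

For positively correlated performances the joint reliability exceeds the product of the individual reliabilities, which is exactly the discrepancy the abstract attributes to ignoring correlation.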

  6. System Reliability Analysis Considering Correlation of Performances

    International Nuclear Information System (INIS)

    Kim, Saekyeol; Lee, Tae Hee; Lim, Woochul

    2017-01-01

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.

  7. Connecting single-stock assessment models through correlated survival

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2017-01-01

    times. We propose a simple alternative. In three case studies each with two stocks, we improve the single-stock models, as measured by Akaike information criterion, by adding correlation in the cohort survival. To limit the number of parameters, the correlations are parameterized through...... the corresponding partial correlations. We consider six models where the partial correlation matrix between stocks follows a band structure ranging from independent assessments to complex correlation structures. Further, a simulation study illustrates the importance of handling correlated data sufficiently...... by investigating the coverage of confidence intervals for estimated fishing mortality. The results presented will allow managers to evaluate stock statuses based on a more accurate evaluation of model output uncertainty. The methods are directly implementable for stocks with an analytical assessment and do...

  8. Strength and deformability of hollow concrete blocks: correlation of block and cylindrical sample test results

    Directory of Open Access Journals (Sweden)

    C. S. Barbosa

    Full Text Available This paper deals with correlations among mechanical properties of hollow blocks and those of concrete used to make them. Concrete hollow blocks and test samples were moulded with plastic consistency concrete, to assure the same material in all cases, in three diferente levels of strength (nominally 10 N/mm², 20 N/mm² and 30 N/mm². The mechanical properties and structural behaviour in axial compression and tension tests were determined by standard tests in blocks and cylinders. Stress and strain analyses were made based on concrete’s modulus of elasticity obtained in the sample tests as well as on measured strain in the blocks’ face-shells and webs. A peculiar stress-strain analysis, based on the superposition of effects, provided an estimation of the block load capacity based on its deformations. In addition, a tentative method to preview the block deformability from the concrete mechanical properties is described and tested. This analysis is a part of a broader research that aims to support a detailed structural analysis of blocks, prisms and masonry constructions.

  9. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    Science.gov (United States)

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
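The link between the normalized correlation matrix, its condition number, and channel capacity can be illustrated with a 2x2 toy example. The log-det capacity expression and the rho/SNR values below are textbook assumptions, not the GBSBCM-derived correlation function of the paper.

```python
import numpy as np

# Toy 2x2 normalized correlation matrix for a MIMO channel; rho stands in
# for the correlation produced by the channel geometry (values assumed).
def capacity_bits(rho, snr=10.0, n_t=2):
    # Log-det capacity bound for a correlated channel: log2 det(I + (snr/n_t) R)
    R = np.array([[1.0, rho], [rho, 1.0]])
    return float(np.log2(np.linalg.det(np.eye(n_t) + (snr / n_t) * R)))

for rho in (0.0, 0.5, 0.9):
    R = np.array([[1.0, rho], [rho, 1.0]])
    print(rho, round(float(np.linalg.cond(R)), 2), round(capacity_bits(rho), 2))
```

Higher correlation inflates the condition number of the channel matrix and depresses capacity, the qualitative behavior the abstract analyzes.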

  10. Testing of technology readiness index model based on exploratory factor analysis approach

    Science.gov (United States)

    Ariani, AF; Napitupulu, D.; Jati, RK; Kadar, JA; Syafrullah, M.

    2018-04-01

SMEs readiness in using ICT will determine the adoption of ICT in the future. This study aims to evaluate the model of technology readiness in order to apply the technology to SMEs. The model is tested to find whether the TRI model is relevant to measure ICT adoption, especially for SMEs in Indonesia. The research method used in this paper is a survey of a group of SMEs in South Tangerang. The survey measures the readiness to adopt ICT based on four variables: Optimism, Innovativeness, Discomfort, and Insecurity. Each variable contains several indicators to make sure the variable is measured thoroughly. The data collected through the survey were analysed using the factor analysis method with the help of SPSS software. The result of this study shows that the TRI model gives more descendants on some indicators and variables. This result may be caused by the fact that SME owners’ knowledge is not homogeneous regarding either the technology they use or the type of their business.

  11. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    Science.gov (United States)

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

Time to donating blood plays a major role in a first-time donor becoming a regular one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 samples of first-time donors in Shahrekord Blood Transfusion Center, capital city of Chaharmahal and Bakhtiari Province, Iran were selected by systematic sampling and were followed up for five years. Among these samples, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed based on a log-normal hazard model with gamma correlated frailty. In this model, the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criteria using the BOA program in R. Age, job and education had a significant effect on the chance to donate blood (P < 0.05): the chances of donation for higher-aged donors, clericals, workers, free-job donors, students and educated donors were higher and, in turn, the time intervals between their blood donations were shorter. Due to the significant effect of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.

  12. The Leeb Hardness Test for Rock: An Updated Methodology and UCS Correlation

    Science.gov (United States)

    Corkum, A. G.; Asiri, Y.; El Naggar, H.; Kinakin, D.

    2018-03-01

The Leeb hardness test (LHT, with test value L_D) is a rebound hardness test, originally developed for metals, that has been correlated with the Unconfined Compressive Strength (test value σ_c) of rock by several authors. The tests can be carried out rapidly, conveniently and nondestructively on core and block samples or on rock outcrops. This makes the relatively small LHT device convenient for field tests. The present study compiles test data from literature sources and presents new laboratory testing carried out by the authors to develop a substantially expanded database with wide-ranging rock types. In addition, the number of impacts that should be averaged to comprise a "test result" was revisited, along with the issue of test specimen size. A correlation between L_D and σ_c for various rock types is provided along with a recommended testing methodology. The accuracy of correlated σ_c estimates was assessed, and reasonable correlations were observed between L_D and σ_c. The study findings show that the LHT can be useful particularly for field estimation of σ_c and offers a significant improvement over the conventional field estimation methods outlined by the ISRM (e.g., hammer blows). This test is rapid and simple, with relatively low equipment costs, and provides a reasonably accurate estimate of σ_c.
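Correlations of this kind are typically fitted as a power law on log-transformed data; a sketch on synthetic (L_D, σ_c) pairs follows. The exponent, scatter, and value ranges are invented for illustration and are not the paper's published correlation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic (L_D, sigma_c) pairs following an assumed power law
# sigma_c = a * L_D**b with lognormal scatter (all constants invented).
L_D = rng.uniform(300.0, 900.0, 60)
sigma_c = 1e-4 * L_D**2.0 * np.exp(rng.normal(0.0, 0.15, 60))

# Fit log(sigma_c) = log(a) + b*log(L_D) by ordinary least squares.
b, log_a = np.polyfit(np.log(L_D), np.log(sigma_c), 1)
print("fitted exponent b =", round(float(b), 2))
```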

  13. Derivation and application of mathematical model for well test analysis with variable skin factor in hydrocarbon reservoirs

    Directory of Open Access Journals (Sweden)

    Pengcheng Liu

    2016-06-01

Full Text Available Skin factor is often regarded as a constant in most mathematical models for well test analysis in oilfields, but this is only a simplified treatment, the actual skin factor being variable. This paper defined the average permeability of a damaged area as a function of time by using the definition of skin factor, thereby establishing a relationship between a variable skin factor and time. The variable skin factor so derived was introduced into existing traditional models in place of a constant skin factor, and the newly derived mathematical model for well test analysis considering a variable skin factor was solved by Laplace transform. The changes of the dimensionless wellbore pressure and its derivative with dimensionless time were plotted on double-logarithmic coordinates, and these plots can be used for type-curve fitting. The effects of all the parameters in the expression of the variable skin factor were analyzed based on the dimensionless wellbore pressure and its derivative. Finally, actual well testing data from Sheng-2 Block, Shengli Oilfield, China were used to fit the type curves developed, which validates the applicability of the mathematical model.

  14. RAYLEIGH SCATTERING MODELS WITH CORRELATION INTEGRAL

    Directory of Open Access Journals (Sweden)

    S. F. Kolomiets

    2014-01-01

Full Text Available This article offers one possible approach to the use of the classical correlation concept in Rayleigh scattering models. Classical correlation, in contrast to the three types of correlations corresponding to stochastic point flows, opens the door to an efficient explanation of the interaction between the periodical structure of incident radiation and the discrete stochastic structure of distributed scatterers typical for Rayleigh problems.

  15. Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for health care policy and decision-making.

    Science.gov (United States)

    Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J

    2018-07-01

    Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Damage correlation in theory and practice

    International Nuclear Information System (INIS)

    Doran, D.G.; Odette, G.R.; Simons, R.L.; Mansur, L.K.

    1977-01-01

    Common to all reactor development work is the problem of differences between the irradiation environments used for materials testing and those typical of service conditions. Efforts are being made to develop damage models that incorporate irradiation parameters such as type and energy of radiation, flux, and exposure. Models relating radiation damage production and microstructural evolution to changes in mechanical properties are primitive. Nevertheless, they suggest that the inability to account quantitatively for differences in test and service neutron spectra leads to overly conservative design of out-of-core components. Direct experimental corroboration is difficult because of the low neutron fluxes associated with the desired soft spectra. Further development of mechanistic models and new approaches to model testing are needed. Models of the growth stage of swelling, on the other hand, are relatively advanced. These models are discussed briefly as an example of how damage models can be used to help guide and analyze irradiation experiments. Accelerated damage studies using charged particles are expected to continue. Current empirical correlations of damage rates can be given a firmer theoretical basis as analysis of experiments and modeling of damage continue to improve. Damage correlation methodology practices in reactor design must necessarily follow different rules from that practiced in materials research and development. Nevertheless, decreasing the gap between them is a laudable objective with potentially significant economic impact

  17. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    Directory of Open Access Journals (Sweden)

    Mingwu Jin

    2012-01-01

    Full Text Available Local canonical correlation analysis (CCA is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM, a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.

  18. Pre-test analysis results of a PWR steel lined pre-stressed concrete containment model

    International Nuclear Information System (INIS)

    Basha, S.M.; Ghosh, Barnali; Patnaik, R.; Ramanujam, S.; Singh, R.K.; Kushwaha, H.S.; Venkat Raj, V.

    2000-02-01

Pre-stressed concrete nuclear containment serves as the ultimate barrier against the release of radioactivity to the environment. This ultimate barrier must be checked for its ultimate load carrying capacity. BARC participated in a Round Robin analysis activity, co-sponsored by Sandia National Laboratories, USA and the Nuclear Power Engineering Corporation, Japan, for the pre-test prediction of a 1:4 size Pre-stressed Concrete Containment Vessel. The in-house finite element code ULCA was used to make the test predictions of displacements and strains at the standard output locations. The present report focuses on the important landmarks of the pre-test results, in sequential terms of first crack appearance, loss of pre-stress, first through-thickness crack, rebar and liner yielding, and finally liner tearing at the ultimate load. Global and local failure modes of the containment have been obtained from the analysis. Finally, the sensitivity of the numerical results with respect to different types of liners and different constitutive models, in terms of bond strength between concrete and steel and tension-stiffening parameters, is examined. The report highlights the important features which could be observed during the test, and guidelines are given for improving the prediction in the post-test computation after the test data are available. (author)

  19. From micro-correlations to macro-correlations

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2016-01-01

Random vectors with a symmetric correlation structure share a common value of pair-wise correlation between their different components. The symmetric correlation structure appears in a multitude of settings, e.g. mixture models. In a mixture model the components of the random vector are drawn independently from a general probability distribution that is determined by an underlying parameter, and the parameter itself is randomized. In this paper we study the overall correlation of high-dimensional random vectors with a symmetric correlation structure. Considering such a random vector, and terming its pair-wise correlation “micro-correlation”, we use an asymptotic analysis to derive the random vector’s “macro-correlation”: a score that takes values in the unit interval, and that quantifies the random vector’s overall correlation. The method of obtaining macro-correlations from micro-correlations is then applied to a diverse collection of frameworks that demonstrate the method’s wide applicability.
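The mixture-model construction described here is easy to simulate: a shared latent factor induces the same pair-wise (micro) correlation between all components, and one simple overall summary, the correlation of a single component with the vector mean, tends to sqrt(rho) as the dimension grows. The paper's macro-correlation score is defined through its own asymptotic analysis and may differ; this sketch only illustrates the symmetric-correlation construction, with rho and the sizes assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

# Symmetric correlation structure via a shared latent factor:
# every pair of components has micro-correlation rho (values assumed).
rho, n, m = 0.25, 500, 4000            # dimension n, m sampled vectors
z = rng.normal(size=(m, 1))            # shared latent parameter
x = np.sqrt(rho) * z + np.sqrt(1 - rho) * rng.normal(size=(m, n))

# Empirical pair-wise (micro) correlation between two fixed components.
micro = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
# Correlation of one component with the vector mean; for large n this
# tends to sqrt(rho), one simple overall (macro) summary of the coupling.
macro = np.corrcoef(x[:, 0], x.mean(axis=1))[0, 1]
print(round(float(micro), 2), round(float(macro), 2))
```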

  20. Testing and reference model analysis of FTTH system

    Science.gov (United States)

    Feng, Xiancheng; Cui, Wanlong; Chen, Ying

    2009-08-01

With the rapid development of the Internet and broadband access networks, technologies such as xDSL, FTTx+LAN and WLAN have found more applications, and new network services emerge in an endless stream, especially network games, meeting TV, video on demand, etc. FTTH supports all present and future services with enormous bandwidth, including traditional telecommunication service, traditional data service and traditional TV service, as well as future digital TV and VOD. With its huge bandwidth, FTTH is regarded as the final solution for broadband networks and has become the ultimate goal of optical access network development. Fiber to the Home (FTTH) will be the goal of broadband telecommunications cable access. In accordance with the development trend of telecommunication services, to enhance the capacity of the integrated access network and to achieve triple-play (voice, data, image) based on the existing Fiber to the Curb (FTTC), Fiber to the Zone (FTTZ) and Fiber to the Building (FTTB) user optical cable networks, the optical fiber can be extended to the end-user in an FTTH system by using EPON technology. The article first introduces the basic components of the FTTH system, and then explains the reference model and reference points for testing of the FTTH system. Finally, by means of the testing connection diagram, the testing process and the expected results, it primarily analyzes SNI interface testing, PON interface testing, Ethernet performance testing, UNI interface testing, Ethernet functional testing, PON functional testing, equipment functional testing, telephone functional testing, operational support capability testing and other tests of the FTTH system.

  1. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially the algorithm on the basis of Kalman filter to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by EKF-Ah method under three operating conditions are quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using Kalman filter.

  2. Correlation between two-dimensional video analysis and subjective assessment in evaluating knee control among elite female team handball players

    DEFF Research Database (Denmark)

    Stensrud, Silje; Myklebust, Grethe; Kristianslund, Eirik

    2011-01-01

    . The present study investigated the correlation between a two-dimensional (2D) video analysis and subjective assessment performed by one physiotherapist in evaluating knee control. We also tested the correlation between three simple clinical tests using both methods. A cohort of 186 female elite team handball...

  3. Post-test analysis for the MIDAS DVI tests using MARS

    International Nuclear Information System (INIS)

    Bae, K. H.; Lee, Y. J.; Kwon, T. S.; Lee, W. J.; Kim, H. C.

    2002-01-01

Various DVI tests have been performed at the MIDAS test facility, which is a scaled facility of the APR1400 applying a modified linear scale ratio. The evaluation results for the various void height tests and direct bypass tests, using the multi-dimensional best-estimate analysis code MARS, show that: (a) the MARS code has an advanced modeling capability, predicting well the major multi-dimensional thermal-hydraulic phenomena occurring in the downcomer; (b) the MARS code under-predicts the steam condensation rates, which in turn causes it to over-predict the ECC bypass rates. However, the trends of decreasing steam condensation rate and increasing ECC bypass rate with increasing steam flow rate, and the calculated ECC bypass rates under the EM analysis conditions, generally agree with the test data

  4. Population models and simulation methods: The case of the Spearman rank correlation.

    Science.gov (United States)

    Astivia, Oscar L Olvera; Zumbo, Bruno D

    2017-11-01

    The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
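One concrete population model with a prescribed Spearman correlation uses a Gaussian copula together with the exact bivariate-normal relation rho_pearson = 2 sin(pi * rho_s / 6). Sampling from it, a simple alternative to the Iman-Conover reordering mentioned in the abstract, can be sketched as follows (the target rho_s and sample size are illustrative).

```python
import numpy as np

def sample_with_spearman(rho_s, n, rng):
    """Draw (x, y) whose population Spearman correlation is rho_s,
    using a Gaussian copula and the exact bivariate-normal relation
    rho_pearson = 2*sin(pi*rho_s/6)."""
    rho_p = 2.0 * np.sin(np.pi * rho_s / 6.0)
    cov = [[1.0, rho_p], [rho_p, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return z[:, 0], z[:, 1]

def spearman(x, y):
    # Spearman rho = Pearson correlation of the ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(4)
x, y = sample_with_spearman(0.5, 20_000, rng)
print(round(float(spearman(x, y)), 2))
```

Because the rank correlation is here a genuine population parameter, simulation results computed from such samples speak directly to the sampling behavior of the Spearman estimator, which is the paper's point.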

  5. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACA

    2011-12-01

    Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing, support risk assessment, and increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the distribution pattern of the correlation coefficients associated with simple linear relationships linking the structure of compounds with their activities. A limited data set comprising the most common ordnance compounds found at naval facilities, with a range of toxicities on aquatic ecosystems, and a set of seven properties was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% fit the Generalized Pareto distribution, and 12% fit the Pert distribution.
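
    Fitting a continuous distribution law to a sample of correlation coefficients can be sketched as follows; the Beta shape parameters and the sample itself are hypothetical, chosen only to exercise the fit (SciPy's floc/fscale arguments pin the support to [0, 1]):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical sample of correlation coefficients bounded in (0, 1)
r = stats.beta.rvs(5.0, 2.0, size=5000, random_state=rng)

# Maximum-likelihood Beta fit with the support fixed to [0, 1]
a, b, loc, scale = stats.beta.fit(r, floc=0.0, fscale=1.0)

# Kolmogorov-Smirnov goodness of fit against the fitted law
ks = stats.kstest(r, "beta", args=(a, b, loc, scale))
print(round(a, 1), round(b, 1), ks.pvalue > 0.05)
```

    The same pattern (fit, then goodness-of-fit test) applies to the Generalized Pareto or any other candidate law from `scipy.stats`.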

  6. Use of Quality Models and Indicators for Evaluating Test Quality in an ESP Course

    Directory of Open Access Journals (Sweden)

    IEVA RUDZINSKA

    2013-12-01

    Full Text Available Qualitative methods of assessment play a decisive role in education in general and in language learning in particular. The necessity to perform a qualitative assessment comes both from increased student competition in higher education institutions (HEIs), and hence higher demands for fair assessment, and from growing public awareness of higher education issues, and therefore the need to be accountable to a wider circle of stakeholders, including society as a whole. The aim of the present paper is to study the regulations and laws pertaining to assessment in Latvian HEIs, to analyze the literature on assessment in language testing, to select criteria characterizing the quality of English for Specific Purposes (ESP) tests, and to apply a model for evaluating the quality of a language test to the example of a test in sport English developed in a Latvian higher education institution. The analysis of the regulations, laws and literature enabled the development of a test quality model consisting of seven intrinsic quality criteria: clarity, adequacy, deep approach, attractiveness, originality/similarity, orientation towards student learning result/process, and test scoring objectivity/subjectivity. The quality criteria comprise eleven indicators. The reliability of the model is evaluated by means of Cronbach's alpha for the whole model, its criteria and indicators, and by point-biserial (item-total) correlations or discrimination indexes (DI). The test was taken by 63 participants, all of them second-year full-time students at a Latvian higher education institution. Statistical data analysis was performed with SPSS 17.0. The results show that, although test adequacy and clarity are sufficiently high, attractiveness and deep approach should be improved. Also, the reliability of one version of the test is higher than that of the other.
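
    The reliability indices named above (Cronbach's alpha and corrected item-total correlations) are short computations. A minimal sketch on simulated scores; the single-factor data, the 300 respondents, and the 11 indicators are hypothetical stand-ins for real test data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def item_total_correlations(items):
    """Corrected item-total (discrimination) index for each item:
    item score vs. the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical single-factor test: 300 respondents, 11 indicators
rng = np.random.default_rng(2)
ability = rng.normal(size=300)
scores = ability[:, None] + rng.normal(size=(300, 11))

print(round(cronbach_alpha(scores), 2))
print(item_total_correlations(scores).round(2))
```

    Items whose corrected item-total correlation is low relative to the rest are the usual candidates for revision when alpha is unsatisfactory.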

  7. Detrended fluctuation analysis made flexible to detect range of cross-correlated fluctuations

    Science.gov (United States)

    Kwapień, Jarosław; Oświęcimka, Paweł; Drożdż, Stanisław

    2015-11-01

    The detrended cross-correlation coefficient ρDCCA has recently been proposed to quantify the strength of cross-correlations on different temporal scales in bivariate, nonstationary time series. It is based on the detrended cross-correlation and detrended fluctuation analyses (DCCA and DFA, respectively) and can be viewed as an analog of the Pearson coefficient in the case of the fluctuation analysis. The coefficient ρDCCA works well in many practical situations, but by construction its applicability is limited to detecting whether two signals are generally cross-correlated, without the possibility of obtaining information on the amplitude of the fluctuations that are responsible for those cross-correlations. In order to introduce some related flexibility, here we propose an extension of ρDCCA that exploits the multifractal versions of DFA and DCCA: multifractal detrended fluctuation analysis and multifractal detrended cross-correlation analysis, respectively. The resulting new coefficient ρq not only is able to quantify the strength of correlations but also allows one to identify the range of detrended fluctuation amplitudes that are correlated in two signals under study. We show how the coefficient ρq works in practical situations by applying it to stochastic time series representing processes with long memory: autoregressive and multiplicative ones. Such processes are often used to model signals recorded from complex systems and complex physical phenomena like turbulence, so we are convinced that this new measure can successfully be applied in time-series analysis. In particular, we present an example of such an application to highly complex empirical data from financial markets. The present formulation can straightforwardly be extended to multivariate data in terms of the q-dependent counterpart of the correlation matrices and then to the network representation.
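
    The ordinary (q = 2) detrended cross-correlation coefficient that ρq generalizes can be sketched in a few lines: integrate both series, linearly detrend them in windows of scale s, and take the ratio of the mean detrended covariance to the geometric mean of the detrended variances. The window size, series length, and noise model below are arbitrary illustration choices:

```python
import numpy as np

def _detrended_cov(xa, ya, s):
    """Mean detrended covariance of profiles xa, ya over windows of size s."""
    n = (len(xa) // s) * s
    t = np.arange(s)
    covs = []
    for i in range(0, n, s):
        wx, wy = xa[i:i + s], ya[i:i + s]
        # remove a linear (polynomial order 1) trend from each window
        rx = wx - np.polyval(np.polyfit(t, wx, 1), t)
        ry = wy - np.polyval(np.polyfit(t, wy, 1), t)
        covs.append(np.mean(rx * ry))
    return np.mean(covs)

def rho_dcca(x, y, s):
    """Detrended cross-correlation coefficient at scale s."""
    xa = np.cumsum(x - np.mean(x))   # integrated profiles
    ya = np.cumsum(y - np.mean(y))
    f2x = _detrended_cov(xa, xa, s)
    f2y = _detrended_cov(ya, ya, s)
    f2xy = _detrended_cov(xa, ya, s)
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(3)
common = rng.normal(size=10_000)
x = common + 0.5 * rng.normal(size=10_000)
y = common + 0.5 * rng.normal(size=10_000)
print(round(rho_dcca(x, y, 100), 2))
```

    For these uncorrelated-in-time signals ρDCCA approaches the ordinary Pearson coefficient (about 0.8 for this noise level); its value for nonstationary, trended data is where it departs from Pearson.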

  8. A learning algorithm for adaptive canonical correlation analysis of several data sets.

    Science.gov (United States)

    Vía, Javier; Santamaría, Ignacio; Pérez, Jesús

    2007-01-01

    Canonical correlation analysis (CCA) is a classical tool in statistical analysis to find the projections that maximize the correlation between two data sets. In this work we propose a generalization of CCA to several data sets, which is shown to be equivalent to the classical maximum variance (MAXVAR) generalization proposed by Kettenring. The reformulation of this generalization as a set of coupled least squares regression problems is exploited to develop a neural structure for CCA. In particular, the proposed CCA model is a two-layer feedforward neural network with lateral connections in the output layer to achieve the simultaneous extraction of all the CCA eigenvectors through deflation. The CCA neural model is trained using a recursive least squares (RLS) algorithm. Finally, the convergence of the proposed learning rule is proved by means of stochastic approximation techniques, and its performance is analyzed through simulations.
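
    Classical two-set CCA, which the neural model above generalizes and solves adaptively, reduces in batch form to an SVD of the whitened cross-covariance. A small sketch on synthetic data sharing one latent component (the data sizes and noise levels are hypothetical, not from the paper):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data sets (rows = samples)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx, Cyy, Cxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    # M = Lx^{-1} Cxy Ly^{-T}; its singular values are the correlations
    M = np.linalg.solve(Lx, Cxy)
    M = np.linalg.solve(Ly, M.T).T
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(4)
latent = rng.normal(size=(2000, 1))
X = np.hstack([latent + 0.3 * rng.normal(size=(2000, 1)),
               rng.normal(size=(2000, 2))])
Y = np.hstack([latent + 0.3 * rng.normal(size=(2000, 1)),
               rng.normal(size=(2000, 2))])
print(canonical_correlations(X, Y).round(2))
```

    The first canonical correlation recovers the shared latent component; the remaining ones hover near zero. The RLS network in the abstract computes the same subspace without forming these covariance matrices explicitly.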

  9. Correlations of Platooning Track Test and Wind Tunnel Data

    Energy Technology Data Exchange (ETDEWEB)

    Lammert, Michael P. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kelly, Kenneth J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Yanowitz, Janet [Ecoengineering, Sharonville, OH (United States)

    2018-02-02

    In this report, the National Renewable Energy Laboratory (NREL) analyzed results from multiple, independent truck platooning projects, comparing and contrasting track test results with wind tunnel test results from Lawrence Livermore National Laboratory (LLNL). Compiled data and results from four independent SAE J1321 full-size track test campaigns were compared to the LLNL wind tunnel testing results. All platooning scenarios tested demonstrated significant fuel savings with good correlation relative to following distance, but there are still unanswered questions and clear opportunities for system optimization. NOx emissions showed improvements from NREL tests in 2014 to Auburn tests in 2015 with respect to J1321 platooning track testing of the Peloton system. NREL evaluated data from Volpe's Naturalistic Study of Truck Following Behavior, which showed minimal impact of naturalistic background platooning. We found significant correlation between multiple track studies, wind tunnel tests, and computational fluid dynamics, but also showed that there is more to learn regarding close-formation and longer-distance effects. We also identified potential areas for further research and development, including advanced aerodynamic designs optimized for platooning, measurement of platoon system performance in traffic conditions, the impact of vehicle lateral offsets on platooning performance, and characterization of the national potential for platooning based on fleet operational characteristics.

  10. Prospects of Frequency-Time Correlation Analysis for Detecting Pipeline Leaks by Acoustic Emission Method

    International Nuclear Information System (INIS)

    Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E

    2014-01-01

    In the current work, the relevance of developing nondestructive test methods for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widespread leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique of calculating the time-frequency correlation function is described. Experimental data are presented that demonstrate a clear advantage of the time-frequency correlation function over the simple correlation function: it is more effective in suppressing noise components in the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying the time-frequency correlation function to leak detection problems is the large number of calculations, which may further increase pipeline inspection time. This drawback can, however, be partially mitigated by the development and implementation of efficient (including parallel) algorithms for computing the fast Fourier transform on the central processing unit and graphics processing unit of a computer.
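
    The correlation-method baseline that the time-frequency function refines can itself be sketched via the FFT: two sensors record the same broadband leak noise with a relative delay, and the peak of the frequency-domain cross-correlation recovers that delay (the sample count, delay, and noise levels below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n, delay = 4096, 37          # record length; true sensor-to-sensor lag

src = rng.normal(size=n + delay)             # broadband leak noise
a = src[delay:] + 0.5 * rng.normal(size=n)   # sensor nearer the leak
b = src[:n] + 0.5 * rng.normal(size=n)       # sensor farther from the leak

# Circular cross-correlation computed through the fast Fourier transform
A, B = np.fft.rfft(a), np.fft.rfft(b)
xcorr = np.fft.irfft(B * np.conj(A), n=n)
lag = int(np.argmax(xcorr))
print(lag)  # → 37
```

    The time-frequency variant described in the abstract goes one step further, correlating the spectral components themselves so that out-of-band noise no longer dilutes the peak.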

  11. Interactive Correlation Analysis and Visualization of Climate Data

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)

    2016-09-21

    The relationship between our ability to analyze and extract insights from visualizations of climate model output and the capability of the available resources to produce those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, the number of different simulations performed with a climate model, or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is a need for new approaches to the visualization and analysis of climate data if we are to gain all the insights available in the ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. To that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.

  12. Development of realistic thermal-hydraulic system analysis codes ; development of thermal hydraulic test requirements for multidimensional flow modeling

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Kune Yull; Yoon, Sang Hyuk; Noh, Sang Woo; Lee, Il Suk [Seoul National University, Seoul (Korea)

    2002-03-01

    This study is concerned with developing a multidimensional flow model required for the system analysis code MARS to more mechanistically simulate a variety of thermal hydraulic phenomena in the nuclear steam supply system. The capability of the MARS code as a thermal hydraulic analysis tool for optimized system design can be expanded by improving the current calculational methods and adding new models. In this study the relevant literature was surveyed on the multidimensional flow models that may potentially be applied to the multidimensional analysis code. Research items were critically reviewed and suggested to better predict the multidimensional thermal hydraulic behavior and to identify test requirements. A small-scale preliminary test was performed in the downcomer formed by two vertical plates to analyze the multidimensional flow pattern in a simple geometry. The experimental results may be applied to the code for analysis of fluid impingement on the reactor downcomer wall. Also, data were collected to identify the controlling parameters for one-dimensional and multidimensional flow behavior. 22 refs., 40 figs., 7 tabs. (Author)

  13. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    Science.gov (United States)

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50).
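
    The separate univariate random-effects syntheses that the Riley working model builds on can be sketched with the DerSimonian-Laird estimator; the effect sizes and within-study variances below are made-up numbers for illustration, not data from the paper:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Univariate random-effects pooling with the DerSimonian-Laird tau^2."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

y = np.array([0.50, 0.05, 0.60, 0.10, 0.40])   # study effect estimates
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])   # within-study variances
mu, se, tau2 = dersimonian_laird(y, v)
print(round(mu, 3), round(se, 3), round(tau2, 3))
```

    Running this per outcome gives the inputs the Riley method needs; its contribution, and the robust variance estimator proposed in the abstract, concern how the two pooled estimates are then combined without within-study correlations.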

  14. Correlates of Unwanted Births in Bangladesh: A Study through Path Analysis.

    Science.gov (United States)

    Roy, Tapan Kumar; Singh, Brijesh P

    2016-01-01

    Unwanted birth is an important public health concern due to its negative association with adverse outcomes for mothers and children as well as with the socioeconomic development of a country. Although a number of studies have investigated the determinants of unwanted births through logistic regression analysis, an extensive assessment using a path model is lacking. In the current study, we applied path analysis to identify the important covariates of unwanted births in Bangladesh. The study used data extracted from the Bangladesh Demographic and Health Survey (BDHS) 2011. It considered a sub-sample of 7,972 women who had given birth in the five years preceding the date of interview or who were pregnant at the time of the survey. Correlation analysis was used to find significant associations with unwanted births, and the path model was used to determine the direct, indirect and total effects of socio-demographic factors on unwanted births. The results showed that more than one-tenth of recent births in Bangladesh were unwanted. The differentials of unwanted births were women's age, education, age at marriage, religion, socioeconomic status, exposure to mass media and use of family planning. The correlation analysis showed that unwanted births were significantly positively correlated with women's age and place of residence, and significantly inversely correlated with education and social status. The total effects of endogenous variables such as women's age, place of residence and use of family planning methods had a favorable effect on unwanted births. Policymakers and program planners need to design programs and services carefully to reduce unwanted births in Bangladesh; in particular, services should focus on helping those groups of women identified in the analysis as being at increased risk of unwanted births, such as older women.
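
    The direct/indirect decomposition a path model provides can be sketched with two ordinary least-squares regressions on simulated data; the variable names and path coefficients below are hypothetical stand-ins, not values from the BDHS analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
# Hypothetical paths on a continuous latent scale:
# education -> age at marriage (a-path); both -> unwanted-birth
# propensity (b-path and direct path c')
edu = rng.normal(size=n)
age_marry = 0.5 * edu + rng.normal(size=n)
outcome = -0.3 * edu - 0.4 * age_marry + rng.normal(size=n)

def ols_slopes(X, y):
    """OLS slope coefficients (intercept fitted, then dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(edu[:, None], age_marry)[0]               # a-path
b, c_direct = ols_slopes(np.column_stack([age_marry, edu]), outcome)
indirect = a * b                                          # mediated effect
total = c_direct + indirect
print(round(c_direct, 2), round(indirect, 2), round(total, 2))
```

    The estimated direct, indirect, and total effects recover the simulated paths (-0.3, -0.2, -0.5); the same decomposition underlies the table of effects reported in path-analysis studies.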

  15. Correlates of Unwanted Births in Bangladesh: A Study through Path Analysis.

    Directory of Open Access Journals (Sweden)

    Tapan Kumar Roy

    Full Text Available Unwanted birth is an important public health concern due to its negative association with adverse outcomes for mothers and children as well as with the socioeconomic development of a country. Although a number of studies have investigated the determinants of unwanted births through logistic regression analysis, an extensive assessment using a path model is lacking. In the current study, we applied path analysis to identify the important covariates of unwanted births in Bangladesh. The study used data extracted from the Bangladesh Demographic and Health Survey (BDHS) 2011. It considered a sub-sample of 7,972 women who had given birth in the five years preceding the date of interview or who were pregnant at the time of the survey. Correlation analysis was used to find significant associations with unwanted births, and the path model was used to determine the direct, indirect and total effects of socio-demographic factors on unwanted births. The results showed that more than one-tenth of recent births in Bangladesh were unwanted. The differentials of unwanted births were women's age, education, age at marriage, religion, socioeconomic status, exposure to mass media and use of family planning. The correlation analysis showed that unwanted births were significantly positively correlated with women's age and place of residence, and significantly inversely correlated with education and social status. The total effects of endogenous variables such as women's age, place of residence and use of family planning methods had a favorable effect on unwanted births. Policymakers and program planners need to design programs and services carefully to reduce unwanted births in Bangladesh; in particular, services should focus on helping those groups of women identified in the analysis as being at increased risk of unwanted births.

  16. Correlation between chronic arthritis patients confirmed with questionnaire and serologic test of Lyme disease

    Science.gov (United States)

    Rotan, H.; Ginting, Y.; Loesnihari, R.; Kembaren, T.; Marpaung, B.

    2018-03-01

    Lyme borreliosis is the most common tick-borne disease, and arthritis is a frequent later complication. The objective of this study was to determine the seroprevalence of Lyme disease and to evaluate its correlation with chronic arthritis. This cross-sectional epidemiologic study included 41 individuals with chronic arthritis and a history of tick bites, who completed questionnaires and underwent laboratory tests consisting of a routine blood sample, serum uric acid, and IgG ELISA for Lyme disease. Positive IgG for Lyme was found in 7.32% of the samples. Samples positive for Lyme IgG were further evaluated for rheumatology markers: three samples had a positive rheumatoid factor, two had positive anti-MCV, and one had slightly increased CRP. The three Lyme-positive samples had normal EULAR scores. These were the first Lyme disease cases found in Indonesia, specifically in four villages of Sibolangit, Deli Serdang, North Sumatera. Through analysis of the questionnaires and evaluation of the blood tests, positive Lyme disease was confirmed, and a correlation was found between chronic arthritis and a positive Lyme test.

  17. Canonical correlation analysis of synchronous neural interactions and cognitive deficits in Alzheimer's dementia

    Science.gov (United States)

    Karageorgiou, Elissaios; Lewis, Scott M.; Riley McCarten, J.; Leuthold, Arthur C.; Hemmy, Laura S.; McPherson, Susan E.; Rottunda, Susan J.; Rubins, David M.; Georgopoulos, Apostolos P.

    2012-10-01

    In previous work (Georgopoulos et al 2007 J. Neural Eng. 4 349-55) we reported on the use of magnetoencephalographic (MEG) synchronous neural interactions (SNI) as a functional biomarker in Alzheimer's dementia (AD) diagnosis. Here we report on the application of canonical correlation analysis to investigate the relations between SNI and cognitive neuropsychological (NP) domains in AD patients. First, we performed individual correlations between each SNI and each NP, which provided an initial link between SNI and specific cognitive tests. Next, we performed factor analysis on each set, followed by a canonical correlation analysis between the derived SNI and NP factors. This last analysis optimally associated the entire MEG signal with cognitive function. The results revealed that SNI as a whole were mostly associated with memory and language, and, slightly less, executive function, processing speed and visuospatial abilities, thus differentiating functions subserved by the frontoparietal and the temporal cortices. These findings provide a direct interpretation of the information carried by the SNI and set the basis for identifying specific neural disease phenotypes according to cognitive deficits.

  18. Analysis, scale modeling, and full-scale test of a railcar and spent-nuclear-fuel shipping cask in a high-velocity impact against a rigid barrier

    International Nuclear Information System (INIS)

    Huerta, M.

    1981-06-01

    This report describes the mathematical analysis, the physical scale modeling, and a full-scale crash test of a railcar spent-nuclear-fuel shipping system. The mathematical analysis utilized a lumped-parameter model to predict the structural response of the railcar and the shipping cask. The physical scale modeling analysis consisted of two crash tests that used 1/8-scale models to assess railcar and shipping cask damage. The full-scale crash test, conducted with retired railcar equipment, was carefully monitored with onboard instrumentation and high-speed photography. Results of the mathematical and scale modeling analyses are compared with the full-scale test. 29 figures
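
    A lumped-parameter impact model of the kind described reduces to coupled mass-spring equations integrated in time. The sketch below is a hypothetical illustration (the masses, stiffnesses, and impact speed are invented, not the report's parameters): a railcar mass crushes against a rigid barrier while the cask mass is coupled to it through a tie-down stiffness.

```python
import numpy as np

# Hypothetical lumped parameters: railcar mass m1 meets the barrier through
# crush stiffness k1; cask mass m2 is coupled via tie-down stiffness k2.
m1, m2 = 80e3, 40e3      # kg
k1, k2 = 5e7, 2e8        # N/m
v0 = 20.0                # impact speed, m/s
dt, steps = 1e-5, 20000  # 0.2 s of simulated time, semi-implicit Euler

x = np.zeros(2)          # positions of railcar (0) and cask (1)
v = np.array([v0, v0])
peak_g = 0.0
for _ in range(steps):
    f_barrier = -k1 * max(x[0], 0.0)   # barrier pushes back only in contact
    f_tie = -k2 * (x[0] - x[1])        # tie-down force on the railcar
    a1 = (f_barrier + f_tie) / m1
    a2 = -f_tie / m2                   # equal and opposite force on the cask
    v += dt * np.array([a1, a2])
    x += dt * v
    peak_g = max(peak_g, abs(a2) / 9.81)
print(round(peak_g, 1))
```

    Even this two-degree-of-freedom sketch reproduces the qualitative behavior the report's richer model captures: the cask's peak deceleration is set jointly by the barrier crush stiffness and the tie-down coupling, which is why both were instrumented in the full-scale test.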

  19. Tensile strength of concrete under static and intermediate strain rates: Correlated results from different testing methods

    International Nuclear Information System (INIS)

    Wu Shengxing; Chen Xudong; Zhou Jikai

    2012-01-01

    Highlights: ► Tensile strength of concrete increases with increase in strain rate. ► Strain rate sensitivity of tensile strength of concrete depends on test method. ► High stressed volume method can correlate results from various test methods. - Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determination of the tensile strength of concrete under low and intermediate strain rates. In addition, the objective of this investigation is to analyze the suitability of the high stressed volume approach and Weibull effective volume method to the correlation of the results of different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test, splitting tensile strength of concrete is more sensitive to an increase in the strain rate than flexural and direct tensile strength. The high stressed volume method could be used to obtain a tensile strength value of concrete, free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is an inadequate method for describing failure of concrete specimens determined by different testing methods.

  20. Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models

    Directory of Open Access Journals (Sweden)

    Bao Sheng Loe

    2018-04-01

    Full Text Available This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The foundation of the ANSIG is based on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package, and eleven were evaluated in this study. The 16-item ICAR (International Cognitive Ability Resource) short-form ability test was used to evaluate construct validity. The Rasch Model and two Linear Logistic Test Models (LLTM) were employed to estimate and predict the item parameters. Results indicate that a single factor determines performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with a high correlation found for the ICAR letter-numeric-series type items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.

  1. Correlation Based Testing for Passive Sonar Picture Rationalization

    National Research Council Canada - National Science Library

    Mellema, Garfield R

    2007-01-01

    The sample correlation coefficient is a statistical measure of relatedness. This paper describes the application of a test based on that measure to compare tracks produced by a probabilistic data association filter from a set of towed array sonar data.

  2. Formability models for warm sheet metal forming analysis

    Science.gov (United States)

    Jiang, Sen

    Several closed-form models for the prediction of strain-space sheet metal formability as a function of temperature and strain rate are proposed. The proposed models require only failure strain information from the uniaxial tension test at an elevated temperature and failure strain information from the traditionally defined strain-space forming limit diagram at room temperature, thereby offering a full forming limit description without expensive experimental studies for multiple modes of deformation at the elevated temperature. The Power-law, Voce, and Johnson-Cook hardening models are considered along with the Hill 48 and Logan-Hosford yield criteria. Acceptable correlations between theory and experiment are reported for all the models under a plane strain condition. Among the proposed models, the one combining Johnson-Cook hardening with Logan-Hosford yield behavior (the LHJC model) was shown to correlate best with experiment. The sensitivity of the model with respect to various forming parameters is discussed. This work is significant to those aiming to incorporate closed-form formability models directly into numerical simulation programs for the design and analysis of products manufactured through the warm sheet metal forming process. An improvement based upon Swift's diffuse necking theory is suggested in order to enhance the reliability of the model for biaxial stretch conditions. Theory relating to this improvement is provided in Appendix B.

  3. Correlation functions of two-matrix models

    International Nuclear Information System (INIS)

    Bonora, L.; Xiong, C.S.

    1993-11-01

    We show how to calculate correlation functions of two matrix models without any approximation technique (except for genus expansion). In particular we do not use any continuum limit technique. This allows us to find many solutions which are invisible to the latter technique. To reach our goal we make full use of the integrable hierarchies and their reductions which were shown in previous papers to naturally appear in multi-matrix models. The second ingredient we use, even though to a lesser extent, are the W-constraints. In fact an explicit solution of the relevant hierarchy, satisfying the W-constraints (string equation), underlies the explicit calculation of the correlation functions. The correlation functions we compute lend themselves to a possible interpretation in terms of topological field theories. (orig.)

  4. HIV Testing in Recent College Students: Prevalence and Correlates

    Science.gov (United States)

    Caldeira, Kimberly M.; Singer, Barbara J.; O'Grady, Kevin E.; Vincent, Kathryn B.; Arria, Amelia M.

    2012-01-01

    Prevalence and correlates of HIV testing were examined in a sample of 957 unmarried recent college students in the United States. Participants were asked about HIV testing, past-six-months sexual activities, lifetime treatment for sexually transmitted infections (STI), past-year health service utilization, and DSM-IV criteria for alcohol and other…

  5. Interpretation and modeling of a subsurface injection test, 200 East Area, Hanford, Washington

    International Nuclear Information System (INIS)

    Smoot, J.L.; Lu, A.H.

    1994-11-01

    A tracer experiment was conducted in 1980 and 1981 in the unsaturated zone in the southeast portion of the Hanford 200 East Area near the Plutonium-Uranium Extraction (PUREX) facility. The field design consisted of a central injection well with 32 monitoring wells within an 8-m radius. Water containing radioactive and other tracers was injected weekly during the experiment. The unique features of the experiment were the documented control of the inputs, the experiment's three-dimensional nature, the in-situ measurement of radioactive tracers, and the use of multiple injections. The spacing of the test wells provided reasonable lag distribution for spatial correlation analysis. Preliminary analyses indicated spatial correlation on the order of 400 to 500 cm in the vertical direction. Previous researchers found that two-dimensional axisymmetric modeling of moisture content generally underpredicts lateral spreading and overpredicts vertical movement of the injected water. Incorporation of anisotropic hydraulic properties resulted in the best model predictions. Three-dimensional modeling incorporated the geologic heterogeneity of discontinuous layers and lenses of sediment apparent in the site geology. Model results were compared statistically with measured experimental data and indicate reasonably good agreement with vertical and lateral field moisture distributions

  6. [Correlation and concordance between the national test of medicine (ENAM) and the grade point average (GPA): analysis of the peruvian experience in the period 2007 - 2009].

    Science.gov (United States)

    Huamaní, Charles; Gutiérrez, César; Mezones-Holguín, Edward

    2011-03-01

    To evaluate the correlation and concordance between the 'Peruvian National Exam of Medicine' (ENAM) and the grade point average (GPA) in recently graduated medical students in the period 2007 to 2009. We carried out a secondary data analysis, using the records of the physicians applying to the Rural and Urban Marginal Service in Health of Peru (SERUMS) processes for the years 2008 to 2010. We extracted from these registers the grades obtained in the ENAM and the GPA. We performed a descriptive analysis using medians and 1st and 3rd quartiles (q1/q3); we calculated the correlation between both scores using the Spearman correlation coefficient; additionally, we conducted a linear regression analysis, and the concordance was measured using the Bland and Altman coefficient. A total of 6,117 physicians were included; the overall median for the GPA was 13.4 (12.7/14.2) and for the ENAM was 11.6 (10.2/13.0). Of the total assessed, 36.8% failed the exam. We observed an increase in the annual median of ENAM scores, with a consequent decrease in the difference between both grades. The correlation between the ENAM and the GPA is direct and moderate (0.582), independent of the year, type of university management (public or private) and location. However, the concordance between both ratings is only fair, with a global coefficient of 0.272 (95% CI: 0.260 to 0.284). Independently of the year, location or type of university management, there is a moderate correlation between the ENAM and the GPA; however, there is only fair concordance between both grades.

  7. Analysis of Baryon Angular Correlations with Pythia

    CERN Document Server

    Mccune, Amara

    2017-01-01

    Our current understanding of baryon production is encompassed in the framework of the Lund String Fragmentation Model, which is then encoded in the Monte Carlo event generator program Pythia. In proton-proton collisions, daughter particles of the same baryon number produce an anti-correlation in $\Delta\eta\Delta\varphi$ space in ALICE data, while Pythia programs predict a correlation. To understand this unusual effect, where it comes from, and where our models of baryon production go wrong, correlation functions were systematically generated with Pythia. Effects of energy scaling, color reconnection, and popcorn parameters were investigated.

  8. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  10. Bi-directional gene set enrichment and canonical correlation analysis identify key diet-sensitive pathways and biomarkers of metabolic syndrome

    Directory of Open Access Journals (Sweden)

    Gaora Peadar Ó

    2010-10-01

    Full Text Available Abstract Background Currently, a number of bioinformatics methods are available to generate appropriate lists of genes from a microarray experiment. While these lists represent an accurate primary analysis of the data, fewer options exist to contextualise those lists. The development and validation of such methods is crucial to the wider application of microarray technology in the clinical setting. Two key challenges in clinical bioinformatics involve appropriate statistical modelling of dynamic transcriptomic changes, and extraction of clinically relevant meaning from very large datasets. Results Here, we apply an approach to gene set enrichment analysis that allows for detection of bi-directional enrichment within a gene set. Furthermore, we apply canonical correlation analysis and Fisher's exact test, using plasma marker data with known clinical relevance to aid identification of the most important gene and pathway changes in our transcriptomic dataset. After a 28-day dietary intervention with high-CLA beef, a range of plasma markers indicated a marked improvement in the metabolic health of genetically obese mice. Tissue transcriptomic profiles indicated that the effects were most dramatic in liver (1270 genes significantly changed; p Conclusion Bi-directional gene set enrichment analysis more accurately reflects dynamic regulatory behaviour in biochemical pathways, and as such highlighted biologically relevant changes that were not detected using a traditional approach. In such cases where transcriptomic response to treatment is exceptionally large, canonical correlation analysis in conjunction with Fisher's exact test highlights the subset of pathways showing strongest correlation with the clinical markers of interest. In this case, we have identified selenoamino acid metabolism and steroid biosynthesis as key pathways mediating the observed relationship between metabolic health and high-CLA beef. These results indicate that this type of

  11. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
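
The core of the PCC approach described above is simple enough to sketch. Below is a toy, hedged illustration of the idea (not the authors' implementation): build a distance-based interaction matrix for each structure, eigendecompose it, and correlate the leading components. All coordinates and parameters here are invented.

```python
import numpy as np

def interaction_matrix(coords):
    # Symmetric pairwise-distance matrix between structural elements
    # (one row of `coords` per element).
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def pcc_similarity(coords_a, coords_b, n_components=3):
    # Correlate the leading eigenvectors ("principle components") of the two
    # interaction matrices; eigenvector sign is arbitrary, so use |r|.
    wa, va = np.linalg.eigh(interaction_matrix(coords_a))
    wb, vb = np.linalg.eigh(interaction_matrix(coords_b))
    ia = np.argsort(-np.abs(wa))[:n_components]
    ib = np.argsort(-np.abs(wb))[:n_components]
    corrs = [abs(np.corrcoef(va[:, i], vb[:, j])[0, 1]) for i, j in zip(ia, ib)]
    return float(np.mean(corrs))

# A rigid rotation preserves all pairwise distances, so the two interaction
# matrices coincide and the similarity should come out at ~1.
rng = np.random.default_rng(0)
pts = rng.standard_normal((12, 3))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
sim = pcc_similarity(pts, pts @ rot.T)
```

Genuinely different folds would score lower, which is the basis for using the correlation as a similarity measure.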

  12. Two-dimensional vertical model seismic test and analysis for HTGR core

    International Nuclear Information System (INIS)

    Ikushima, Takeshi; Honma, Toshiaki.

    1983-02-01

    The resistance against earthquakes of a high-temperature gas cooled reactor (HTGR) core with block-type fuels is not fully ascertained yet. Seismic studies must be made if such a reactor plant is to be installed in areas with frequent earthquakes. In this paper the test results of the seismic behavior of a half-scale two-dimensional vertical slice core model, together with analysis, are presented. The following results were obtained: (1) With soft spring support of the fixed side reflector structure, the relative column displacement is larger than that for hard support but the impact reaction force is smaller. (2) In the case of hard spring support the dowel force is smaller than for soft support. (3) The relative column displacement is larger in the core center than at the periphery. The impact acceleration (force) in the center is smaller than at the periphery. (4) The relative column displacement and impact reaction force are smaller with the gas pressure simulation spring than without. (5) With decreasing gap width between the top blocks of columns, the relative column displacement and impact reaction force decrease. (6) The column damping ratio was estimated as 4-10% of critical. (7) The maximum impact reaction force for random waves such as seismic waves was below 60% of that for a sinusoidal wave. (8) Vibration behavior and impact response are in good agreement between test and analysis. (author)

  13. Nuclear Test Depth Determination with Synthetic Modelling: Global Analysis from PNEs to DPRK-2016

    Science.gov (United States)

    Rozhkov, Mikhail; Stachnik, Joshua; Baker, Ben; Epiphansky, Alexey; Bobrov, Dmitry

    2016-04-01

    Seismic event depth determination is critical for the event screening process at the International Data Center (IDC), CTBTO. A thorough determination of the event depth can mostly be conducted only through additional special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have depth constrained to the surface, making the depth screening criterion not applicable. Further, it may result in a heavier workload to manually distinguish between subsurface and deeper crustal events. Since the shape of the first few seconds of signal of very shallow events is very sensitive to the depth phases, cross correlation between observed and theoretical seismograms can provide a basis for event depth estimation, and so an extension of the screening process. We applied this approach mostly to events at teleseismic and partially regional distances. The approach was found efficient for the seismic event screening process, with certain caveats related mostly to poorly defined source and receiver crustal models, which can shift the depth estimate. An adjustable teleseismic attenuation model (t*) for synthetics was used, since this characteristic is not known for most of the rays we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNE) with presumably known depths, and recent DPRK nuclear tests. The teleseismic synthetic approach is based on the stationary phase approximation with the hudson96 program, and the regional modelling was done with the generalized ray technique by Vlastislav Cerveny, modified to account for the complex source topography. The software prototype is designed to be used for the Expert Technical Analysis at the IDC. With this, the design effectively reuses the NDC-in-a-Box code and can be comfortably utilized by the NDC users. The package uses Geotool as a front-end for data
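
The depth-from-depth-phase idea can be sketched numerically. The toy below is my illustration, not the IDC software: the velocity, pulse shape, and the pP delay model (two-way vertical travel time) are all invented. It generates synthetics for trial depths and picks the depth whose waveform best cross-correlates with the "observed" trace.

```python
import numpy as np

def synthetic(depth_km, t, v_km_s=4.0):
    # Direct P arrival as a Gaussian pulse, plus a surface-reflected depth
    # phase (pP) delayed by the two-way vertical travel time 2*depth/v.
    pulse = lambda t0: np.exp(-((t - t0) ** 2) / 0.02)
    return pulse(1.0) - 0.7 * pulse(1.0 + 2.0 * depth_km / v_km_s)

t = np.linspace(0.0, 6.0, 1200)
observed = synthetic(2.0, t)                   # pretend this was recorded
trial_depths = np.arange(0.5, 5.01, 0.1)
cc = [np.corrcoef(observed, synthetic(h, t))[0, 1] for h in trial_depths]
best_depth = float(trial_depths[int(np.argmax(cc))])
```

With real data the correlation peak broadens and can shift when the crustal model or attenuation (t*) is wrong, which is exactly the caveat the abstract raises.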

  14. RELAP5/MOD2 models and correlations

    International Nuclear Information System (INIS)

    Dimenna, R.A.; Larson, J.R.; Johnson, R.W.; Larson, T.K.; Miller, C.S.; Streit, J.E.; Hanson, R.G.; Kiser, D.M.

    1988-08-01

    A review of the RELAP5/MOD2 computer code has been performed to assess the basis for the models and correlations comprising the code. The review has included verification of the original data base, including thermodynamic, thermal-hydraulic, and geothermal conditions; simplifying assumptions in implementation or application; and accuracy of implementation compared to documented descriptions of each of the models. An effort has been made to provide the reader with an understanding of what is in the code and why it is there and to provide enough information that an analyst can assess the impact of the correlation or model on the ability of the code to represent the physics of a reactor transient. Where assessment of the implemented versions of the models or correlations has been accomplished and published, the assessment results have been included

  15. New modelling of transient well test and rate decline analysis for a horizontal well in a multiple-zone reservoir

    International Nuclear Information System (INIS)

    Nie, Ren-Shi; Guo, Jian-Chun; Jia, Yong-Lu; Zhu, Shui-Qiao; Rao, Zheng; Zhang, Chun-Guang

    2011-01-01

    No type curve with negative skin of a horizontal well has been found in the current research. Negative skin is very significant to transient well test and rate decline analysis. This paper first presents the negative skin problem, whereby the type curves with negative skin of a horizontal well are oscillatory. In order to solve the problem, we propose a new model of transient well test and rate decline analysis for a horizontal well in a multiple-zone composite reservoir. A new dimensionless definition of r_D is introduced in the dimensionless mathematical modelling under different boundaries. The model is solved using the Laplace transform and separation of variables techniques. In Laplace space, the solutions for both constant rate production and constant wellbore pressure production are expressed in a unified formula. We provide graphs and thorough analysis of the new standard type curves for both well test and rate decline analysis; the characteristics of the type curves are the reflections of horizontal well production in a multiple-zone reservoir. An important contribution of our paper is that our model removed the oscillation in type curves and thus solved the negative skin problem. We also show that the characteristics of type curves depend heavily on the properties of the different zones, skin factor, well length, formation thickness, etc. Our research can be applied to a real case study
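
Solutions obtained in Laplace space, as in this model, must be brought back to the time domain numerically to plot type curves. A common choice in well-test analysis is the Gaver-Stehfest algorithm; the sketch below is a generic implementation (the abstract does not state which inversion scheme the authors use), verified against transforms with known inverses.

```python
import math

def stehfest_weights(N=12):
    # Gaver-Stehfest coefficients V_k (N must be even).
    half = N // 2
    V = []
    for k in range(1, N + 1):
        acc = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            acc += (j ** half * math.factorial(2 * j) /
                    (math.factorial(half - j) * math.factorial(j) *
                     math.factorial(j - 1) * math.factorial(k - j) *
                     math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * acc)
    return V

def laplace_invert(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# Sanity checks against transforms with known inverses:
err_ramp = abs(laplace_invert(lambda s: 1.0 / s ** 2, 2.5) - 2.5)          # L{t} = 1/s^2
err_exp = abs(laplace_invert(lambda s: 1.0 / (s + 1.0), 1.0) - math.exp(-1.0))
```

Stehfest works well for the smooth, monotone pressure solutions typical of well testing; oscillatory transforms need other inversion methods.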

  16. Correlation of accelerometry with clinical balance tests in older fallers and non-fallers.

    LENUS (Irish Health Repository)

    O'Sullivan, Maura

    2012-02-01

    BACKGROUND: falls are a common cause of injury and decreased functional independence in the older adult. Diagnosis and treatment of fallers require tools that accurately assess physiological parameters associated with balance. Validated clinical tools include the Berg Balance Scale (BBS) and the Timed Up and Go test (TUG); however, the BBS tends to be subjective in nature, while the TUG quantifies an individual's functional impairment but requires further subjective evaluation for balance assessment. Other quantitative alternatives to date require expensive, sophisticated equipment. Measurement of the acceleration of the centre of mass, with relatively inexpensive, lightweight, body-mounted accelerometers, is a potential solution to this problem. OBJECTIVES: to determine (i) if accelerometry correlates with standard clinical tests (BBS and TUG), (ii) to characterise accelerometer responses to increasingly difficult challenges to balance and (iii) to characterise acceleration patterns between fallers and non-fallers. Study design and setting: torso accelerations were measured at the level of L3 using a tri-axial accelerometer under four conditions; standing unsupported with eyes open (EO), eyes closed (EC) and on a mat with eyes open (MAT EO) and closed (MAT EC). Older patients (n = 21, 8 males, 13 females) with a mean age of 78 (SD ± 7.6) years who attended a day hospital were recruited for this study. Patients were identified as fallers or non-fallers based on a comprehensive falls history. MEASUREMENTS: Spearman's rank correlation analysis examined the relationship between acceleration root mean square (RMS) data and the BBS, while Pearson's correlation was used with TUG scores. Differences in accelerometer RMS between fallers and non-fallers and between test conditions were examined using t-tests and non-parametric alternatives where appropriate. RESULTS: there was a stepwise increase in accelerometer RMS with increasing task complexity, and the accelerometer
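
The two computations at the heart of this study, RMS of torso acceleration and Spearman's rank correlation against BBS scores, are easy to sketch. The cohort below is entirely invented for illustration, and the rank routine does not handle ties.

```python
import numpy as np

def rms(a):
    # Root-mean-square amplitude of a mean-removed acceleration trace.
    a = np.asarray(a, dtype=float)
    return float(np.sqrt(np.mean((a - a.mean()) ** 2)))

def spearman(x, y):
    # Spearman's rank correlation: Pearson r of the ranks (no tie handling).
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

# Invented cohort: patients with larger torso-sway RMS get lower BBS scores,
# i.e. we expect a strong negative rank correlation.
rng = np.random.default_rng(3)
sway_rms = [rms(s * rng.standard_normal(500)) for s in (0.2, 0.4, 0.6, 0.8, 1.0, 1.2)]
bbs_scores = [54, 50, 47, 41, 36, 30]          # hypothetical Berg Balance scores
rho = spearman(sway_rms, bbs_scores)
```

Spearman is the right choice against the BBS because the scale is ordinal; against the continuous TUG times the study uses Pearson instead.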

  17. Estimate of correlated and uncorrelated uncertainties associated with performance tests of activity meters

    International Nuclear Information System (INIS)

    Sousa, C.H.S.; Teixeira, G.J.; Peixoto, J.G.P.

    2014-01-01

    Activity meters should undergo performance tests to verify their functionality, as set out in technical recommendations. This study estimated the expanded uncertainties, correlated and uncorrelated, associated with test results obtained on three instruments: two detectors with ionization chambers and one with Geiger-Mueller tubes. For this we used a standard reference source, screened and certified by the National Institute of Technology and Standardization. The methodology of this research was based on the protocols listed in the technical document of the International Atomic Energy Agency. Subsequently, two quantities were correlated, presenting real correlation and improving the expanded uncertainty to 3.7%. (author)

  18. Analysis and optimization of blood-testing procedures.

    NARCIS (Netherlands)

    Bar-Lev, S.K.; Boxma, O.J.; Perry, D.; Vastazos, L.P.

    2017-01-01

    This paper is devoted to the performance analysis and optimization of blood testing procedures. We present a queueing model of two queues in series, representing the two stages of a blood-testing procedure. Service (testing) in stage 1 is performed in batches, whereas it is done individually in
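
The abstract is truncated, but the efficiency argument for batch testing in stage 1 can be illustrated with the classical Dorfman pooling calculation. This is an assumption on my part, not necessarily the authors' queueing model: pool k samples, test the pool once, and retest individually only if the pool is positive.

```python
def expected_tests_per_item(p, k):
    # Dorfman pooling: 1/k pooled tests per item, plus k individual retests
    # whenever the pool is positive, which happens with probability 1-(1-p)^k.
    return 1.0 / k + 1.0 - (1.0 - p) ** k

prevalence = 0.01                      # hypothetical fraction of positive samples
best_k = min(range(2, 51), key=lambda k: expected_tests_per_item(prevalence, k))
saving = 1.0 - expected_tests_per_item(prevalence, best_k)
```

At 1% prevalence the optimal batch size is 11 and roughly 80% of tests are saved; the queueing question the paper studies is how this batching interacts with waiting times across the two stages.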

  19. Development of Vehicle Model Test for Road Loading Analysis of Sedan Model

    Science.gov (United States)

    Mohd Nor, M. K.; Noordin, A.; Ruzali, M. F. S.; Hussen, M. H.

    2016-11-01

    The Simple Structural Surfaces (SSS) method is offered as a means of organizing the process of rationalizing the load paths in a basic vehicle body structure. The application of this simplified approach is highly beneficial in the design development of a modern passenger car structure, especially during the conceptual stage. In Malaysia, however, there is no real physical model of SSS available to gain considerable insight into, and understanding of, the function of each major subassembly in the whole vehicle structure. Based on this motivation, a physical model of SSS for a sedan model, with corresponding model vehicle tests of bending and torsion, is proposed in this work. The proposed approach is relatively easy to understand compared to the Finite Element Method (FEM). The results show that the proposed vehicle model test is capable of demonstrating that satisfactory load paths can give sufficient structural stiffness within the vehicle structure. It is clearly observed that the global bending stiffness reduces significantly when more panels are removed from a complete SSS model. The parcel shelf is identified as an important subassembly in sustaining the bending load. The results also match the theoretical hypothesis: the stiffness of the structure in an open-section condition is weak when subjected to a torsion load, compared to a bending load. The proposed approach can potentially be integrated with FEM to speed up the design process of an automotive vehicle.

  20. A new detrended semipartial cross-correlation analysis: Assessing the important meteorological factors affecting API

    International Nuclear Information System (INIS)

    Shen, Chen-Hua

    2015-01-01

    To analyze the unique contribution of meteorological factors to the air pollution index (API), a new method, the detrended semipartial cross-correlation analysis (DSPCCA), is proposed. Based on both a detrended cross-correlation analysis and a DFA-based multivariate-linear-regression (DMLR), this method is improved by including a semipartial correlation technique, which is used to indicate the unique contribution of an explanatory variable to multiple correlation coefficients. The advantages of this method in handling nonstationary time series are illustrated by numerical tests. To further demonstrate the utility of this method in environmental systems, new evidence of the primary contribution of meteorological factors to API is provided through DMLR. Results show that the most important meteorological factors affecting API are wind speed and diurnal temperature range, and the explanatory ability of meteorological factors to API gradually strengthens with increasing time scales. The results suggest that DSPCCA is a useful method for addressing environmental systems. - Highlights: • A detrended multiple linear regression is shown. • A detrended semipartial cross correlation analysis is proposed. • The important meteorological factors affecting API are assessed. • The explanatory ability of meteorological factors to API gradually strengthens with increasing time scales.
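
DSPCCA builds on detrended cross-correlation analysis. As background, here is a minimal DCCA cross-correlation coefficient (the DCCA part only, without the semipartial extension or the DFA-based regression the abstract describes): integrate each series, remove a linear trend in non-overlapping boxes of size n, and form the normalised detrended covariance.

```python
import numpy as np

def dcca_coefficient(x, y, n):
    # Detrended cross-correlation coefficient rho_DCCA at box size n:
    # the detrended covariance normalised by the two DFA fluctuations.
    X = np.cumsum(x - np.mean(x))      # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n)
    f_xy = f_xx = f_yy = 0.0
    for start in range(0, len(X) - n + 1, n):
        xs, ys = X[start:start + n], Y[start:start + n]
        # Remove the local linear trend in each box.
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return float(f_xy / np.sqrt(f_xx * f_yy))

rng = np.random.default_rng(0)
common = rng.standard_normal(2000)
x = common + 0.2 * rng.standard_normal(2000)   # two series sharing a component
y = common + 0.2 * rng.standard_normal(2000)
rho_shared = dcca_coefficient(x, y, 20)
rho_indep = dcca_coefficient(rng.standard_normal(2000),
                             rng.standard_normal(2000), 20)
```

Series sharing a common component give a coefficient near one, independent noise gives a value near zero; the semipartial step then isolates the unique contribution of one explanatory factor from such coefficients.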

  2. Recommendations for analysis of repeated-measures designs: testing and correcting for sphericity and use of manova and mixed model analysis.

    Science.gov (United States)

    Armstrong, Richard A

    2017-09-01

    A common experimental design in ophthalmic research is the repeated-measures design in which at least one variable is a within-subject factor. This design is vulnerable to lack of 'sphericity', which assumes that the variances of the differences among all possible pairs of within-subject means are equal. Traditionally, this design has been analysed using a repeated-measures analysis of variance (RM-anova) but increasingly more complex methods such as multivariate anova (manova) and mixed model analysis (MMA) are being used. This article surveys current practice in the analysis of designs incorporating different factors in research articles published in three optometric journals, namely Ophthalmic and Physiological Optics (OPO), Optometry and Vision Science (OVS), and Clinical and Experimental Optometry (CXO), and provides advice to authors regarding the analysis of repeated-measures designs. Of the total sample of articles, 66% used a repeated-measures design. Of those articles using a repeated-measures design, 59% and 8% analysed the data using RM-anova or manova respectively and 33% used MMA. The use of MMA relative to RM-anova has increased significantly since 2009/10. A further search using terms to select those papers testing and correcting for sphericity ('Mauchly's test', 'Greenhouse-Geisser', 'Huynh-Feldt') identified 66 articles, 62% of which were published from 2012 to the present. If the design is balanced without missing data then manova should be used rather than RM-anova, as it gives better protection against lack of sphericity. If the design is unbalanced or has missing data then MMA is the method of choice. However, MMA is a more complex analysis and can be difficult to set up and run, and care should be taken first, to define appropriate models to be tested and second, to ensure that sample sizes are adequate. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
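
The sphericity corrections mentioned in the survey rescale the ANOVA degrees of freedom by an epsilon estimated from the data. Below is a minimal sketch of Box's epsilon, the quantity the Greenhouse-Geisser correction estimates, computed from an n-subjects-by-k-conditions matrix; the data are invented.

```python
import numpy as np

def box_epsilon(data):
    # Box's epsilon from an n-subjects x k-conditions matrix, via the
    # eigenvalues of the double-centred covariance of the conditions.
    # Bounds: 1/(k-1) <= eps <= 1, with eps = 1 when sphericity holds.
    k = data.shape[1]
    S = np.cov(data, rowvar=False)
    H = np.eye(k) - np.ones((k, k)) / k        # centring matrix
    lam = np.linalg.eigvalsh(H @ S @ H)
    return float(lam.sum() ** 2 / ((k - 1) * np.sum(lam ** 2)))

# Extreme violation: every condition is a scaled copy of one subject effect,
# so the double-centred covariance has rank 1 and epsilon hits its floor.
rng = np.random.default_rng(0)
base = rng.standard_normal(40)                       # 40 invented subjects
data = np.outer(base, [1.0, 2.0, 3.0, 4.0])          # k = 4 conditions
eps = box_epsilon(data)
```

The corrected test multiplies both ANOVA degrees of freedom by this epsilon, so strong violations (epsilon near 1/(k-1)) make the F-test much more conservative.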

  3. Multicollinearity in canonical correlation analysis in maize.

    Science.gov (United States)

    Alves, B M; Cargnelutti Filho, A; Burin, C

    2017-03-30

    The objective of this study was to evaluate the effects of multicollinearity under two methods of canonical correlation analysis (with and without elimination of variables) in maize (Zea mays L.) crop. Seventy-six maize genotypes were evaluated in three experiments, conducted in a randomized block design with three replications, during the 2009/2010 crop season. Eleven agronomic variables (number of days from sowing until female flowering, number of days from sowing until male flowering, plant height, ear insertion height, ear placement, number of plants, number of ears, ear index, ear weight, grain yield, and one thousand grain weight), 12 protein-nutritional variables (crude protein, lysine, methionine, cysteine, threonine, tryptophan, valine, isoleucine, leucine, phenylalanine, histidine, and arginine), and 6 energetic-nutritional variables (apparent metabolizable energy, apparent metabolizable energy corrected for nitrogen, ether extract, crude fiber, starch, and amylose) were measured. A phenotypic correlation matrix was first generated among the 29 variables for each of the experiments. A multicollinearity diagnosis was later performed within each group of variables using methodologies such as variance inflation factor and condition number. Canonical correlation analysis was then performed, with and without the elimination of variables, among groups of agronomic and protein-nutritional, and agronomic and energetic-nutritional variables. The canonical correlation analysis in the presence of multicollinearity (without elimination of variables) overestimates the variability of canonical coefficients. The elimination of variables is an efficient method to circumvent multicollinearity in canonical correlation analysis.
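
One of the diagnostics used in this study, the variance inflation factor, is straightforward to compute: regress each variable on all the others and take 1/(1-R²). A self-contained sketch with invented data (not the maize dataset):

```python
import numpy as np

def vif(X):
    # Variance inflation factor per column: 1/(1-R^2) from regressing that
    # column on all the others (columns are mean-centred first, so no
    # intercept term is needed).
    Xc = X - X.mean(axis=0)
    out = []
    for j in range(Xc.shape[1]):
        y = Xc[:, j]
        Z = np.delete(Xc, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 3))                   # three independent traits
low_vifs = vif(X)
# Appending a near-duplicate of column 0 creates severe multicollinearity.
X_collinear = np.column_stack([X, X[:, 0] + 0.01 * rng.standard_normal(400)])
high_vifs = vif(X_collinear)
```

Independent variables give VIFs near 1; the near-duplicate column drives its VIF into the thousands, which is the situation the authors address by eliminating variables before the canonical correlation analysis.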

  4. Asset correlations and credit portfolio risk: an empirical analysis

    OpenAIRE

    Düllmann, Klaus; Scheicher, Martin; Schmieder, Christian

    2007-01-01

    In credit risk modelling, the correlation of unobservable asset returns is a crucial component for the measurement of portfolio risk. In this paper, we estimate asset correlations from monthly time series of Moody's KMV asset values for around 2,000 European firms from 1996 to 2004. We compare correlation and value-at-risk (VaR) estimates in a one-factor or market model and a multi-factor or sector model. Our main finding is a complex interaction of credit risk correlations and default probabi...

  5. Detrended cross-correlation analysis of electroencephalogram

    International Nuclear Information System (INIS)

    Wang Jun; Zhao Da-Qing

    2012-01-01

    In the paper we use detrended cross-correlation analysis (DCCA) to study the electroencephalograms of healthy young subjects and healthy old subjects. It is found that the cross-correlation between different leads of a healthy young subject is larger than that of a healthy old subject. It was shown that the cross-correlation relationship decreases with the aging process and the phenomenon can help to diagnose whether the subject's brain function is healthy or not. (interdisciplinary physics and related areas of science and technology)

  6. Correlation analysis of quantum fluctuations and repulsion effects of classical dynamics in SU(3) model

    International Nuclear Information System (INIS)

    Fujiwara, Shigeyasu; Sakata, Fumihiko

    2003-01-01

In many quantum systems, random matrix theory has been used to characterize quantum level fluctuations, which are known to be the quantum counterpart of the regular-to-chaos transition in classical systems. We present a new qualitative analysis of quantum and classical fluctuation properties by exploiting correlation coefficients and variances. It is shown that the correlation coefficient of the quantum level density is roughly inversely proportional to the variance of consecutive phase-space point spacings on the Poincare section plane. (author)

  7. Linearized spectrum correlation analysis for line emission measurements.

    Science.gov (United States)

    Nishizawa, T; Nornberg, M D; Den Hartog, D J; Sarff, J S

    2017-08-01

    A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.

  8. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    Directory of Open Access Journals (Sweden)

    Magnezi R

    2016-12-01

Full Text Available Racheli Magnezi,1 Asaf Hemi,1 Rina Hemi2 1Department of Management, Public Health and Health Systems Management Program, Bar Ilan University, Ramat Gan, 2Endocrine Service Unit, Sheba Medical Center, Tel Aviv, Israel Background: Risk management in health care systems applies to all hospital employees and directors, as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives) and to track failure modes and risks, and offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources. Methods: A team of eight staff members, accompanied by the Head of the Endocrine Laboratory, performed the analysis. The failure mode and effects analysis (FMEA) model was used to analyze the laboratory testing procedure; it was designed to simplify the process steps and to identify and rank possible failures. Results: A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226.1). Conclusion: This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed.
Keywords: failure mode

  9. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    Science.gov (United States)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data are closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed character of waste composition data is often ignored in analyses. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed character of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing means, standard deviations and correlation coefficients.
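One standard transformation for such closed data is the centered log-ratio (clr). The sketch below, with hypothetical waste-fraction values rather than the study's data, shows the transform applied before computing a correlation; it illustrates the general principle, not the paper's analysis.

```python
import math

def clr(composition):
    # Centered log-ratio: log of each part relative to the geometric mean.
    g = sum(math.log(p) for p in composition) / len(composition)
    return [math.log(p) - g for p in composition]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical waste compositions (fractions of food, plastic, paper per sample).
samples = [
    [0.50, 0.20, 0.30],
    [0.40, 0.25, 0.35],
    [0.60, 0.15, 0.25],
    [0.45, 0.30, 0.25],
]
clr_samples = [clr(s) for s in samples]
food = [row[0] for row in clr_samples]
plastic = [row[1] for row in clr_samples]
print(pearson(food, plastic))  # correlation on the transformed scale
```

Each clr-transformed row sums to zero, which removes the unit-sum constraint that makes raw-percentage correlations spurious.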

  10. Bose-Einstein correlation in the Lund model

    International Nuclear Information System (INIS)

    Anderson, B.

    1998-01-01

    I will present the Lund Model fragmentation in a somewhat different way than what is usually done. It is true that the formulas are derived from (semi-)classical probability arguments, but they can be motivated in a quantum mechanical setting and it is in particular possible to derive a transition matrix element. I will present two scenarios, one based upon Schwinger tunneling and one upon Wilson loop operators. The results will coincide and throw some light upon the sizes of the three main phenomenological parameters which occur in the Lund Model. After that I will show that in this way it is possible to obtain a model for the celebrated Bose-Einstein correlations between two bosons with small relative momenta. This model will exhibit non-trivial two- and three-particle BE correlations, influence the observed p-spectrum and finally be different for charged and neutral pion correlations. (author)

  11. Correlation between thromboelastography and traditional coagulation test parameters in hospitalized dogs

    Directory of Open Access Journals (Sweden)

    Rubanick JV

    2017-02-01

Full Text Available Jean V Rubanick, Medora B Pashmakova, Micah A Bishop, James W Barr Department of Veterinary Small Animal Clinical Sciences, Texas A&M University, College Station, TX, USA Abstract: A hospital-based, prospective cross-sectional study was used to compare kaolin-activated thromboelastography (TEG) parameters with traditional coagulation tests in 29 hospitalized dogs. Cases were included if the attending clinician requested coagulation testing. Blood was obtained from each dog, and coagulation tests (prothrombin time, partial thromboplastin time, antithrombin activity, d-dimer concentration, and fibrinogen concentration) and TEG analyses were performed. Hematocrit (Hct) was also measured. Traditional coagulation results were evaluated for correlation with those from kaolin-activated TEG. Spearman's correlation was used to calculate correlation coefficients. Fibrinogen was positively correlated with maximum amplitude (Pearson r=0.72, P<0.001) and global clot strength (Pearson r=0.72, P<0.001). There was no correlation between any of the remaining coagulation variables, TEG parameters, or Hct. Results of kaolin-activated TEG and traditional coagulation tests are not interchangeable means of monitoring coagulation derangements in this intensive care unit patient population. Determination of a true outcome measure is necessary to establish TEG's clinical relevance to veterinary medicine. Keywords: TEG, thromboelastography, coagulation, hemostasis
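Spearman's correlation, as used in this study, is simply the Pearson correlation computed on ranks (with average ranks for ties). A small pure-Python sketch with hypothetical fibrinogen/maximum-amplitude pairs, not the study's data:

```python
def ranks(values):
    # Ranks 1..n, with average ranks assigned to ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired values: fibrinogen (mg/dL) and TEG maximum amplitude (mm).
fibrinogen = [150, 210, 180, 320, 260, 400, 290]
max_amplitude = [48, 55, 50, 68, 60, 75, 63]
print(spearman(fibrinogen, max_amplitude))  # -> 1.0 (perfectly monotone here)
```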

  12. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
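The paper's specific decorrelation and Sobol-index algorithm is not reproduced here, but its basic building block, imposing a target correlation on Monte Carlo input samples via a Cholesky factor of the correlation matrix, can be sketched as follows; the sample size and target correlation are illustrative assumptions.

```python
import math
import random

def correlated_normals(n, r, seed=1):
    # Impose a target correlation r on two standard normals using the
    # 2x2 Cholesky factor of the correlation matrix [[1, r], [r, 1]].
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        xs.append(z1)
        ys.append(r * z1 + math.sqrt(1.0 - r * r) * z2)
    return xs, ys

def sample_corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

xs, ys = correlated_normals(20000, 0.7)
print(sample_corr(xs, ys))  # should be close to the target 0.7
```

In a full sensitivity analysis these correlated samples would feed the computational model, with the decorrelation step applied in the permutations the abstract describes.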

  13. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics

  14. Analysis and Modeling of Time-Correlated Characteristics of Rainfall-Runoff Similarity in the Upstream Red River Basin

    Directory of Open Access Journals (Sweden)

    Xiuli Sang

    2012-01-01

Full Text Available We constructed a similarity model (based on the Euclidean distance between rainfall and runoff) to study the time-correlated characteristics of rainfall-runoff similarity patterns in the upstream Red River Basin, and we present a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that a time-correlated model is capable of predicting rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both noised time series and time series denoised by thresholding the wavelet coefficients were applied to verify the accuracy of the model. The corresponding optimum similarity sets, obtained as solutions of the model equations, showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, quantitatively reflecting the combined influence of climate change and human activities on rainfall-runoff similarity.

  15. Pitfalls and important issues in testing reliability using intraclass correlation coefficients in orthopaedic research.

    Science.gov (United States)

    Lee, Kyoung Min; Lee, Jaebong; Chung, Chin Youb; Ahn, Soyeon; Sung, Ki Hyuk; Kim, Tae Won; Lee, Hui Jong; Park, Moon Seok

    2012-06-01

Intra-class correlation coefficients (ICCs) provide a statistical means of testing reliability. However, their interpretation is not well documented in the orthopaedic field. The purpose of this study was to investigate the use of ICCs in the orthopaedic literature and to demonstrate pitfalls regarding their use. First, orthopaedic articles that used ICCs were retrieved from the PubMed database, and journal demography, ICC models, and concurrent statistics used were evaluated. Second, a reliability test was performed on three common physical examinations in cerebral palsy, namely the Thomas test, the Staheli test, and popliteal angle measurement. Thirty patients were assessed by three orthopaedic surgeons to explore the statistical methods for testing reliability. Third, the factors affecting ICC values were examined by simulating data sets based on the physical examination data, in which the ranges, slopes, and interobserver variability were modified. Of the 92 orthopaedic articles identified, 58 articles (63%) did not clarify the ICC model used, and only 5 articles (5%) described all models, types, and measures. In reliability testing, although the popliteal angle showed a larger mean absolute difference than the Thomas test and the Staheli test, the ICC of the popliteal angle was higher, which was believed to be contrary to the context of measurement. In addition, the ICC values were affected by the model, type, and measures used. In the simulated data sets, the ICC showed higher values when the range of the data set was larger, the slopes of the data sets were parallel, and the interobserver variability was smaller. Care should be taken when interpreting absolute ICC values, i.e., a higher ICC does not necessarily mean less variability, because ICC values can be affected by various factors. The authors recommend that researchers clarify the ICC models used and that ICC values be interpreted in the context of measurement.
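The range effect described above is easy to reproduce. The sketch below uses the one-way ICC(1,1) on hypothetical two-rater data (not the study's measurements): the absolute disagreement between raters is identical in both data sets, yet the wider subject range yields a higher ICC.

```python
def icc_oneway(scores):
    # One-way random-effects ICC(1,1).
    # scores: list of per-subject lists of rater scores (same k per subject).
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)           # between subjects
    msw = sum((x - m) ** 2 for row, m in zip(scores, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Same absolute rater disagreement, different subject ranges (hypothetical data).
noise = [0.5, -0.5, 0.5, -0.5, 0.5]
narrow = [[v, v + d] for v, d in zip([10, 11, 12, 13, 14], noise)]
wide = [[v, v + d] for v, d in zip([10, 20, 30, 40, 50], noise)]
print(icc_oneway(narrow), icc_oneway(wide))  # wide range -> higher ICC
```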

  16. Analysis of Limit Cycle Oscillation Data from the Aeroelastic Test of the SUGAR Truss-Braced Wing Model

    Science.gov (United States)

    Bartels, Robert E.; Funk, Christie; Scott, Robert C.

    2015-01-01

Research focus in recent years has been given to the design of aircraft that provide significant reductions in emissions, noise, and fuel usage. Increases in fuel efficiency have also generally been attended by increased overall wing flexibility. The truss-braced wing (TBW) configuration has been put forward as one that increases fuel efficiency. The Boeing Company recently tested the Subsonic Ultra Green Aircraft Research (SUGAR) truss-braced wing wind-tunnel model in the NASA Langley Research Center Transonic Dynamics Tunnel (TDT). This test produced a wealth of accelerometer data. Other publications have presented details of the construction of that model, the test itself, and a few of the results of the test. This paper aims to provide a much more detailed look at what the accelerometer data say about the onset of aeroelastic instability, usually known as flutter onset. Every flight vehicle has a location in the flight envelope of flutter onset, and the TBW vehicle is no different. For the TBW model test, flutter onset generally occurred at the conditions that the Boeing analysis said it should. What was not known until the test is that, over a large area of the Mach number-dynamic pressure map, the model displayed wing/engine nacelle aeroelastic limit cycle oscillation (LCO). This paper dissects the LCO data in order to provide additional insights into the aeroelastic behavior of the model.

  17. Structural correlation method for model reduction and practical estimation of patient specific parameters illustrated on heart rate regulation

    DEFF Research Database (Denmark)

    Ottesen, Johnny T.; Mehlsen, Jesper; Olufsen, Mette

    2014-01-01

We consider the inverse and patient-specific problem of short term (seconds to minutes) heart rate regulation specified by a system of nonlinear ODEs and corresponding data. We show how a recent method termed the structural correlation method (SCM) can be used for model reduction and for obtaining a set of practically identifiable parameters. The structural correlation method includes two steps: sensitivity and correlation analysis. When combined with an optimization step, it is possible to estimate model parameters, enabling the model to fit dynamics observed in data. This method is illustrated in detail on a model predicting baroreflex regulation of heart rate and applied to analysis of data from a rat and healthy humans. Numerous mathematical models have been proposed for prediction of baroreflex regulation of heart rate, yet most of these have been designed to provide qualitative predictions...

  18. Blind post-test analysis of Phenix End-of-Life natural circulation test with the MARS-LMR

    International Nuclear Information System (INIS)

    Jeong, Hae Yong; Ha, Kwi Seok; Kwon, Young Min; Chang, Won Pyo; Suk, Su Dong; Lee, Kwi Lim

    2010-01-01

KAERI is developing a system analysis code, MARS-LMR, for application to sodium-cooled fast reactors (SFRs). This code will be used as a basic tool in the design and analysis of future SFR systems in Korea. Before wide application of a system analysis code, it is required to verify and validate the code models through analyses of appropriate experimental data or analytical results. The MARS-LMR code has been developed from the MARS code, which had been well verified and validated for pressurized water reactor (PWR) systems. The MARS-LMR code shares the same form of governing equations and solution schemes with the MARS code, which eliminates the need for an independent verification procedure. However, it is required to validate the applicability of the code to an SFR system because it adopts dedicated heat transfer models, pressure drop models, and material property models for a sodium system. Phenix is a medium-sized pool-type SFR that was successfully operated for 35 years, from 1973 until its final shutdown in February 2009. An international program of Phenix end-of-life (EOL) tests followed, and some valuable information was obtained from the tests that will be useful for the validation of SFR system analysis codes. In the present study, the performance of the MARS-LMR code is evaluated through a blind calculation with the boundary conditions measured in the real test. The post-test analysis results are also compared with the test data.

  19. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We find that the ARMA analysis is well correlated with spatial structures of the flow and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that the Υ index is highest in regions where shear-layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features where the ARMA technique is efficient in discriminating different stability branches of the system.
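As a toy illustration of fitting a stochastic model to a recorded time series, the sketch below estimates the coefficient of an AR(1) process, the simplest ARMA special case, from its lag-1 autocorrelation (the Yule-Walker estimate). It is not the paper's full ARMA(p, q) model selection or its Υ index; the simulated series and parameter values are assumptions for the example.

```python
import random

def acf1(x):
    # Lag-1 sample autocorrelation; for an AR(1) process this is the
    # Yule-Walker estimate of the autoregressive coefficient phi.
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def simulate_ar1(phi, n, seed=7):
    # x_t = phi * x_{t-1} + white noise
    rng = random.Random(seed)
    series, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0, 1)
        series.append(prev)
    return series

series = simulate_ar1(0.8, 5000)
print(acf1(series))  # estimate of phi, close to 0.8
```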

  20. One stacked-column vibration test and analysis for VHTR core

    International Nuclear Information System (INIS)

    Ikushima, Takeshi; Ishizuka, Hiroshi; Ide, Akira; Hayakawa, Hitoshi; Shingai, Kazuteru.

    1978-07-01

This paper describes the experimental results of a vibration test on a single stacked column and compares them with analytical results. A 1/2-scale model of the core element of a very high temperature gas-cooled reactor (VHTR) was set on a shaking table. Sinusoidal waves, response time history waves, beat waves, and step waves with input accelerations of 100 - 900 gal at frequencies of 0.5 to 15 Hz were used to vibrate the table horizontally. The results are as follows: (1) The column has a non-linear resonance and exhibits a hysteresis response with jump points. (2) The vibration characteristics of the column are similar to those of finite beams connected with non-linear soft springs. (3) The column resonance frequency decreases with increasing input acceleration. (4) The impact force increases with increasing input acceleration and boundary gap width. (5) Good correlation between test and analysis was obtained for the vibration behavior of the stacked column and the impact force on the boundary. (auth.)

  1. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

In order to improve the accuracy of classification with small amounts of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed a method to automatically select characteristic parameters based on correlation coefficient analysis. For the five sample data sets of dataset IVa from the 2005 BCI Competition, we utilized the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram, then performed feature extraction based on common spatial patterns (CSP) and classified with linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy was higher with the correlation coefficient feature selection method than without it. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis selects better parameters and improves the accuracy of classification.
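The correlation-coefficient ranking step can be sketched as follows: each feature is scored by the absolute correlation of its values with the class labels, and the top-k features are kept. The band-power features, labels, and function names here are hypothetical; the paper's actual STFT/CSP/LDA pipeline is not reproduced.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def select_features(X, y, k):
    # Score each feature column by |correlation| with the labels; keep the top k.
    scores = []
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        scores.append((abs(pearson(column, y)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Hypothetical band-power features (rows = trials) and binary class labels.
X = [[1.0, 5.2, 0.1],
     [1.1, 3.1, 0.2],
     [2.0, 4.8, 0.1],
     [2.1, 3.0, 0.3]]
y = [0, 0, 1, 1]
print(select_features(X, y, 2))  # indices of the two most label-correlated features
```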

  2. Improvement of auditing technology of safety analysis through thermal-hydraulic separate effect tests

    Energy Technology Data Exchange (ETDEWEB)

    No, Hee Cheon; Moon, Young Min; Lee, Dong Won; Lee, Sang Ik; Kim, Eung Soo; Yeom, Keum Soo [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2002-03-15

The objective of the present research is to perform separate effect tests and to assess the RELAP5/MOD3.2 code for the analysis of thermal-hydraulic behavior in the reactor coolant system and the improvement of the auditing technology of safety analysis. The three Separate Effect Tests (SETs) are reflux condensation in the U-tube, direct contact condensation in the hot leg, and mixture level buildup in the pressurizer. Experimental data and empirical correlations are obtained through the SETs. On the basis of the three SET works, models in RELAP5 are modified and improved, and the results are compared with the data. The Korea Standard Nuclear Power Plant (KSNP) is assessed using the modified RELAP5. In the reflux condensation test, heat transfer coefficient and flooding data are obtained and the condensation models are modified using the non-iterative model; as a result, the modified code better predicts the data. In the direct contact condensation test, heat transfer coefficient data are obtained for cocurrent and countercurrent flow between the mixture gas and the water under horizontal stratified flow conditions. Several condensation and friction models are modified, and they predict the present data well. In the mixture level test, data for the mixture level and the onset of water draining into the surge line are obtained. The standard RELAP5 over-predicts the mixture level and the void fraction in the pressurizer. A simple modification of the model related to the pool void fraction is suggested. The KSNP is assessed using the standard and the modified RELAP5 resulting from the experimental and code works for the SETs. In the case of pressurizer manway opening with the secondary side of the steam generators available, the modified code predicts that little collapsed level accumulates in the pressurizer. The presence and location of the opening and the secondary condition of the steam generators affect the coolant inventory.

3. Analysis of in-R12 CHF data: influence of hydraulic diameter and heating length; test of Weisman boiling crisis model

    Energy Technology Data Exchange (ETDEWEB)

    Czop, V; Herer, C; Souyri, A; Garnier, J

    1993-09-01

In order to progress on the comprehensive modelling of the boiling crisis phenomenon, Electricite de France (EDF), the Commissariat a l'Energie Atomique (CEA) and FRAMATOME have set up experimental programs involving in-R12 tests: the EDF APHRODITE program and the CEA-EDF-FRAMATOME DEBORA program. The first phase of these programs aims to acquire critical heat flux (CHF) data banks, within large thermal-hydraulic parameter ranges, in both cylindrical and annular configurations, and with different hydraulic diameters and heating lengths. Three data banks were considered in the analysis, all of them concerning in-R12 round tube tests: the APHRODITE data bank, obtained at EDF with a 13 mm inside diameter; the DEBORA data bank, obtained at CEA with a 19.2 mm inside diameter; and the KRISTA data bank, obtained at KfK with an 8 mm inside diameter. The analysis was conducted using CHF correlations and with the help of an advanced mathematical tool using pseudo-cubic thin-plate Spline functions. Two conclusions were drawn: there is no influence of the heating length on our CHF results, and the influence of the diameter on the CHF cannot be simply expressed by an exponential function of this parameter, as thermal-hydraulic parameters also have an influence. Some calculations with the Weisman and Pei theoretical boiling crisis model have been compared to experimental values: fairly good agreement was obtained, but further study must focus on improving the modelling of the influence of pressure and mass velocity. (authors). 12 figs., 4 tabs., 21 refs.

  4. Canonical correlation analysis for gene-based pleiotropy discovery.

    Directory of Open Access Journals (Sweden)

    Jose A Seoane

    2014-10-01

Full Text Available Genome-wide association studies have identified a wealth of genetic variants involved in complex traits and multifactorial diseases. There is now considerable interest in testing variants for association with multiple phenotypes (pleiotropy) and in testing multiple variants for association with a single phenotype (gene-based association tests). Such approaches can increase statistical power by combining evidence for association over multiple phenotypes or genetic variants, respectively. Canonical Correlation Analysis (CCA) measures the correlation between two sets of multidimensional variables, and thus offers the potential to combine these two approaches. To apply CCA, we must restrict the number of attributes relative to the number of samples. Hence we consider modules of genetic variation that can comprise a gene, a pathway or another biologically relevant grouping, and/or a set of phenotypes. In order to do this, we use an attribute selection strategy based on a binary genetic algorithm. Applied to a UK-based prospective cohort study of 4286 women (the British Women's Heart and Health Study), we find improved statistical power in the detection of previously reported genetic associations, and identify a number of novel pleiotropic associations between genetic variants and phenotypes. New discoveries include gene-based association of NSF with triglyceride levels and several genes (ACSM3, ERI2, IL18RAP, IL23RAP and NRG1) with left ventricular hypertrophy phenotypes. In multiple-phenotype analyses we find association of NRG1 with left ventricular hypertrophy phenotypes, fibrinogen and urea, and pleiotropic relationships of F7 and F10 with Factor VII, Factor IX and cholesterol levels.

  5. Investigating the cognitive precursors of emotional response to cancer stress: re-testing Lazarus's transactional model.

    Science.gov (United States)

    Hulbert-Williams, N J; Morrison, V; Wilkinson, C; Neal, R D

    2013-02-01

    Lazarus's Transactional Model of stress and coping underwent significant theoretical development through the 1990s to better incorporate emotional reactions to stress with their appraisal components. Few studies have robustly explored the full model. This study aimed to do so within the context of a major life event: cancer diagnosis. A repeated measures design was used whereby data were collected using self-report questionnaire at baseline (soon after diagnosis), and 3- and 6-month follow-up. A total of 160 recently diagnosed cancer patients were recruited (mean time since diagnosis = 46 days). Their mean age was 64.2 years. Data on appraisals, core-relational themes, and emotions were collected. Data were analysed using both Spearman's correlation tests and multivariate regression modelling. Longitudinal analysis demonstrated weak correlation between change scores of theoretically associated components and some emotions correlated more strongly with cognitions contradicting theoretical expectations. Cross-sectional multivariate testing of the ability of cognitions to explain variance in emotion was largely theory inconsistent. Although data support the generic structure of the Transactional Model, they question the model specifics. Larger scale research is needed encompassing a wider range of emotions and using more complex statistical testing. WHAT IS ALREADY KNOWN ON THIS SUBJECT?: • Stress processes are transactional and coping outcome is informed by both cognitive appraisal of the stressor and the individual's emotional response (Lazarus & Folkman, 1984). • Lazarus (1999) made specific hypotheses about which particular stress appraisals would determine which emotional response, but only a small number of these relationships have been robustly investigated. • Previous empirical testing of this theory has been limited by design and statistical limitations. WHAT DOES THIS STUDY ADD?: • This study empirically investigates the cognitive precedents of a

  6. Testing for Volatility Co-movement in Bivariate Stochastic Volatility Models

    OpenAIRE

    Chen, Jinghui; Kobayashi, Masahito; McAleer, Michael

    2017-01-01

The paper considers the problem of volatility co-movement, namely whether two financial returns have a perfectly correlated common volatility process, in the framework of multivariate stochastic volatility models, and proposes a test for volatility co-movement. The proposed test is a stochastic volatility version of the co-movement test proposed by Engle and Susmel (1993), who investigated whether international equity markets have volatility co-movement using t...

  7. Correlated binomial models and correlation structures

    International Nuclear Information System (INIS)

    Hisakado, Masato; Kitsukawa, Kenji; Mori, Shintaro

    2006-01-01

    We discuss a general method to construct correlated binomial distributions by imposing several consistent relations on the joint probability function. We obtain self-consistency relations for the conditional correlations and conditional probabilities. The beta-binomial distribution is derived by a strong symmetric assumption on the conditional correlations. Our derivation clarifies the 'correlation' structure of the beta-binomial distribution. It is also possible to study the correlation structures of other probability distributions of exchangeable (homogeneous) correlated Bernoulli random variables. We study some distribution functions and discuss their behaviours in terms of their correlation structures
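
The beta-binomial construction described above can be sketched numerically. In this illustrative snippet (function names are my own), the pmf follows from integrating the binomial over a Beta(a, b) mixing density, and the pairwise correlation of the exchangeable Bernoulli variables comes out as 1/(a + b + 1):

```python
from math import comb, gamma

def beta_fn(a, b):
    """Euler beta function B(a, b) = Γ(a)Γ(b) / Γ(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def betabinom_pmf(k, n, a, b):
    """P(K = k) when K counts successes in n exchangeable Bernoulli trials
    whose common success probability is Beta(a, b) distributed."""
    return comb(n, k) * beta_fn(k + a, n - k + b) / beta_fn(a, b)

def pairwise_corr(a, b):
    """Correlation between any two of the exchangeable Bernoulli variables
    in the beta-binomial construction: rho = 1 / (a + b + 1)."""
    return 1.0 / (a + b + 1.0)

# sanity check: the pmf sums to 1 over k = 0..n
total = sum(betabinom_pmf(k, 10, 2.0, 3.0) for k in range(11))
```

Larger a + b means a more concentrated mixing density and hence weaker correlation, which is the "correlation structure" the abstract refers to.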

  8. Defeat and entrapment: more than meets the eye? Applying network analysis to estimate dimensions of highly correlated constructs.

    Science.gov (United States)

    Forkmann, Thomas; Teismann, Tobias; Stenzel, Jana-Sophie; Glaesmer, Heide; de Beurs, Derek

    2018-01-25

    Defeat and entrapment have been shown to be of central relevance to the development of different disorders. However, it remains unclear whether they represent two distinct constructs or one overall latent variable. One reason for the unclarity is that traditional factor analytic techniques have trouble estimating the right number of clusters in highly correlated data. In this study, we applied a novel approach based on network analysis that can deal with correlated data to establish whether defeat and entrapment are best thought of as one or multiple constructs. Exploratory graph analysis was used to estimate the number of dimensions within the 32 items that make up the defeat and entrapment scales in two samples: an online community sample of 480 participants, and a clinical sample of 147 inpatients admitted to a psychiatric hospital after a suicide attempt or severe suicidal crisis. Confirmatory factor analysis (CFA) was used to test whether the proposed structure fits the data. In both samples, bootstrapped exploratory graph analysis suggested that the defeat and entrapment items belonged to different dimensions. Within the entrapment items, two separate dimensions were detected, labelled internal and external entrapment. Defeat appeared to be multifaceted only in the online sample. When comparing the CFA outcomes of the one-, two-, three- and four-factor models, the one-factor model was preferred. Defeat and entrapment can be viewed as distinct, yet highly associated, constructs. Thus, although replication is needed, results are in line with theories differentiating between these two constructs.

  9. Correlations between cerebral glucose metabolism and neuropsychological test performance in nonalcoholic cirrhotics.

    Science.gov (United States)

    Lockwood, Alan H; Weissenborn, Karin; Bokemeyer, Martin; Tietge, U; Burchert, Wolfgang

    2002-03-01

    Many cirrhotics have abnormal neuropsychological test scores. To define the anatomical-physiological basis for encephalopathy in nonalcoholic cirrhotics, we performed resting-state fluorodeoxyglucose positron emission tomographic scans and administered a neuropsychological test battery to 18 patients and 10 controls. Statistical parametric mapping correlated changes in regional glucose metabolism with performance on the individual tests and a composite battery score. In patients without overt encephalopathy, poor performance correlated with reductions in metabolism in the anterior cingulate. In all patients, poor performance on the battery was positively correlated with glucose metabolism in bifrontal and biparietal regions of the cerebral cortex and negatively correlated with metabolism in hippocampal, lingual, and fusiform gyri and the posterior putamen. Similar patterns of abnormal metabolism were found when comparing the patients to 10 controls. Metabolic abnormalities in the anterior attention system and association cortices mediating executive and integrative function form the pathophysiological basis for mild hepatic encephalopathy.

  10. Correlation between two parameters of mice behaviour in the open field test

    OpenAIRE

    Stojanović, Nikola M.; Ranđelović, Pavle J.; Radulović, Niko S.

    2017-01-01

    The open field test has been used extensively for the determination of different aspects of animal behaviour for over seventy years. The correlation between different behavioural parameters obtained in this test, although previously studied, is still debatable. Thus, we aimed to analyze and correlate behaviour scores to estimate the importance of individual parameters in this type of experiment. The open field test was performed on male BALB/c mice treated with either saline (10 ml/kg) or dia...

  11. The correlative analysis between CBF measured by SPECT and Chinese reading test in childhood reading disorder

    International Nuclear Information System (INIS)

    Wu Yonggang; Su Jianzhi; He Jianjun; Yang Zhiwei; Liu Guofeng

    2002-01-01

    Objective: To investigate changes of cerebral blood flow (CBF) and its association with the Chinese reading skill diagnostic test (CRSDT) in childhood reading disorder (RD). Methods: In 25 RD children and 20 age-matched control subjects, the authors quantitatively determined CBF and regional cerebral blood flow (rCBF) with SPECT using a non-blood-sampling method. The authors studied the correlation between CBF and the total raw scores on the CRSDT. Results: CBF in the case group was (38.87 ± 3.77) ml·100 g⁻¹·min⁻¹, significantly lower than that in the control group [(43.65 ± 2.64) ml·100 g⁻¹·min⁻¹, P < 0.01]. This reduction in CBF correlated with the total raw scores on the CRSDT. Conclusion: These results suggest that children with reading disorder have reduced CBF and that SPECT is useful for evaluating cerebral functioning in reading disorder children

  12. Transverse spin correlations of the random transverse-field Ising model

    Science.gov (United States)

    Iglói, Ferenc; Kovács, István A.

    2018-03-01

    The critical behavior of the random transverse-field Ising model in finite-dimensional lattices is governed by infinite-disorder fixed points, several properties of which have already been calculated by the use of the strong-disorder renormalization-group (SDRG) method. Here we extend these studies and calculate the connected transverse-spin correlation function by a numerical implementation of the SDRG method in d = 1, 2, and 3 dimensions. At the critical point an algebraic decay of the form ~ r^(-η_t) is found, with a decay exponent of approximately η_t ≈ 2 + 2d. In d = 1 the results are related to dimer-dimer correlations in the random antiferromagnetic XX chain and have been tested by numerical calculations using free-fermionic techniques.
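
A decay exponent like η_t is typically extracted from numerical correlation data by a log-log fit. The sketch below (my own illustrative helper, not the authors' SDRG code) fits the slope of log C(r) against log r by least squares and returns its negative, recovering η_t = 4, the η_t ≈ 2 + 2d value for d = 1:

```python
import math

def fit_decay_exponent(rs, cs):
    """Least-squares slope of log C(r) versus log r; for an algebraic decay
    C(r) ~ r**(-eta) the fitted slope is -eta, so return its negative."""
    xs = [math.log(r) for r in rs]
    ys = [math.log(c) for c in cs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# synthetic correlations decaying with eta_t = 4, i.e. 2 + 2d for d = 1
rs = [1.0, 2.0, 4.0, 8.0, 16.0]
cs = [r ** -4.0 for r in rs]
eta_t = fit_decay_exponent(rs, cs)
```

Real SDRG data would carry statistical noise and finite-size corrections, so the fit window in r must be chosen inside the scaling regime.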

  13. Correlations in multiple production on nuclei and Glauber model of multiple scattering

    International Nuclear Information System (INIS)

    Zoller, V.R.; Nikolaev, N.N.

    1982-01-01

    A critical analysis is performed of the possibility of describing correlation phenomena in multiple production on nuclei within the framework of the Glauber multiple-scattering model, generalized to particle-production processes by Capella, Krzywicki and Shabelsky. The main conclusion is that the suggested generalization of the Glauber model gives dependences on Ng (Np) (where Ng is the number of "grey" tracks and Np the number of protons ejected from the nucleus) and, ultimately, on β (the number of intranuclear interactions) that contradict experiment. Independently of the choice of relation between β and Ng (Np), the model overstates the rapidity correlator R_η in the central region and understates it in the region of nucleus fragmentation. In mean multiplicities these two contradictions with experiment are masked by accidental compensation, so agreement with experiment in N_S as a function of Ng cannot be taken as an argument in favour of the model. It is concluded that the eikonal model does not permit a quantitative description of correlation phenomena in multiple production on nuclei

  14. Timescale Correlation between Marine Atmospheric Exposure and Accelerated Corrosion Testing - Part 2

    Science.gov (United States)

    Montgomery, Eliza L.; Calle, Luz Marina; Curran, Jerome C.; Kolody, Mark R.

    2012-01-01

    Evaluation of metals to predict service life of metal-based structures in corrosive environments has long relied on atmospheric exposure test sites. Traditional accelerated corrosion testing relies on mimicking the exposure conditions, often incorporating salt spray and ultraviolet (UV) radiation, and exposing the metal to continuous or cyclic conditions similar to those of the corrosive environment. Their reliability to correlate to atmospheric exposure test results is often a concern when determining the timescale to which the accelerated tests can be related. Accelerated corrosion testing has yet to be universally accepted as a useful tool in predicting the long-term service life of a metal, despite its ability to rapidly induce corrosion. Although visual and mass loss methods of evaluating corrosion are the standard, and their use is crucial, a method that correlates timescales from accelerated testing to atmospheric exposure would be very valuable. This paper presents work that began with the characterization of the atmospheric environment at the Kennedy Space Center (KSC) Beachside Corrosion Test Site. The chemical changes that occur on low carbon steel, during atmospheric and accelerated corrosion conditions, were investigated using surface chemistry analytical methods. The corrosion rates and behaviors of panels subjected to long-term and accelerated corrosion conditions, involving neutral salt fog and alternating seawater spray, were compared to identify possible timescale correlations between accelerated and long-term corrosion performance. The results, as well as preliminary findings on the correlation investigation, are presented.

  15. Air trapping on HRCT in asthmatics: correlation with pulmonary function test

    International Nuclear Information System (INIS)

    Hwang, Jung Hwa; Cha, Chull Hee; Park, Jai Soung; Kim, Young Beom; Lee, Hae Kyung; Choi, Deuk Lin; Kim, Kyung Ho; Park, Choon Sik

    1997-01-01

    To evaluate, on the basis of the pulmonary function test, the correlation between the extent of air trapping on HRCT and the severity of airway obstruction, and also to identify the prognostic effect of the extent of air trapping after treatment of asthma. Thirty-five patients with clinically diagnosed bronchial asthma and air trapping, as seen on HRCT, were included in this study. We quantitatively analysed the extent of air trapping on HRCT and then statistically compared this with the clinical parameters of the pulmonary function test. We classified the patients into two groups on the basis of the pulmonary function test and clinical status: Group 1 (N=35), the total number of asthmatic patients; Group 2 (N=18), relatively stable asthmatics without acute asthmatic attack who showed FEV1 of more than 80% of the predicted value. Using the functional parameter PEFR, one of the objective indicators of improvement in airway obstruction, we also classified the patients into three groups on the basis of the interval between treatment and clinical improvement: group 1, the asymptomatic group (initial PEFR within normal limits, N=7); group 2, early responders (improvement of PEFR within three hospital days, N=18); group 3, late responders (improvement of PEFR within fourteen hospital days). Using HRCT, we then statistically analysed the differences between the three groups in the extent of air trapping. Among the total of 35 asthmatics, the extent of air trapping on HRCT showed significant correlation with FEV1 (r = -0.6161, p < 0.001) and MEFR (r = -0.6012, p < 0.001). Among the relatively stable asthmatics who showed FEV1 of more than 80% of the predicted value, MEFR (r = -0.7553, p < 0.001) and FEF75 (r = -0.7529, p = 0.012) showed statistically significant correlation with the extent of air trapping on HRCT, but there was no significant correlation between air trapping on HRCT and FEV1. In the three groups of

  17. Taking serial correlation into account in tests of the mean

    International Nuclear Information System (INIS)

    Zwiers, F.W.; Storch, H. von

    1993-01-01

    The comparison of means derived from samples of noisy data is a standard part of climatology. When the data are not serially correlated, the appropriate statistical tool for this task is usually the conventional Student's t-test. However, data frequently are serially correlated in climatological applications, with the result that the t-test in its standard form is not applicable. The usual solution to this problem is to scale the t-statistic by a factor which depends upon the equivalent sample size n_e. We show, by means of simulations, that the revised t-test is often conservative (the actual significance level is smaller than the specified significance level) when the equivalent sample size is known. However, in most practical cases the equivalent sample size is not known. Then the test becomes liberal (the actual significance level is greater than the specified significance level). This systematic error becomes small when the true equivalent sample size is large (greater than approximately 30). We re-examine the difficulties inherent in difference-of-means tests when there is serial dependence. We provide guidelines for the application of the 'usual' t-test and propose two alternative tests which substantially improve upon the 'usual' t-test when samples are small. (orig.)
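
The scaling idea above can be sketched concretely. Under an AR(1) assumption, a common form of the equivalent sample size is n_e = n(1 - ρ₁)/(1 + ρ₁), where ρ₁ is the lag-1 autocorrelation; the t statistic then uses n_e in place of n. This is a minimal sketch of that adjustment (function names are my own, and it is not the authors' exact test):

```python
import math

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    n = len(x)
    m = sum(x) / n
    den = sum((xi - m) ** 2 for xi in x)
    return sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1)) / den

def equivalent_sample_size(n, rho1):
    """AR(1) approximation: n_e = n (1 - rho1) / (1 + rho1)."""
    return n * (1 - rho1) / (1 + rho1)

def t_mean(x, mu0):
    """One-sample t statistic against mean mu0, with n replaced by n_e.
    Negative autocorrelation estimates are clipped at zero."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((xi - m) ** 2 for xi in x) / (n - 1)
    ne = equivalent_sample_size(n, max(0.0, lag1_autocorr(x)))
    return (m - mu0) / math.sqrt(s2 / ne)
```

For ρ₁ = 0.5 this halves the effective sample size roughly threefold (n_e = n/3), which is exactly the loss of information the abstract's scaling factor accounts for.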

  18. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
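
Static (Guyan) condensation, the baseline technique the book compares against, reduces the stiffness and mass matrices to a set of retained "master" degrees of freedom. A minimal numpy sketch under my own naming (not taken from the book) is:

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Guyan (static) condensation: keep the `master` dofs and eliminate the
    rest via the transformation T = [I; -Kss^-1 Ksm], then form the reduced
    matrices Kr = T' K T and Mr = T' M T."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    m = len(master)
    T = np.zeros((n, m))
    T[np.asarray(master), np.arange(m)] = 1.0  # identity block on masters
    T[np.ix_(slave, range(m))] = -np.linalg.solve(
        K[np.ix_(slave, slave)], K[np.ix_(slave, master)])
    return T.T @ K @ T, T.T @ M @ T

# two-dof spring chain; retain dof 0 as the single "test" dof
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
M = np.eye(2)
Kr, Mr = guyan_reduce(K, M, master=[0])
```

The reduced stiffness is exact for static loads on the masters; the reduced mass is only approximate, which is why the dynamic, SEREP, and iterative-dynamic variants discussed in the book improve on it for modal correlation.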

  19. [Girls are more successful than boys at the university. Gender group differences in models integrating motivational and aggressive components correlated with Test-Anxiety].

    Science.gov (United States)

    Masson, A-M; Hoyois, Ph; Cadot, M; Nahama, V; Petit, F; Ansseau, M

    2004-01-01

    anxiety, while in girls, by contrast, such a correlation did not exist, thus involving higher anxiety. In that way, on the one hand, intrinsic and extrinsic motivations in female students operated complementarily on the sense of incompetence and consequently on anxiety, the emotional component of test anxiety; on the other hand, in male students, intrinsic motivation had a negative correlation with the sense of incompetence and a lower correlation with extrinsic motivation, thereby shedding some light on the problem of anxiety-level differences according to gender. Moreover, that observation corresponded well to the self-worth model, in which test anxiety is understood as a manifestation of perceived incompetence and as a defensive way to ward off negative self-evaluation; that model suited boys particularly well and explained their attempts to maintain self-worth when risking academic failure. The present research assumes that the independence or combination of motivation components is also correlated with different expressions of aggressiveness: hostility corresponding to threat and characterizing girls more, while physical aggression corresponds to personal challenge, a more masculine attribution. If fighting against the sense of incompetence actually characterizes men, and consequently also reflects competitive aspects of performance strong enough to mobilize intrinsic motivation, what would be expected regarding the notion of threat, suspected to be predominant in girls? The idea of using a questionnaire that discriminates the specific dimensions of aggressiveness, namely the Aggression Questionnaire, should meet the following purposes: first, establish a French version of that aggression questionnaire, perform the factorial analyses and internal-consistency checks, and compare them with other previous samples; then differentiate gender in general and in failure versus success situations.
    Finally, include the different components of aggressiveness in the first described model and build a new

  20. Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells

    Science.gov (United States)

    Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke

    2011-06-01

    Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset can, however, not provide information on whether a correlation is truly disease-specific with respect to the disease samples and not a correlation of control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species were measured by mass spectrometry in order to evaluate lipid remodeling as a result to a combination of perturbation of cells inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
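
The partial correlations underlying a Gaussian graphical model can be read directly off the inverse covariance (precision) matrix. This minimal sketch (my own illustrative code, not the authors' differential-GGM implementation) shows the key property the abstract relies on: a variable chain x → y → z has a large marginal x–z correlation but a near-zero direct (partial) one:

```python
import numpy as np

def partial_correlations(data):
    """Partial-correlation matrix from the precision (inverse covariance)
    matrix: rho_ij = -P_ij / sqrt(P_ii * P_jj). Near-zero off-diagonal
    entries indicate no direct edge in the Gaussian graphical model."""
    P = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

# chain x -> y -> z: x and z correlate marginally but only indirectly
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = x + 0.5 * rng.standard_normal(5000)
z = y + 0.5 * rng.standard_normal(5000)
R = partial_correlations(np.column_stack([x, y, z]))
marginal_xz = np.corrcoef(x, z)[0, 1]
```

The differential step described in the abstract would then compare the disease-sample network against the control-sample network and keep only edges specific to the former.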

  1. Detection of Impaired Cerebral Autoregulation Using Selected Correlation Analysis: A Validation Study.

    Science.gov (United States)

    Proescholdt, Martin A; Faltermeier, Rupert; Bele, Sylvia; Brawanski, Alexander

    2017-01-01

    Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data.
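
The PRx benchmark used above is, in essence, a moving-window Pearson correlation between arterial blood pressure and intracranial pressure. A minimal sketch of that idea follows (parameter choices and names are my own illustration, not the clinical implementation):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def moving_prx(abp, icp, window=30, step=5):
    """PRx-style index: Pearson correlation of ABP and ICP over sliding
    windows; values near +1 suggest pressure-passive (impaired)
    cerebral autoregulation."""
    out = []
    for start in range(0, len(abp) - window + 1, step):
        out.append(pearson(abp[start:start + window],
                           icp[start:start + window]))
    return out

# a perfectly pressure-passive segment: ICP tracks ABP linearly
abp = [float(i) for i in range(100)]
icp = [2.0 * a + 1.0 for a in abp]
vals = moving_prx(abp, icp)
```

An sca-style analysis would then flag time segments where such correlation patterns persist, rather than thresholding a single index value.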

  3. Correlation of alluvial deposits at the Nevada Test Site

    International Nuclear Information System (INIS)

    Grothaus, B.; Howard, N.

    1977-01-01

    Because characteristics of rock layers and problems in drilling must be studied before radioactive waste can be safely contained, an evaluation was made of methods for correlating alluvial deposits at Yucca Flat of the Nevada Test Site (NTS). Although correlation of Tertiary volcanic tuff beds at the NTS has been successfully achieved, correlation of stratigraphic zones in the overlying alluvium has posed technical difficulties. We have evaluated several methods for correlating alluvial deposits from drillholes, including electric resistivity logs (E logs), visual examination of sidewall samples and comparison of their carbonate (CO2) content, downhole stereo photography for identifying debris-flow deposits, caliche age-dating, and specific yield and permeability measurements of deposits. For predicting the thickness of zones having similar physical properties in the alluvium, E-log measurements were found to be the most useful of these methods

  4. Uncertainty analysis of constant amplitude fatigue test data employing the six parameters random fatigue limit model

    Directory of Open Access Journals (Sweden)

    Leonetti Davide

    2018-01-01

    Estimating and reducing uncertainty in fatigue test data analysis is a relevant task in order to assess the reliability of a structural connection with respect to fatigue. Several statistical models have been proposed in the literature with the aim of representing the stress range vs. endurance trend of fatigue test data under constant amplitude loading and the scatter in the finite and infinite life regions. In order to estimate the safety level of the connection, the uncertainty related to the amount of information available also needs to be estimated using the methods provided by statistical theory. Bayesian analysis is employed to reduce the uncertainty due to the often small amount of test data by introducing prior information related to the parameters of the statistical model. In this work, the inference of fatigue test data belonging to cover-plated steel beams is presented. The uncertainty is estimated by making use of Bayesian and frequentist methods. The 5% quantile of the fatigue life is estimated by taking into account the uncertainty related to the sample size, for both a dataset containing few samples and one containing more data. The S-N curves resulting from the application of the employed methods are compared and the effect of the reduction of uncertainty in the infinite life region is quantified.

  5. Dynamical analysis of a PWR internals using super-elements in an integrated 3-D model. Part 1: model description and static tests

    International Nuclear Information System (INIS)

    Jesus Miranda, C.A. de.

    1992-01-01

    An integrated 3-D model of the core support internals of a research PWR was developed for dynamic analyses. The static tests for the validation of the model are presented. There are about 90 super-elements with approximately 85000 degrees of freedom (DoF), 8200 master DoF, and 12000 elements, of which about 8400 are thin shell elements. A DEC VAX 11/785 computer and the ANSYS program were used. If impacts occur, the spectral seismic analysis will be changed to a non-linear one with direct integration of the displacement pulse derived from the seismic accelerogram. The latter will be obtained from the seismic acceleration response spectra. (author)

  6. Correlation analysis of the physiological factors controlling fundamental voice frequency.

    Science.gov (United States)

    Atkinson, J E

    1978-01-01

    A technique has been developed to obtain a quantitative measure of correlation between electromyographic (EMG) activity of various laryngeal muscles, subglottal air pressure, and the fundamental frequency of vibration of the vocal folds (Fo). Data were collected and analyzed on one subject, a native speaker of American English. The results show that an analysis of this type can provide a useful measure of correlation between the physiological and acoustical events in speech and, furthermore, can yield detailed insights into the organization and nature of the speech production process. In particular, based on these results, a model is suggested of Fo control involving laryngeal state functions that seems to agree with present knowledge of laryngeal control and experimental evidence.

  7. Posttest analysis of MIST Test 330302 using TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Boyack, B.E.

    1992-09-01

    This report discusses a posttest analysis of Multi-Loop Integral System Test (MIST) 330302 which has been performed using TRAC-PF1/MOD1. This test was one of a group performed in the MIST facility to investigate high-pressure injection (HPI)-power-operated relief valve (PORV) cooling, also known as feed-and-bleed cooling. In Test 330302, HPI cooling was delayed 20 min after opening and locking the PORV open to induce extensive system voiding. We have concluded that the TRAC-calculated results are in reasonable overall agreement with the data for Test 330302. All major trends and phenomena were correctly predicted. Differences observed between the measured and calculated results have been traced and related, in part, to deficiencies in our knowledge of the facility configuration and operation. We have identified two models for which additional review is appropriate. However, in general, the TRAC closure models and correlations appear to be adequate for the prediction of the phenomena expected to occur during feed-and-bleed transients in the MIST facility. We believe that the correct conclusions about trends and phenomena will be reached if the code is used in similar applications. Conclusions reached regarding use of the code to calculate similar phenomena in full-size plants (scaling implications) and regulatory implications of this work are also presented

  8. Quantitative analysis of LISA pathfinder test-mass noise

    International Nuclear Information System (INIS)

    Ferraioli, Luigi; Congedo, Giuseppe; Hueller, Mauro; Vitale, Stefano; Hewitson, Martin; Nofrarias, Miquel; Armano, Michele

    2011-01-01

    LISA Pathfinder (LPF) is a mission aiming to test the critical technology for the forthcoming space-based gravitational-wave detectors. The main scientific objective of the LPF mission is to demonstrate test masses free falling with residual accelerations below 3×10⁻¹⁴ m s⁻²/√Hz at 1 mHz. Reaching such an ambitious target will require a significant amount of system optimization and characterization, which will in turn require accurate and quantitative noise analysis procedures. In this paper, we discuss two main problems associated with the analysis of the data from LPF: i) excess noise detection and ii) noise parameter identification. The mission is focused on the low-frequency region ([0.1, 10] mHz) of the available signal spectrum. In such a region, the signal is dominated by the force noise acting on the test masses. At the same time, the mission duration is limited to 90 days and typical data segments will be 24 hours in length. Considering those constraints, noise analysis is expected to deal with a limited amount of non-Gaussian data, since the spectrum statistics will be far from Gaussian and the lowest available frequency is limited by the data length. In this paper, we analyze the details of the expected statistics for spectral data and develop two suitable excess-noise estimators. One is based on the statistical properties of the integrated spectrum, the other is based on the Kolmogorov-Smirnov test. The sensitivity of the estimators is discussed theoretically for independent data, then the algorithms are tested on LPF synthetic data. The test on realistic LPF data allows the effect of spectral data correlations on the efficiency of the different noise excess estimators to be highlighted. It also reveals the versatility of the Kolmogorov-Smirnov approach, which can be adapted to provide reasonable results on correlated data from a modified version of the standard equations for the inversion of the test statistic. Closely related to excess noise
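
The Kolmogorov-Smirnov idea mentioned above can be illustrated simply: for Gaussian noise, suitably normalized periodogram bins follow a unit-mean exponential law, so excess noise shows up as a large KS distance between the empirical distribution of the bins and that exponential reference. This is an illustrative sketch (my own simplified estimator, not the LPF pipeline):

```python
import math

def ks_exponential(samples):
    """One-sample Kolmogorov-Smirnov statistic of `samples` against the
    unit-mean exponential law, i.e. the distribution expected for
    normalized periodogram bins of Gaussian noise."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-x)          # exponential reference CDF
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

# calibration: samples placed exactly at the exponential quantiles give the
# minimal possible statistic D = 0.5/n for this plotting convention
n = 100
calibration = [-math.log(1 - (i - 0.5) / n) for i in range(1, n + 1)]
d = ks_exponential(calibration)
```

For correlated spectral data the abstract notes that the test-statistic inversion must be modified, since the effective number of independent bins is smaller than n.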

  9. Multi Resolution In-Situ Testing and Multiscale Simulation for Creep Fatigue Damage Analysis of Alloy 617

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yongming [Arizona State Univ., Tempe, AZ (United States). School for Engineering of Matter, Transport and Energy; Oskay, Caglar [Vanderbilt Univ., Nashville, TN (United States). Dept. of Civil and Environmental Engineering

    2017-04-30

    This report outlines the research activities that were carried out for the integrated experimental and simulation investigation of creep-fatigue damage mechanisms and life prediction of the nickel-based alloy Inconel 617 at high temperatures (950° and 850°). First, a novel experimental design using a hybrid control technique is proposed. The newly developed experimental technique can generate different combinations of creep and fatigue damage by changing the experimental design parameters. Next, detailed imaging analysis and statistical data analysis are performed to quantify the failure mechanisms of the creep fatigue of alloy 617 at high temperatures. It is observed that the creep damage is directly associated with the internal voids at the grain boundaries and the fatigue damage is directly related to the surface cracking. It is also observed that the classical time fraction approach does not have a good correlation with the experimentally observed damage features. An effective time fraction parameter is seen to have an excellent correlation with the material microstructural damage. Thus, a new empirical damage interaction diagram is proposed based on the experimental observations. Following this, a macro-level viscoplastic model coupled with damage is developed to simulate the stress/strain response under creep-fatigue loadings. A damage rate function based on the hysteresis energy and creep energy is proposed to capture the softening behavior of the material, and a good correlation with life prediction and material hysteresis behavior is observed. The simulation work is extended to include the microstructural heterogeneity. A crystal plasticity finite element model considering isothermal and large deformation conditions at the microstructural scale has been developed for fatigue, creep-fatigue as well as creep deformation and rupture at high temperature. The model considers collective dislocation glide and climb of the grains and progressive damage accumulation of

  10. The Butterfly Effect: Correlations Between Modeling in Nuclear-Particle Physics and Socioeconomic Factors

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W.; Dressendorfer, Paul V.

    2010-01-01

    A scientometric analysis has been performed on selected physics journals to estimate the presence of simulation and modeling in the physics literature over the past fifty years. Correlations between the observed trends and several social and economic factors have been evaluated.

  11. Bayesian Correlation Analysis for Sequence Count Data.

    Directory of Open Access Journals (Sweden)

    Daniel Sánchez-Taltavull

    Full Text Available Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities' measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low, especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities' signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset.
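The depth-aware shrinkage idea can be sketched in a few lines. The sketch below is a deliberately simplified stand-in (Gamma-Poisson posterior means followed by Pearson correlation), not the paper's actual estimator; the function name `bayesian_corr` and the prior parameters `a`, `b` are illustrative assumptions:

```python
import numpy as np

def bayesian_corr(x, y, depth_x, depth_y, a=1.0, b=1.0):
    """Illustrative sketch: shrink raw counts toward a Gamma(a, b) prior
    before correlating, so low-depth measurements are damped.
    (A hypothetical simplification, not the paper's exact scheme.)"""
    # Posterior mean of a Poisson rate lambda with Gamma(a, b) prior,
    # given count k at sequencing depth d: E[lambda | k, d] = (a + k) / (b + d).
    lam_x = (a + np.asarray(x)) / (b + depth_x)
    lam_y = (a + np.asarray(y)) / (b + depth_y)
    return float(np.corrcoef(lam_x, lam_y)[0, 1])
```

At high depth the posterior means track the raw rates and the estimate approaches the ordinary Pearson correlation; at low depth the counts are shrunk toward the common prior mean, suppressing spurious correlations, which mirrors the qualitative behavior described in the abstract.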

  12. pETM: a penalized Exponential Tilt Model for analysis of correlated high-dimensional DNA methylation data.

    Science.gov (United States)

    Sun, Hokeun; Wang, Ya; Chen, Yong; Li, Yun; Wang, Shuang

    2017-06-15

    DNA methylation plays an important role in many biological processes and cancer progression. Recent studies have found that there are also differences in methylation variations in different groups other than differences in methylation means. Several methods have been developed that consider both mean and variance signals in order to improve statistical power of detecting differentially methylated loci. Moreover, as methylation levels of neighboring CpG sites are known to be strongly correlated, methods that incorporate correlations have also been developed. We previously developed a network-based penalized logistic regression for correlated methylation data, but only focusing on mean signals. We have also developed a generalized exponential tilt model that captures both mean and variance signals but only examining one CpG site at a time. In this article, we proposed a penalized Exponential Tilt Model (pETM) using network-based regularization that captures both mean and variance signals in DNA methylation data and takes into account the correlations among nearby CpG sites. By combining the strength of the two models we previously developed, we demonstrated the superior power and better performance of the pETM method through simulations and the applications to the 450K DNA methylation array data of the four breast invasive carcinoma cancer subtypes from The Cancer Genome Atlas (TCGA) project. The developed pETM method identifies many cancer-related methylation loci that were missed by our previously developed method that considers correlations among nearby methylation loci but not variance signals. The R package 'pETM' is publicly available through CRAN: http://cran.r-project.org . sw2206@columbia.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  13. WGCNA: an R package for weighted correlation network analysis.

    Science.gov (United States)

    Langfelder, Peter; Horvath, Steve

    2008-12-29

    Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network-based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
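The core WGCNA workflow (soft-thresholded correlation adjacency, a dissimilarity, hierarchical clustering into modules) can be sketched outside R as well. The following is a minimal illustration on two synthetic groups of co-expressed genes, assuming a soft-threshold power of 6; it omits the topological overlap and eigengene steps of the real package:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Two synthetic modules: five genes driven by factor a, five by factor b.
rng = np.random.default_rng(1)
n = 50
a, b = rng.normal(size=n), rng.normal(size=n)
X = np.stack([a + 0.1 * rng.normal(size=n) for _ in range(5)] +
             [b + 0.1 * rng.normal(size=n) for _ in range(5)])

adj = np.abs(np.corrcoef(X)) ** 6      # soft-thresholded correlation adjacency
diss = 1.0 - adj                       # turn adjacency into a dissimilarity
np.fill_diagonal(diss, 0.0)
Z = linkage(squareform(diss, checks=False), method='average')
modules = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 modules
```

With strong within-group correlation the two blocks separate cleanly; real expression data would require choosing the soft-threshold power from the scale-free topology criterion, as the package documentation describes.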

  14. Assessment of correlations and models for prediction of CHF in subcooled flow boiling

    International Nuclear Information System (INIS)

    Celata, G.P.; Mariani, A.; Cumo, M.

    1992-01-01

    This paper provides an analysis of available correlations and models for the prediction of Critical Heat Flux (CHF) in subcooled flow boiling in the ranges of interest of fusion reactor thermal-hydraulic conditions, i.e., high inlet liquid subcooling and velocity and small channel diameter and length. The aim of the study was to establish the limits of validity of present predictive tools (most of them were proposed with reference to LWR thermal-hydraulic studies) in the above conditions. The reference data-set represents most of the available data, covering wide ranges of operating conditions in the framework of present interest (0.1 … ΔT_sub,in < 230 K). Among the tens of predictive tools available in the literature, four correlations (Levy, Westinghouse, modified-Tong and Tong-75) and three models (Weisman and Ileslamlou, Lee and Mudawar, and Katto) were selected. The modified-Tong correlation and the Katto model seem to be reliable predictive tools for the calculation of the CHF in subcooled flow boiling.

  15. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    Science.gov (United States)

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  16. Testing for Volatility Co-movement in Bivariate Stochastic Volatility Models

    NARCIS (Netherlands)

    J. Chen (Jinghui); M. Kobayashi (Masahito); M.J. McAleer (Michael)

    2017-01-01

    The paper considers the problem of volatility co-movement, namely whether two financial returns have a perfectly correlated common volatility process, in the framework of multivariate stochastic volatility models, and proposes a test which checks for volatility co-movement. The

  17. Diagrammatic analysis of correlations in polymer fluids: Cluster diagrams via Edwards' field theory

    International Nuclear Information System (INIS)

    Morse, David C.

    2006-01-01

    Edwards' functional integral approach to the statistical mechanics of polymer liquids is amenable to a diagrammatic analysis in which free energies and correlation functions are expanded as infinite sums of Feynman diagrams. This analysis is shown to lead naturally to a perturbative cluster expansion that is closely related to the Mayer cluster expansion developed for molecular liquids by Chandler and co-workers. Expansion of the functional integral representation of the grand-canonical partition function yields a perturbation theory in which all quantities of interest are expressed as functionals of a monomer-monomer pair potential, as functionals of intramolecular correlation functions of non-interacting molecules, and as functions of molecular activities. In different variants of the theory, the pair potential may be either a bare or a screened potential. A series of topological reductions yields a renormalized diagrammatic expansion in which collective correlation functions are instead expressed diagrammatically as functionals of the true single-molecule correlation functions in the interacting fluid, and as functions of molecular number density. Similar renormalized expansions are also obtained for a collective Ornstein-Zernike direct correlation function, and for intramolecular correlation functions. A concise discussion is given of the corresponding Mayer cluster expansion, and of the relationship between the Mayer and perturbative cluster expansions for liquids of flexible molecules. The application of the perturbative cluster expansion to coarse-grained models of dense multi-component polymer liquids is discussed, and a justification is given for the use of a loop expansion. As an example, the formalism is used to derive a new expression for the wave-number dependent direct correlation function and recover known expressions for the intramolecular two-point correlation function to first-order in a renormalized loop expansion for coarse-grained models of

  18. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    Science.gov (United States)

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness.Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
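For readers unfamiliar with the underlying technique, classical canonical correlation analysis can be written compactly: whiten each variable block by the Cholesky factor of its covariance, and the singular values of the whitened cross-covariance matrix are the canonical correlations. The sketch below shows plain individual-level CCA (the method metaCCA extends to summary statistics), with a small ridge term `reg` added as an assumed numerical safeguard; it is not metaCCA itself:

```python
import numpy as np

def cca_first_corr(X, Y, reg=1e-8):
    """First canonical correlation between variable blocks X (n x p)
    and Y (n x q), via the standard whitening formulation.
    (A minimal sketch of classical CCA, not the metaCCA algorithm.)"""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / len(X)
    # Whiten each block; the singular values of the whitened
    # cross-covariance Lx^{-1} Sxy Ly^{-T} are the canonical correlations.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    return float(np.linalg.svd(M, compute_uv=False)[0])
```

metaCCA's contribution is to build the required covariance blocks from published univariate summary statistics, with shrinkage for robustness, instead of from individual-level `X` and `Y` as done here.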

  19. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting’s behavior under known forces and moments, but sometimes unpredictable, aerodynamically caused model behavior can cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists in a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.

  20. Testing an integrated behavioural and biomedical model of disability in N-of-1 studies with chronic pain.

    Science.gov (United States)

    Quinn, Francis; Johnston, Marie; Johnston, Derek W

    2013-01-01

    Previous research has supported an integrated biomedical and behavioural model explaining activity limitations. However, further tests of this model are required at the within-person level, because while it proposes that the constructs are related within individuals, it has primarily been tested between individuals in large group studies. We aimed to test the integrated model at the within-person level. Six correlational N-of-1 studies in participants with arthritis, chronic pain and walking limitations were carried out. Daily measures of theoretical constructs were collected using a hand-held computer (PDA), the activity was assessed by self-report and accelerometer and the data were analysed using time-series analysis. The biomedical model was not supported as pain impairment did not predict activity, so the integrated model was supported partially. Impairment predicted intention to move around, while perceived behavioural control (PBC) and intention predicted activity. PBC did not predict activity limitation in the expected direction. The integrated model of disability was partially supported within individuals, especially the behavioural elements. However, results suggest that different elements of the model may drive activity (limitations) for different individuals. The integrated model provides a useful framework for understanding disability and suggests interventions, and the utility of N-of-1 methodology for testing theory is illustrated.

  1. Modeling and preliminary thermal analysis of the capsule for a creep test in HANARO

    International Nuclear Information System (INIS)

    Choi, Myoung Hwan; Cho, Man Soon; Choo, Kee Nam; Kang, Young Hwan; Sohn, Jae Min; Shin, Yoon Taeg; Park, Sung Jae; Kim, Bong Goo; Kim, Young Jin

    2005-01-01

    A creep capsule is a device to investigate the creep characteristics of nuclear materials during in-pile irradiation tests. To obtain the design data of the capsule through a preliminary thermal analysis, a 2-dimensional model of the cross section of the capsule, including the specimens and components, is generated, and an analysis using the ANSYS program is performed. The gamma-heating rates of the materials for the HANARO power of 30 MW are considered, and the effect of the gap size and the control rod position on the temperature of the specimen is discussed. From the analysis it is found that the gap between the thermal media and the external tube has a significant effect on the temperature of the specimen. The specimen temperature decreases as the control rod position is raised.

  2. Bayesian Analysis for Dynamic Generalized Linear Latent Model with Application to Tree Survival Rate

    Directory of Open Access Journals (Sweden)

    Yu-sheng Cheng

    2014-01-01

    Full Text Available The logistic regression model is the most popular regression technique for modeling categorical data, especially dichotomous variables. The classic logistic regression model is typically used to interpret the relationship between response variables and explanatory variables. However, in real applications, most data sets are collected in follow-up studies, which leads to temporal correlation among the data. In order to characterize the correlations among the different variables, a new method based on latent variables is introduced in this study. At the same time, latent variables following an AR(1) model are used to depict the time dependence. In the framework of Bayesian analysis, parameter estimates and statistical inferences are carried out via a Gibbs sampler with a Metropolis-Hastings (MH) algorithm. Model comparison, based on the Bayes factor, and forecasting/smoothing of the survival rate of the tree are established. A simulation study is conducted to assess the performance of the proposed method, and a pika data set is analyzed to illustrate the real application. Since Bayes factor approaches vary significantly, efficiency tests have been performed in order to decide which solution provides a better tool for the analysis of real relational data sets.
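As a concrete illustration of the Metropolis-Hastings step that such a Gibbs scheme relies on, the sketch below samples the posterior of a single logit-scale parameter for Bernoulli data. It is a generic random-walk MH sampler under stated assumptions (a N(0, 10²) prior, proposal standard deviation 0.3, and the hypothetical name `mh_logistic_intercept`), not the paper's full dynamic latent-variable model:

```python
import numpy as np

def mh_logistic_intercept(y, n_iter=5000, prop_sd=0.3, seed=0):
    """Random-walk Metropolis-Hastings for the intercept b of a
    Bernoulli-logit model, p = 1 / (1 + exp(-b)), with a N(0, 10^2) prior.
    (A minimal MH sketch, not the paper's Gibbs-with-MH sampler.)"""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)

    def log_post(b):
        p = 1.0 / (1.0 + np.exp(-b))
        # Bernoulli log-likelihood plus N(0, 100) log-prior (up to a constant).
        return np.sum(y * np.log(p) + (1.0 - y) * np.log1p(-p)) - b**2 / 200.0

    b, samples = 0.0, []
    for _ in range(n_iter):
        b_new = b + prop_sd * rng.normal()          # symmetric proposal
        if np.log(rng.uniform()) < log_post(b_new) - log_post(b):
            b = b_new                               # accept the move
        samples.append(b)
    return np.array(samples)
```

In the paper's setting this kind of MH update would be embedded inside a Gibbs sweep that also draws the AR(1) latent states, with burn-in discarded before summarizing the chain.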

  3. Model tests and elasto-plastic finite element analysis on multicavity type PCRV

    International Nuclear Information System (INIS)

    Nojiri, Y.; Yamazaki, M.; Kotani, K.; Matsuzaki, Y.

    1978-01-01

    Multicavity type PCRV models were tested to investigate elastic stress distributions, cracking and failure mode of the models, and to determine the adequacy and relative accuracy of finite element structural analyses. The behavior of the models under pressure was investigated, and it was found that the predictions of the analyses showed a good agreement with the test results

  4. Pearson's chi-square test and rank correlation inferences for clustered data.

    Science.gov (United States)

    Shih, Joanna H; Fay, Michael P

    2017-09-01

    Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are only valid when each pair of responses is independent, that is, when each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large-sample properties of the new proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
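The within-cluster resampling idea can be sketched directly: draw one pair per cluster (restoring independence across the resampled pairs), compute the rank correlation on that reduced sample, and average over many draws. This is a minimal illustration of the resampling principle only, not the authors' U-statistic estimator; the function name and defaults are assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

def wcr_spearman(clusters, n_resample=200, seed=0):
    """Within-cluster resampling sketch for clustered paired data:
    each cluster is an (m_i x 2) array of (x, y) pairs. Draw one pair
    per cluster, compute Spearman's rho on the independent sample,
    and average over draws. (Illustrative, not the paper's estimator.)"""
    rng = np.random.default_rng(seed)
    rhos = []
    for _ in range(n_resample):
        # One randomly chosen (x, y) pair from each cluster.
        sample = np.array([c[rng.integers(len(c))] for c in clusters])
        rhos.append(spearmanr(sample[:, 0], sample[:, 1])[0])
    return float(np.mean(rhos))
```

The paper's contribution beyond this naive average is the large-sample theory (variance estimation and valid inference) for such resampling-based estimators, including handling of ties.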

  5. Dimensionless numbers and correlating equations for the analysis of the membrane-gas diffusion electrode assembly in polymer electrolyte fuel cells

    Science.gov (United States)

    Gyenge, E. L.

    The Quraishi-Fahidy method [Can. J. Chem. Eng. 59 (1981) 563] was employed to derive characteristic dimensionless numbers for the membrane-electrolyte, cathode catalyst layer and gas diffuser, respectively, based on the model presented by Bernardi and Verbrugge for polymer electrolyte fuel cells [AIChE J. 37 (1991) 1151]. Monomial correlations among dimensionless numbers were developed and tested against experimental and mathematical modeling results. Dimensionless numbers comparing the bulk and surface-convective ionic conductivities, the electric and viscous forces and the current density and the fixed surface charges, were employed to describe the membrane ohmic drop and its non-linear dependence on current density due to membrane dehydration. The analysis of the catalyst layer yielded electrode kinetic equivalents of the second Damköhler number and Thiele modulus, influencing the penetration depth of the oxygen reduction front based on the pseudohomogeneous film model. The correlating equations for the catalyst layer could describe in a general analytical form, all the possible electrode polarization scenarios such as electrode kinetic control coupled or not with ionic and/or oxygen mass transport limitation. For the gas diffusion-backing layer correlations are presented in terms of the Nusselt number for mass transfer in electrochemical systems. The dimensionless number-based correlating equations for the membrane electrode assembly (MEA) could provide a practical approach to quantify single-cell polarization results obtained under a variety of experimental conditions and to implement them in models of the fuel cell stack.

  6. Dimensionless numbers and correlating equations for the analysis of the membrane-gas diffusion electrode assembly in polymer electrolyte fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Gyenge, E.L. [Department of Chemical and Biological Engineering, The University of British Columbia, 2216 Main Mall, Vancouver, BC (Canada V6T 1Z4)

    2005-12-01

    The Quraishi-Fahidy method [Can. J. Chem. Eng. 59 (1981) 563] was employed to derive characteristic dimensionless numbers for the membrane-electrolyte, cathode catalyst layer and gas diffuser, respectively, based on the model presented by Bernardi and Verbrugge for polymer electrolyte fuel cells [AIChE J. 37 (1991) 1151]. Monomial correlations among dimensionless numbers were developed and tested against experimental and mathematical modeling results. Dimensionless numbers comparing the bulk and surface-convective ionic conductivities, the electric and viscous forces and the current density and the fixed surface charges, were employed to describe the membrane ohmic drop and its non-linear dependence on current density due to membrane dehydration. The analysis of the catalyst layer yielded electrode kinetic equivalents of the second Damköhler number and Thiele modulus, influencing the penetration depth of the oxygen reduction front based on the pseudohomogeneous film model. The correlating equations for the catalyst layer could describe in a general analytical form, all the possible electrode polarization scenarios such as electrode kinetic control coupled or not with ionic and/or oxygen mass transport limitation. For the gas diffusion-backing layer correlations are presented in terms of the Nusselt number for mass transfer in electrochemical systems. The dimensionless number-based correlating equations for the membrane electrode assembly (MEA) could provide a practical approach to quantify single-cell polarization results obtained under a variety of experimental conditions and to implement them in models of the fuel cell stack. (author)

  7. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  8. Pion interferometric tests of transport models

    Energy Technology Data Exchange (ETDEWEB)

    Padula, S.S.; Gyulassy, M.; Gavin, S. (Lawrence Berkeley Lab., CA (USA). Nuclear Science Div.)

    1990-01-08

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving gaussian and inside-outside phase space distributions are considered. (orig.).

  9. Pion interferometric tests of transport models

    International Nuclear Information System (INIS)

    Padula, S.S.; Gyulassy, M.; Gavin, S.

    1990-01-01

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving gaussian and inside-outside phase space distributions are considered. (orig.)

  10. A Note on the Correlated Random Coefficient Model

    DEFF Research Database (Denmark)

    Kolodziejczyk, Christophe

    In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient, which is correlated with a binary variable. We provide set-identification of the parameters of interest of the model. We also show how to reduce the bias of the estimator...

  11. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...

  12. Evaluating Multispectral Snowpack Reflectivity With Changing Snow Correlation Lengths

    Science.gov (United States)

    Kang, Do Hyuk; Barros, Ana P.; Kim, Edward J.

    2016-01-01

    This study investigates the sensitivity of multispectral reflectivity to changing snow correlation lengths. Matzler's ice-lamellae radiative transfer model was implemented and tested to evaluate the reflectivity over a range of snow correlation lengths at multiple frequencies from the ultraviolet (UV) to the microwave bands. The model reveals that, in the UV to infrared (IR) frequency range, the reflectivity and correlation length are inversely related, whereas reflectivity increases with snow correlation length in the microwave frequency range. The model further shows that the reflectivity behavior can be mainly attributed to scattering rather than absorption for shallow snowpacks. The largest scattering coefficients and reflectivity occur at very small correlation lengths (approximately 10^-5 m) for frequencies higher than the IR band. In the microwave range, the largest scattering coefficients are found at millimeter wavelengths. For validation purposes, the ice-lamella model is coupled with a multilayer snow physics model to characterize the reflectivity response of realistic snow hydrological processes. The evolution of the coupled-model-simulated reflectivities in both the visible and the microwave bands is consistent with satellite-based reflectivity observations at the same frequencies. The model results are also compared with colocated in situ snow correlation length measurements (Cold Land Processes Field Experiment 2002-2003). The analysis and evaluation of model results indicate that the coupled multifrequency radiative transfer and snow hydrology modeling system can be used as a forward operator in a data-assimilation framework to predict the status of snow physical properties, including snow correlation length.

  13. Analysis of two-phase flow velocity measurements by cross-correlation techniques and the applicability of the drift flux model for their interpretation

    International Nuclear Information System (INIS)

    Analytis, G.Th.; Luebbesmeyer, D.

    1982-11-01

    An extensive and detailed investigation of two-phase flow velocity measurements by cross-correlating noise signals of information carriers (neutrons, gammas, visible light) modulated by the two-phase flow and registered by two axially placed detectors outside the flow is pursued. To this end, a detailed analysis of velocity measurements in experimental loops and a large number of velocity measurements in a commercial BWR is undertaken, and the applicability and limitations of the drift flux model for their interpretation is investigated. On the basis of this extensive analysis, the authors propose a physically plausible explanation for the deviations in the upper part of the core, expound on why the drift flux model is, to a great extent, not suitable for interpreting two-phase flow velocity measurements by cross-correlation techniques reported in the present work, and conclude that due to the large number of uncertainties and the lack of detailed knowledge about the kind of microstructures of the flow which the detectors prefer to "sample", one can safely assume that at least in the lower half of the core the velocity measured can be well approximated by the velocity of the centre of volume, from which the mass fluxes can readily be computed. (Auth.)
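
    The transit-time principle behind such cross-correlation velocity measurements can be sketched numerically. This is a minimal illustration with synthetic signals, not the instrumentation or analysis of the study: the lag that maximises the cross-correlation of the two detector signals estimates the transit time between them, and velocity follows from the detector spacing.

```python
import numpy as np

def transit_time_velocity(upstream, downstream, dt, spacing):
    """Estimate flow velocity from two detector signals: the lag that
    maximises their cross-correlation is the transit time."""
    up = upstream - upstream.mean()
    down = downstream - downstream.mean()
    xcorr = np.correlate(down, up, mode="full")
    lag = np.argmax(xcorr) - (len(up) - 1)  # positive: downstream lags
    return spacing / (lag * dt)

# Synthetic check: downstream sees the same noise 50 samples later;
# dt = 1 ms and 0.1 m detector spacing give 0.1 m / 0.05 s = 2 m/s.
rng = np.random.default_rng(0)
noise = rng.normal(size=2000)
delay = 50
upstream = noise[delay:]
downstream = noise[:-delay]
v = transit_time_velocity(upstream, downstream, dt=1e-3, spacing=0.1)
print(round(v, 3))  # 2.0
```

    In real signals the correlation peak is broadened by slip between phases and by the detectors' spatial response, which is precisely why the interpretation question addressed in the abstract arises.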

  14. Higher genus correlators from the hermitian one-matrix model

    International Nuclear Information System (INIS)

    Ambjoern, J.; Chekhov, L.; Makeenko, Yu.

    1992-01-01

    We develop an iterative algorithm for the genus expansion of the Hermitian N×N one-matrix model (the Penner model in an external field). By introducing moments of the external field, we prove that the genus g contribution to the m-loop correlator depends only on 3g-2+m lower moments (3g-2 for the partition function). We present the explicit results for the partition function and the one-loop correlator in genus one. We compare the correlators for the Hermitian one-matrix model with those at zero momenta for c=1 CFT and show an agreement of the one-loop correlators for genus zero. (orig.)

  15. Analysis of Correlation in MEMS Gyroscope Array and its Influence on Accuracy Improvement for the Combined Angular Rate Signal

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-01-01

    Obtaining a correlation factor is a prerequisite for fusing the multiple outputs of a microelectromechanical system (MEMS) gyroscope array and evaluating the accuracy improvement. In this paper, a mathematical statistics method is established to analyze and obtain the practical correlation factor of a MEMS gyroscope array, which solves the problem of determining the Kalman filter (KF) covariance matrix Q and fusing the multiple gyroscope signals. The working principle and mathematical model of the sensor array fusion are briefly described, and then an optimal estimate of the input rate signal is achieved by using a steady-state KF gain in an off-line estimation approach. Both theoretical analysis and simulation show that a negative correlation factor has a favorable influence on accuracy improvement. Additionally, a four-gyro array system composed of four discrete individual gyroscopes was developed to test the correlation factor and its influence on KF accuracy improvement. The results showed that correlation factors have both positive and negative values; in particular, the correlation factor differs between the different unit pairs in the array. The test results also indicated that the Angular Random Walk (ARW) of 1.57°/h^0.5 and bias drift of 224.2°/h for a single gyroscope were reduced to 0.33°/h^0.5 and 47.8°/h with some negative correlation factors present in the gyroscope array, giving a noise reduction factor of about 4.7, which is higher than that of an uncorrelated four-gyro array. The overall accuracy of the combined angular rate signal can be further improved if the negative correlation factors in the gyroscope array become larger.
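
    Why negative correlation beats the classical √N improvement can be seen from the variance of an average of N equally correlated sensors, var(mean) = σ²(1 + (N−1)ρ)/N, valid for ρ > −1/(N−1). A minimal sketch with illustrative numbers, not the paper's KF fusion:

```python
import math

def noise_reduction_factor(n, rho):
    """Std-dev improvement of the n-sensor average over a single sensor,
    for equal variances and a common pairwise correlation rho
    (requires rho > -1/(n-1) for a valid covariance matrix)."""
    var_ratio = (1 + (n - 1) * rho) / n  # var(mean) / var(single)
    return 1 / math.sqrt(var_ratio)

print(round(noise_reduction_factor(4, 0.0), 2))   # 2.0  (classical sqrt(N))
print(round(noise_reduction_factor(4, -0.2), 2))  # 3.16 (beats sqrt(N))
```

    With even mildly negative pairwise correlation, the common-mode noise partially cancels in the average, which is consistent with the factor of about 4.7 reported for the four-gyro array.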

  16. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases

  17. Numerical development of a new correlation between biaxial fracture strain and material fracture toughness for small punch test

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Pradeep [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Dutta, B.K., E-mail: bijon.dutta@gmail.com [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Chattopadhyay, J. [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Reactor Safety Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)

    2017-04-01

    Miniaturized specimens are used to determine mechanical properties of materials, such as yield stress, ultimate stress, and fracture toughness. Use of such specimens is essential whenever only a limited quantity of material is available for testing, as with aged/irradiated materials. The miniaturized small punch test (SPT) is a technique widely used to determine changes in the mechanical properties of materials. Various empirical correlations have been proposed in the literature to determine the value of fracture toughness (J_IC) using this technique: biaxial fracture strain is determined from SPT tests, and this parameter is then used to determine J_IC via the available empirical correlations. The correlations between J_IC and biaxial fracture strain quoted in the literature are based on experimental data acquired for a large number of materials. A number of such correlations are available in the literature, and they are generally not in agreement with each other. In the present work, an attempt has been made to determine the correlation between biaxial fracture strain (ε_qf) and crack initiation toughness (J_i) numerically. About one hundred materials were digitally generated by varying yield stress, ultimate stress, hardening coefficient and Gurson parameters. Each such material set was then used to analyze an SPT specimen and a standard TPB specimen: analysis of the SPT specimen yielded the biaxial fracture strain (ε_qf), and analysis of the TPB specimen yielded the value of J_i. A graph was then plotted between these two parameters for all the digitally generated materials, and the best-fit straight line determines the correlation. It has also been observed that J_i can vary, within a limit, for the same value of biaxial fracture strain (ε_qf); such variation in the value of J_i has also been ascertained using the graph. Experimental SPT data acquired earlier for three materials were then used to get J

  18. Testing general relativity at cosmological scales: Implementation and parameter correlations

    International Nuclear Information System (INIS)

    Dossett, Jason N.; Ishak, Mustapha; Moldenhauer, Jacob

    2011-01-01

    The testing of general relativity at cosmological scales has become a possible and timely endeavor that is not only motivated by the pressing question of cosmic acceleration but also by the proposals of some extensions to general relativity that would manifest themselves at large scales of distance. We analyze here correlations between modified gravity growth parameters and some core cosmological parameters using the latest cosmological data sets including the refined Cosmic Evolution Survey 3D weak lensing. We provide the parametrized modified growth equations and their evolution. We implement known functional and binning approaches, and propose a new hybrid approach to evolve the modified gravity parameters in redshift (time) and scale. The hybrid parametrization combines a binned redshift dependence and a smooth evolution in scale avoiding a jump in the matter power spectrum. The formalism developed to test the consistency of current and future data with general relativity is implemented in a package that we make publicly available and call ISiTGR (Integrated Software in Testing General Relativity), an integrated set of modified modules for the publicly available packages CosmoMC and CAMB, including a modified version of the integrated Sachs-Wolfe-galaxy cross correlation module of Ho et al. and a new weak-lensing likelihood module for the refined Hubble Space Telescope Cosmic Evolution Survey weak gravitational lensing tomography data. We obtain parameter constraints and correlation coefficients finding that modified gravity parameters are significantly correlated with σ8 and mildly correlated with Ωm, for all evolution methods. The degeneracies between σ8 and modified gravity parameters are found to be substantial for the functional form and also for some specific bins in the hybrid and binned methods indicating that these degeneracies will need to be taken into consideration when using future high precision data.

  19. Risk of false decision on conformity of a multicomponent material when test results of the components' content are correlated.

    Science.gov (United States)

    Kuselman, Ilya; Pennecchi, Francesca R; da Silva, Ricardo J N B; Hibbert, D Brynn

    2017-11-01

    The probability of a false decision on conformity of a multicomponent material due to measurement uncertainty is discussed when test results are correlated. Specification limits of the components' content of such a material generate a multivariate specification interval/domain. When true values of components' content and corresponding test results are modelled by multivariate distributions (e.g. by multivariate normal distributions), a total global risk of a false decision on the material conformity can be evaluated based on calculation of integrals of their joint probability density function. No transformation of the raw data is required for that. A total specific risk can be evaluated as the joint posterior cumulative function of true values of a specific batch or lot lying outside the multivariate specification domain, when the vector of test results, obtained for the lot, is inside this domain. It was shown, using a case study of four components under control in a drug, that the correlation influence on the risk value is not easily predictable. To assess this influence, the evaluated total risk values were compared with those calculated for independent test results and also with those assuming much stronger correlation than that observed. While the observed statistically significant correlation did not lead to a visible difference in the total risk values in comparison to the independent test results, the stronger correlation among the variables caused either the total risk decreasing or its increasing, depending on the actual values of the test results. Copyright © 2017 Elsevier B.V. All rights reserved.
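
    The joint-pdf integrals over a multivariate specification domain described above can be approximated by Monte Carlo. The sketch below uses hypothetical means, standard uncertainties and a correlation for a two-component material (illustrative values, not the four-component drug case study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-component example (illustrative numbers only):
# measured contents, standard uncertainties, and their correlation.
mean = np.array([0.98, 1.02])
std = np.array([0.02, 0.03])
rho = 0.6
cov = np.array([[std[0]**2,             rho * std[0] * std[1]],
                [rho * std[0] * std[1], std[1]**2]])

lower, upper = np.array([0.95, 0.95]), np.array([1.05, 1.05])

# Monte Carlo estimate of the probability mass outside the multivariate
# specification domain (a stand-in for the joint-pdf integrals).
draws = rng.multivariate_normal(mean, cov, size=200_000)
risk = np.any((draws < lower) | (draws > upper), axis=1).mean()
print(f"risk of nonconformity ≈ {risk:.3f}")
```

    Varying `rho` in such a sketch reproduces the abstract's observation that correlation can either raise or lower the total risk depending on where the test results sit relative to the limits.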

  20. Subchannel analysis of a critical power test, using simulated BWR 8x8 fuel assembly

    International Nuclear Information System (INIS)

    Mitsutake, T.; Terasaka, H.; Yoshimura, K.; Oishi, M.; Inoue, A.; Akiyama, M.

    1990-01-01

    Critical power predictions have been compared with the critical power test data obtained in simulated BWR 8x8 fuel rod assemblies. Two analytical methods for critical power prediction in rod assemblies are used: the subchannel analysis using the COBRA/BWR subchannel computer code with empirical critical heat flux (CHF) correlations, and the liquid film dryout estimation using the CRIPP-3F 'multi-fluid' computer code. Improvements in both analytical methods were made for spacer effect modeling, though they were specific to the current BWR rod assembly type. In general, reasonable agreement was obtained in comparisons between the predictions and the obtained test data. (orig.)

  1. Meta-DiSc: a software for meta-analysis of test accuracy data.

    Science.gov (United States)

    Zamora, Javier; Abraira, Victor; Muriel, Alfonso; Khan, Khalid; Coomarasamy, Arri

    2006-07-12

    Systematic reviews and meta-analyses of test accuracy studies are increasingly being recognised as central in guiding clinical practice. However, there is currently no dedicated and comprehensive software for meta-analysis of diagnostic data. In this article, we present Meta-DiSc, a Windows-based, user-friendly, freely available (for academic use) software that we have developed, piloted, and validated to perform diagnostic meta-analysis. Meta-DiSc a) allows exploration of heterogeneity, with a variety of statistics including chi-square, I-squared and Spearman correlation tests, b) implements meta-regression techniques to explore the relationships between study characteristics and accuracy estimates, c) performs statistical pooling of sensitivities, specificities, likelihood ratios and diagnostic odds ratios using fixed and random effects models, both overall and in subgroups and d) produces high quality figures, including forest plots and summary receiver operating characteristic curves that can be exported for use in manuscripts for publication. All computational algorithms have been validated through comparison with different statistical tools and published meta-analyses. Meta-DiSc has a Graphical User Interface with roll-down menus, dialog boxes, and online help facilities. Meta-DiSc is a comprehensive and dedicated test accuracy meta-analysis software. It has already been used and cited in several meta-analyses published in high-ranking journals. The software is publicly available at http://www.hrc.es/investigacion/metadisc_en.htm.
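
    As an illustration of the statistical pooling such tools perform, a fixed-effect (inverse-variance) pooling of sensitivities on the logit scale might look as follows. This is a generic sketch with hypothetical study counts, not Meta-DiSc's validated implementation:

```python
import math

def pool_sensitivity(tp_fn_pairs):
    """Fixed-effect (inverse-variance) pooling of sensitivities on the
    logit scale, with a 0.5 continuity correction per study."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        p = (tp + 0.5) / (tp + fn + 1.0)
        logit = math.log(p / (1 - p))
        var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)  # approx. var of logit
        w = 1.0 / var
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))

studies = [(45, 5), (90, 20), (30, 10)]  # (true positives, false negatives)
print(round(pool_sensitivity(studies), 3))  # 0.814
```

    Random-effects pooling adds a between-study variance component to each weight; Meta-DiSc offers both, along with analogous pooling of specificities, likelihood ratios and diagnostic odds ratios.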

  2. Canonical correlation analysis and Wiener-Granger causality tests : Useful tools for the specification of VAR models

    NARCIS (Netherlands)

    Horvath, C.; Leeflang, P.S.H.; Otter, P.W.

    Dynamic multivariate models have become popular in analyzing the behavior of competitive marketing systems because they are capable of incorporating all the relationships in a competitive marketing environment. In this paper we consider VAR models, the most frequently used dynamic multivariate

  3. Parametric Analysis of Flexible Logic Control Model

    Directory of Open Access Journals (Sweden)

    Lihua Fu

    2013-01-01

    Based on a deep analysis of the essential relation between the two input variables of a normal two-dimensional fuzzy controller, we used the universal combinatorial operation model to describe the logic relationship and gave a flexible logic control method to realize effective control of complex systems. In practical control applications, how to determine the general correlation coefficient of the flexible logic control model is a problem for further study. First, the conventional universal combinatorial operation model has been limited to the interval [0,1]. Consequently, this paper studies a universal combinatorial operation model based on the interval [a,b], and some important theorems are given and proved, which provide a foundation for the flexible logic control method. To deal reasonably with the complex relations of every factor in a complex system, a universal combinatorial operation model with unequal weights is put forward. This paper then carries out a parametric analysis of the flexible logic control model and presents some research results, which provide important guidance for determining the values of the general correlation coefficients in practical control applications.

  4. Gait Correlation Analysis Based Human Identification

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Human gait identification aims to identify people by a sequence of walking images. Compared with fingerprint or iris based identification, the most important advantage of gait identification is that it can be done at a distance. In this paper, a silhouette correlation analysis based human identification approach is proposed. By a background subtracting algorithm, the moving silhouette figure can be extracted from the walking image sequence. Every pixel in the silhouette has three dimensions: horizontal axis (x), vertical axis (y), and temporal axis (t). By moving every pixel in the silhouette image along these three dimensions, we can get a new silhouette. The correlation result between the original silhouette and the new one can be used as the raw feature of human gait. The discrete Fourier transform is used to extract features from this correlation result. Then, these features are normalized to minimize the effect of noise. The principal component analysis method is used to reduce the features' dimensions. Experiment based on the CASIA database shows that this method has an encouraging recognition performance.

  5. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

    APA rutting tests were conducted for six kinds of asphalt mixtures under air-dry and immersed conditions. The influences of test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed by using the grey correlation method, and an APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have larger rutting depth ratios of air-dry to immersed conditions, indicating that the modified asphalt mixtures have better antirutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air voids, and immersion conditions on APA rutting depth decrease successively, which means that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good prediction accuracy; the correlation coefficient between the predicted and the measured rutting depths is 96.3%.
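
    The grey correlation (grey relational) degrees used to rank influencing factors can be sketched as follows. The rutting and test-condition series are hypothetical, and the distinguishing coefficient ζ = 0.5 is the conventional default, not a value from the paper:

```python
import numpy as np

def grey_relational_degree(reference, factors, zeta=0.5):
    """Grey relational degree of each factor series against the
    reference series, after min-max normalisation of every series."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    ref = norm(reference)
    degrees = []
    for f in factors:
        diff = np.abs(ref - norm(f))
        coeff = (diff.min() + zeta * diff.max()) / (diff + zeta * diff.max())
        degrees.append(float(coeff.mean()))
    return degrees

# Hypothetical series: rutting depths against temperature and load levels
rut = [2.1, 3.0, 4.2, 5.9]
temp = [40, 50, 60, 70]
load = [0.7, 0.7, 0.8, 0.9]
degrees = grey_relational_degree(rut, [temp, load])
print([round(d, 3) for d in degrees])  # [0.676, 0.756]
```

    The factor with the largest degree tracks the reference series most closely; in the study, ranking the degrees for temperature, load, air voids and immersion identified temperature as the dominant factor.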

  6. The quadratic relationship between difficulty of intelligence test items and their correlations with working memory

    Directory of Open Access Journals (Sweden)

    Tomasz Smoleń

    2015-08-01

    Fluid intelligence (Gf) is a crucial cognitive ability that involves abstract reasoning in order to solve novel problems. Recent research demonstrated that Gf strongly depends on the individual effectiveness of working memory (WM). We investigated a popular claim that if the storage capacity underlay the WM-Gf correlation, then such a correlation should increase with an increasing number of items or rules (load) in a Gf test. As often no such link is observed, on that basis the storage-capacity account is rejected, and alternative accounts of Gf (e.g., related to executive control or processing speed) are proposed. Using both analytical inference and numerical simulations, we demonstrated that the load-dependent change in correlation is primarily a function of the amount of floor/ceiling effect for particular items. Thus, the item-wise WM correlation of a Gf test depends on its overall difficulty, and the difficulty distribution across its items. When the early test items yield huge ceiling, but the late items do not approach floor, that correlation will increase throughout the test. If the early items locate themselves between ceiling and floor, but the late items approach floor, the respective correlation will decrease. For a hallmark Gf test, the Raven test, whose items span from ceiling to floor, the quadratic relationship is expected, and it was shown empirically using a large sample and two types of WMC tasks. In consequence, no changes in correlation due to varying WM/Gf load, or lack of them, can yield an argument for or against any theory of WM/Gf. Moreover, as the mathematical properties of the correlation formula make it relatively immune to ceiling/floor effects for overall moderate correlations, only minor changes (if any) in the WM-Gf correlation should be expected for many psychological tests.

  7. The quadratic relationship between difficulty of intelligence test items and their correlations with working memory.

    Science.gov (United States)

    Smolen, Tomasz; Chuderski, Adam

    2015-01-01

    Fluid intelligence (Gf) is a crucial cognitive ability that involves abstract reasoning in order to solve novel problems. Recent research demonstrated that Gf strongly depends on the individual effectiveness of working memory (WM). We investigated a popular claim that if the storage capacity underlay the WM-Gf correlation, then such a correlation should increase with an increasing number of items or rules (load) in a Gf-test. As often no such link is observed, on that basis the storage-capacity account is rejected, and alternative accounts of Gf (e.g., related to executive control or processing speed) are proposed. Using both analytical inference and numerical simulations, we demonstrated that the load-dependent change in correlation is primarily a function of the amount of floor/ceiling effect for particular items. Thus, the item-wise WM correlation of a Gf-test depends on its overall difficulty, and the difficulty distribution across its items. When the early test items yield huge ceiling, but the late items do not approach floor, that correlation will increase throughout the test. If the early items locate themselves between ceiling and floor, but the late items approach floor, the respective correlation will decrease. For a hallmark Gf-test, the Raven-test, whose items span from ceiling to floor, the quadratic relationship is expected, and it was shown empirically using a large sample and two types of WMC tasks. In consequence, no changes in correlation due to varying WM/Gf load, or lack of them, can yield an argument for or against any theory of WM/Gf. Moreover, as the mathematical properties of the correlation formula make it relatively immune to ceiling/floor effects for overall moderate correlations, only minor changes (if any) in the WM-Gf correlation should be expected for many psychological tests.
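
    The attenuation the authors describe can be reproduced in a small simulation: a binary item's point-biserial correlation with a WM score peaks at medium difficulty and shrinks as the item approaches floor or ceiling. All parameters below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
wm = rng.normal(size=n)                    # working-memory score
gf = 0.7 * wm + 0.3 * rng.normal(size=n)   # correlated latent ability

rs = {}
for difficulty in (-2.5, 0.0, 2.5):        # easy, medium, hard item
    # Item is solved when latent ability plus item noise clears the threshold
    solved = (gf + 0.5 * rng.normal(size=n) > difficulty).astype(float)
    rs[difficulty] = np.corrcoef(wm, solved)[0, 1]
    print(f"difficulty {difficulty:+.1f}: item-WM correlation {rs[difficulty]:.2f}")
```

    The easy and hard items have almost no score variance (ceiling and floor), so their correlations with WM collapse even though the underlying latent relation is constant across items.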

  8. Weak diffusion limits of dynamic conditional correlation models

    DEFF Research Database (Denmark)

    Hafner, Christian M.; Laurent, Sebastien; Violante, Francesco

    The properties of dynamic conditional correlation (DCC) models are still not entirely understood. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a non-degenerate diffusion limit can be obtained. Alternative sets of conditions are considered...

  9. Meta-Analysis of Correlations Among Usability Measures

    DEFF Research Database (Denmark)

    Hornbæk, Kasper Anders Søren; Effie Lai Chong, Law

    2007-01-01

    Understanding the relation between usability measures seems crucial to deepen our conception of usability and to select the right measures for usability studies. We present a meta-analysis of correlations among usability measures calculated from the raw data of 73 studies. Correlations are generally low: effectiveness measures (e.g., errors) and efficiency measures (e.g., time) have a correlation of .247 ± .059 (Pearson's product-moment correlation with 95% confidence interval), efficiency and satisfaction (e.g., preference) one of .196 ± .064, and effectiveness and satisfaction one of .164 ± .062. Changes in task complexity do not influence these correlations, but use of more complex measures attenuates them. Standard questionnaires for measuring satisfaction appear more reliable than homegrown ones. Measures of users' perceptions of phenomena are generally not correlated with objective...
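
    The usual device for pooling Pearson correlations across studies in such a meta-analysis is Fisher's z transform with weights n − 3. A minimal fixed-effect sketch with hypothetical study values, not the 73-study data set of the paper:

```python
import math

def pool_correlations(r_n_pairs):
    """Fixed-effect pooling of Pearson correlations via Fisher's z
    transform; var(z) = 1/(n-3), so weights are n-3."""
    num = den = 0.0
    for r, n in r_n_pairs:
        z = math.atanh(r)  # Fisher z transform
        w = n - 3
        num += w * z
        den += w
    zbar = num / den
    se = 1 / math.sqrt(den)
    lo, hi = zbar - 1.96 * se, zbar + 1.96 * se
    # Back-transform the pooled z and its 95% CI to the r scale
    return math.tanh(zbar), (math.tanh(lo), math.tanh(hi))

studies = [(0.30, 40), (0.22, 60), (0.18, 100)]  # (r, sample size)
r, (lo, hi) = pool_correlations(studies)
print(round(r, 3))  # 0.216
```

    Pooling on the z scale rather than averaging raw correlations keeps the sampling distribution approximately normal and the confidence interval symmetric before back-transformation.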

  10. Projection of Anthropometric Correlation for Virtual Population Modelling

    DEFF Research Database (Denmark)

    Rasmussen, John; Waagepetersen, Rasmus Plenge; Rasmussen, Kasper Pihl

    2018-01-01

    , and therefore the correlations between parameters, are not accessible. This problem is solved by projecting correlation from a data set for which raw data are provided. The method is tested and validated by generation of pseudo females from males in the ANSUR anthropometric dataset. Results show...

  11. TRACG post-test analysis of panthers prototype tests of SBWR passive containment condenser

    International Nuclear Information System (INIS)

    Fitch, J.R.; Billig, P.F.; Abdollahian, D.; Masoni, P.

    1997-01-01

    As part of the validation effort for application of the TRACG code to the Simplified Boiling Water Reactor (SBWR), calculations have been performed for the various test facilities which are part of the SBWR design and technology certification program. These calculations include post-test calculations for tests in the PANTHERS Passive Containment Condenser (PCC) test program. Sixteen tests from the PANTHERS/PCC test matrix were selected for post-test analysis. This set includes three steady-state pure-steam tests, nine steady-state steam-air tests, and four transient tests. The purpose of this paper is to present and discuss the results of the post-test analysis. The authors include a brief description of the PANTHERS/PCC test facility and test matrix, a description of the PANTHERS/PCC post-test TRACG model and the manner in which the various types of tests in the post-test evaluation were simulated, and a presentation of the results of the TRACG simulation

  12. Nanoscale protein diffusion by STED-based pair correlation analysis.

    Directory of Open Access Journals (Sweden)

    Paolo Bianchini

    Full Text Available We describe for the first time the combination between cross-pair correlation function analysis (pair correlation analysis or pCF and stimulated emission depletion (STED to obtain diffusion maps at spatial resolution below the optical diffraction limit (super-resolution. Our approach was tested in systems characterized by high and low signal to noise ratio, i.e. Capsid Like Particles (CLPs bearing several (>100 active fluorescent proteins and monomeric fluorescent proteins transiently expressed in living Chinese Hamster Ovary cells, respectively. The latter system represents the usual condition encountered in living cell studies on fluorescent protein chimeras. Spatial resolution of STED-pCF was found to be about 110 nm, with a more than twofold improvement over conventional confocal acquisition. We successfully applied our method to highlight how the proximity to nuclear envelope affects the mobility features of proteins actively imported into the nucleus in living cells. Remarkably, STED-pCF unveiled the existence of local barriers to diffusion as well as the presence of a slow component at distances up to 500-700 nm from either sides of nuclear envelope. The mobility of this component is similar to that previously described for transport complexes. Remarkably, all these features were invisible in conventional confocal mode.

  13. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Generalized Structured Component Analysis (GSCA) is an alternative method in structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with free software called GeSCA, but GeSCA provides no multigroup moderation test to compare effects between groups. In this research we propose using the t test from PLS for testing multigroup moderation in GSCA. The t test only requires the sample size, estimated path coefficient, and standard error of each group, which are already available in the GeSCA output, and the formula is simple, so the analysis does not take the user long.
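
    A common parametric form of the PLS multigroup t test referred to above pools the two groups' standard errors and uses df = n1 + n2 − 2. The sketch below, with hypothetical coefficients, follows the widely used Keil-style formula and may differ in detail from the authors' implementation; the p-value uses a normal approximation, which is adequate for large df:

```python
import math
from statistics import NormalDist

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """Parametric multigroup test on a path coefficient:
    pooled standard error, df = n1 + n2 - 2, two-sided p-value
    via the normal approximation to the t distribution."""
    df = n1 + n2 - 2
    pooled = math.sqrt(((n1 - 1)**2 * se1**2 + (n2 - 1)**2 * se2**2) / df)
    t = (b1 - b2) / (pooled * math.sqrt(1 / n1 + 1 / n2))
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, df, p

# Hypothetical groups: path coefficients, standard errors, sample sizes
t, df, p = multigroup_t(0.45, 0.08, 120, 0.20, 0.10, 100)
print(round(t, 2), df, round(p, 3))  # 1.98 218 0.047
```

    The inputs are exactly the three quantities per group that the GeSCA output provides, which is the point of the note: no raw data are needed to run the moderation test.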

  14. Column Selection for Biomedical Analysis Supported by Column Classification Based on Four Test Parameters.

    Science.gov (United States)

    Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz

    2016-01-21

    This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL) with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising the similarity of those parameters to a selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column and then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that stationary phase classes closely related according to the KUL method yielded comparable separation of the target substances. Therefore, the column ranking system based on the FKUL value could be considered supportive in the choice of an appropriate column for biomedical analysis.

  15. Correlation analysis in chemistry: recent advances

    National Research Council Canada - National Science Library

    Shorter, John; Chapman, Norman Bellamy

    1978-01-01

    ..., and applications of LFER to polycyclic arenes, heterocyclic compounds, and olefinic systems. Of particular interest is the extensive critical compilation of substituent constants and the numerous applications of correlation analysis to spectroscopy...

  16. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    Science.gov (United States)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    A drone aircraft equipped with an active flutter suppression system is considered with emphasis on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are given for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. The mathematical models are included and existing analytical techniques are described as well as an alternative analytical technique for obtaining closed-loop results.

  17. CORRELATION OF ULTRASOUND (USG) FINDINGS WITH SEROLOGICAL TESTS IN DENGUE FEVER

    Directory of Open Access Journals (Sweden)

    Dayanand

    2016-02-01

    Full Text Available INTRODUCTION Dengue is an endemic and epidemic disease of the tropical and subtropical regions. Between September and October 2012, there was an established outbreak of dengue in Hoskote, near Bangalore. Dengue results in serositis, which can be imaged by ultrasonography. OBJECTIVE To correlate the USG findings with the serological tests in paediatric and adult patients. MATERIALS AND METHODS 110 patients with clinical suspicion of dengue fever during the above period underwent serological tests (NS1, IgM and IgG) and were evaluated with USG of the abdomen and thorax. The USG findings were correlated with the serological tests. RESULTS 67 patients were seropositive and 43 were seronegative. The USG findings in seropositive paediatric patients (n=32) and adult patients (n=35), respectively, were: gall bladder (GB) wall edema, 27 & 31; hepatomegaly, 12 & 14; ascites, 16 & 12; splenomegaly, 15 & 9; right pleural effusion, 14 & 13; left and bilateral pleural effusion, 7 & 5. CONCLUSION In our study GB wall edema significantly correlated with seropositivity (p=0.032). Thus ultrasound is an efficient screening tool in a dengue outbreak.

  18. Spreading of correlations in the Falicov-Kimball model

    Science.gov (United States)

    Herrmann, Andreas J.; Antipov, Andrey E.; Werner, Philipp

    2018-04-01

    We study dynamical properties of the one- and two-dimensional Falicov-Kimball model using lattice Monte Carlo simulations. In particular, we calculate the spreading of charge correlations in the equilibrium model and after an interaction quench. The results show a reduction of the light-cone velocity with interaction strength at low temperature, while the phase velocity increases. At higher temperature, the initial spreading is determined by the Fermi velocity of the noninteracting system and the maximum range of the correlations decreases with increasing interaction strength. Charge order correlations in the disorder potential enhance the range of the correlations. We also use the numerically exact lattice Monte Carlo results to benchmark the accuracy of equilibrium and nonequilibrium dynamical cluster approximation calculations. It is shown that the bias introduced by the mapping to a periodized cluster is substantial, and that from a numerical point of view, it is more efficient to simulate the lattice model directly.

  19. Models and correlations of the DEBRIS Late-Phase Melt Progression Model

    International Nuclear Information System (INIS)

    Schmidt, R.C.; Gasser, R.D.

    1997-09-01

    The DEBRIS Late Phase Melt Progression Model is an assembly of models, embodied in a computer code, which is designed to treat late-phase melt progression in dry rubble (or debris) regions that can form as a consequence of a severe core uncovery accident in a commercial light water nuclear reactor. The approach is fully two-dimensional, and incorporates a porous medium modeling framework together with conservation and constitutive relationships to simulate the time-dependent evolution of such regions as various physical processes act upon the materials. The objective of the code is to accurately model these processes so that the late-phase melt progression that would occur in different hypothetical severe nuclear reactor accidents can be better understood and characterized. In this report the models and correlations incorporated and used within the current version of DEBRIS are described. These include the global conservation equations solved, heat transfer and fission heating models, melting and refreezing models (including material interactions), liquid and solid relocation models, gas flow and pressure field models, and the temperature and compositionally dependent material properties employed. The specific models described here have been used in the experiment design analysis of the Phebus FPT-4 debris-bed fission-product release experiment. An earlier DEBRIS code version was used to analyze the MP-1 and MP-2 late-phase melt progression experiments conducted at Sandia National Laboratories for the US Nuclear Regulatory Commission.

  20. On the possibility of extending the tests of quantum mechanical correlations

    International Nuclear Information System (INIS)

    Bergia, S.

    1984-01-01

    Experimental tests of quantum mechanical correlations in connection with Bell's inequality have generally considered decays in sub-systems characterized by two-valued observables. The author analyses the possibility of extending these tests to a much wider class of cases. (Auth.)

  1. Effects of Perfectly Correlated and Anti-Correlated Noise in a Logistic Growth Model

    International Nuclear Information System (INIS)

    Zhang Li; Cao Li

    2011-01-01

    The logistic growth model with correlated additive and multiplicative Gaussian white noise is used to analyze a tumor cell population. The effects of perfectly correlated and perfectly anti-correlated noise on the stationary properties of the tumor cell population are studied. Because in both cases the diffusion coefficient has a zero point on the real axis, some special features of the system arise. It is found that in both cases an increase of the multiplicative noise intensity causes tumor cell extinction. In the perfectly anti-correlated case, the stationary probability distribution as a function of tumor cell population exhibits two extrema. (general)
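A Langevin equation of this type can be explored numerically with an Euler-Maruyama integration. The sketch below assumes a generic logistic drift r·x·(1 − x/K) with multiplicative noise of intensity D and additive noise of intensity Q at correlation `lam`; the functional forms, parameter names, and values are illustrative assumptions, not the paper's exact model:

```python
import math
import random

def simulate(x0=0.5, K=1.0, r=1.0, D=0.05, Q=0.01, lam=1.0,
             dt=1e-3, steps=5000, seed=1):
    """Euler-Maruyama integration of a logistic Langevin equation
    dx = r*x*(1 - x/K) dt - x dW_D + dW_Q, where the multiplicative
    (intensity D) and additive (intensity Q) Gaussian white noises
    have correlation lam (+1 perfectly correlated, -1 anti-correlated)."""
    random.seed(seed)
    x = x0
    for _ in range(steps):
        g1 = random.gauss(0.0, 1.0)
        # Build a second Gaussian with correlation lam to the first:
        g2 = lam * g1 + math.sqrt(max(0.0, 1.0 - lam * lam)) * random.gauss(0.0, 1.0)
        x += r * x * (1.0 - x / K) * dt           # deterministic logistic drift
        x += -x * math.sqrt(2.0 * D * dt) * g1    # multiplicative noise term
        x += math.sqrt(2.0 * Q * dt) * g2         # additive noise term
        x = max(x, 0.0)                           # population stays non-negative
    return x

print(simulate(lam=1.0) >= 0.0)  # True
```

Histogramming many such end states over long runs approximates the stationary probability distribution whose extrema the paper discusses.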

  2. Bose-Einstein correlation in Landau's model

    International Nuclear Information System (INIS)

    Hama, Y.; Padula, S.S.

    1986-01-01

    Bose-Einstein correlation is studied by taking as the source an expanding fluid given by Landau's model, where each space-time point is considered as an independent and chaotic emitting center with Planck's spectral distribution. As expected, the correlation depends on the relative angular positions as well as on the overall localization of the measuring system, and it turns out that the average dimension of the source increases with the multiplicity N_ch.

  3. Mirroring the self: testing neurophysiological correlates of disturbed self-experience in schizophrenia spectrum.

    Science.gov (United States)

    Sestito, Mariateresa; Raballo, Andrea; Umiltà, Maria Alessandra; Leuci, Emanuela; Tonna, Matteo; Fortunati, Renata; De Paola, Giancarlo; Amore, Mario; Maggini, Carlo; Gallese, Vittorio

    2015-01-01

    Self-disorders (SDs) have been described as a core schizophrenia spectrum vulnerability phenotype, both in classic and contemporary psychopathological literature. However, such a core phenotype has not yet been investigated adopting a trans-domain approach that combines the phenomenological and the neurophysiological levels of analysis. The aim of this study is to investigate the relation between SDs and subtle, schizophrenia-specific impairments of emotional resonance that are supposed to reflect abnormalities in the mirror neurons mechanism. Specifically, we tested whether electromyographic response to emotional stimuli (i.e. a proxy for subtle changes in facial mimicry and related motor resonance mechanisms) would predict the occurrence of anomalous subjective experiences (i.e. SDs). Eighteen schizophrenia spectrum (SzSp) patients underwent a comprehensive psychopathological examination and were contextually tested with a multimodal paradigm, recording facial electromyographic activity of muscles in response to positive and negative emotional stimuli. Experiential anomalies were explored with the Bonn Scale for the Assessment of Basic Symptoms (BSABS) and then condensed into rational subscales mapping SzSp anomalous self-experiences. SzSp patients showed an imbalance in emotional motor resonance with a selective bias toward negative stimuli, as well as a multisensory integration impairment. Multiple regression analysis showed that electromyographic facial reactions in response to negative stimuli presented in auditory modality specifically and strongly correlated with SD subscore. The study confirms the potential of SDs as target phenotype for neurobiological research and encourages research into disturbed motor/emotional resonance as possible body-level correlate of disturbed subjective experiences in SzSp.

  4. Correlation development between indentation parameters and uniaxial compressive strength for Colombian sandstones

    International Nuclear Information System (INIS)

    Mateus, Jefferson; Saavedra, Nestor Fernando; Calderon Carrillo, Zuly; Mateus, Darwin

    2007-01-01

    A new way to characterize the strength of perforated formations has been implemented using the indentation test. This test can be performed on irregular cuttings mounted in acrylic resin to form a disc, and consists of applying load to each sample by means of a flat-ended indenter. A graph of applied load vs. indenter penetration is developed, and the test parameters, denominated Indentation Modulus (IM) and Critical Transition Force (CTF), are obtained (Ringstad et al., 1998). Based on the success of previous studies, we developed correlations between indentation and mechanical properties for some Colombian sandstones. These correlations were obtained using a set of 248 indentation tests and separate compression tests on parallel sandstone samples from the same depth. The analysis includes the Barco, Mirador, and Tambor Formations. For the IM-UCS and CTF-UCS correlations, the correlation coefficients are 0.81 and 0.70, respectively. The use of these correlations and the indentation test is helpful for in-situ calibration of geomechanical models, since the indentation test can be performed in real time, reducing the costs and time associated with delayed conventional characterization.
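Correlations of this kind are ordinary least-squares fits of UCS against an indentation parameter, summarized by a correlation coefficient. A self-contained sketch of that computation; the IM-UCS pairs below are invented for illustration (the paper's 248-test data set is not reproduced here):

```python
def pearson_and_fit(x, y):
    """Least-squares line y = a*x + b and Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx           # slope
    b = my - a * mx         # intercept
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Illustrative (not measured) indentation moduli in GPa and UCS in MPa:
im = [2.1, 3.0, 3.8, 4.4, 5.2, 6.1]
ucs = [25.0, 34.0, 41.0, 43.0, 55.0, 60.0]
a, b, r = pearson_and_fit(im, ucs)
print(round(r, 2))  # 0.99
```

The same routine applied to CTF-UCS pairs gives the second correlation the abstract reports.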

  5. Analysis of light-particle correlations selected by neutron calorimetry in the reaction 208Pb + 93Nb at 29 MeV/u

    International Nuclear Information System (INIS)

    Ghisalberti, C.

    1994-01-01

    This work deals with the analysis of light-particle correlations selected by neutron calorimetry in the reaction 208Pb + 93Nb at 29 MeV/u. The first part describes the interest of correlation functions, the proton-proton correlation function study, the classical model developed for describing the correlations of two light particles emitted by a nucleus in thermal equilibrium, the quantum model, and some notions about exclusive sources and measurements. The second part is a description of the experiment 208Pb + 93Nb at 29 MeV/u. The analysis of the experimental data and of the experimental correlation functions is given in the third and fourth parts, respectively. (O.L.). 38 refs., 82 figs., 11 tabs

  6. A simple model for cell type recognition using 2D-correlation analysis of FTIR images from breast cancer tissue

    Science.gov (United States)

    Ali, Mohamed H.; Rakib, Fazle; Al-Saad, Khalid; Al-Saady, Rafif; Lyng, Fiona M.; Goormaghtigh, Erik

    2018-07-01

    Breast cancer is the second most common cancer after lung cancer. So far, in clinical practice, most cancer parameters originating from histopathology rely on the visualization by a pathologist of microscopic structures observed in stained tissue sections, including immunohistochemistry markers. Fourier transform infrared (FTIR) spectroscopy provides a biochemical fingerprint of a biopsy sample and, together with advanced data analysis techniques, can accurately classify cell types. Yet, one of the challenges when dealing with FTIR imaging is the slow recording of the data: a 1 cm² tissue section requires several hours of image recording. We show in the present paper that 2D covariance analysis singles out only a few wavenumbers where both variance and covariance are large. Simple models could be built using 4 wavenumbers to identify the 4 main cell types present in breast cancer tissue sections. Decision trees provide particularly simple models for discriminating between the 4 cell types. The robustness of these simple decision-tree models was challenged with FTIR spectral data obtained under different recording conditions. One test set was recorded by transflection on tissue sections in the presence of paraffin, while the training set was obtained on dewaxed tissue sections by transmission. Furthermore, the test set was collected with a different brand of FTIR microscope and a different pixel size. Despite the different recording conditions, separating extracellular matrix (ECM) from carcinoma spectra was 100% successful, underlining the robustness of this univariate model and the utility of covariance analysis for revealing efficient wavenumbers. We suggest that 2D covariance maps using the full spectral range could be most useful for selecting the interesting wavenumbers and achieving very fast data acquisition on quantum cascade laser infrared imaging microscopes.
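A decision tree over a handful of wavenumbers can be written out directly once thresholds are chosen. The sketch below is a hypothetical example of such a 4-wavenumber tree; the wavenumbers, thresholds, and class names other than ECM and carcinoma are invented for illustration, not taken from the paper (which derives its splits from 2D covariance maps):

```python
def classify_cell_type(spectrum):
    """Toy decision tree over absorbances at 4 wavenumbers (cm^-1).

    `spectrum` maps wavenumber -> absorbance; missing bands read as 0.
    All cut points below are hypothetical.
    """
    a1654 = spectrum.get(1654, 0.0)  # amide I region
    a1240 = spectrum.get(1240, 0.0)
    a1084 = spectrum.get(1084, 0.0)
    a1338 = spectrum.get(1338, 0.0)
    if a1338 > 0.30:                 # collagen-like band -> extracellular matrix
        return "ECM"
    if a1084 > 0.25:                 # nucleic-acid-like band
        return "lymphocytes"
    return "carcinoma" if a1654 > a1240 else "stroma"

print(classify_cell_type({1338: 0.45}))  # ECM
```

Only a few comparisons are evaluated per pixel, which is why such trees pair well with fast few-wavenumber acquisition.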

  7. Structured sparse canonical correlation analysis for brain imaging genetics: an improved GraphNet method.

    Science.gov (United States)

    Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2016-05-15

    Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models use either group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits their capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs better than or equally to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Prediction of East African Seasonal Rainfall Using Simplex Canonical Correlation Analysis.

    Science.gov (United States)

    Ntale, Henry K.; Yew Gan, Thian; Mwale, Davison

    2003-06-01

    A linear statistical model, canonical correlation analysis (CCA), was driven by the Nelder-Mead simplex optimization algorithm (called CCA-NMS) to predict the standardized seasonal rainfall totals of East Africa at 3-month lead time using SLP and SST anomaly fields of the Indian and Atlantic Oceans combined together by 24 simplex optimized weights, and then `reduced' by the principal component analysis. Applying the optimized weights to the predictor fields produced better March-April-May (MAM) and September-October-November (SON) seasonal rain forecasts than a direct application of the same, unweighted predictor fields to CCA at both calibration and validation stages. Northeastern Tanzania and south-central Kenya had the best SON prediction results with both validation correlation and Hanssen-Kuipers skill scores exceeding +0.3. The MAM season was better predicted in the western parts of East Africa. The CCA correlation maps showed that low SON rainfall in East Africa is associated with cold SSTs off the Somali coast and the Benguela (Angola) coast, and low MAM rainfall is associated with a buildup of low SSTs in the Indian Ocean adjacent to East Africa and the Gulf of Guinea.

  9. Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation

    Science.gov (United States)

    Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.

    A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on NCEP's ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is the degree of generality introduced into the new algorithm, namely for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to the decorrelation length will be examined. The prior error covariances are modelled using the compactly supported, space-limited correlations developed at NASA DAO.

  10. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...
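For the 2-AFC protocol the Thurstonian link between proportion correct and the underlying sensory difference has a closed form, d' = √2 · Φ⁻¹(pc); the triangle and duo-trio decision rules need numerical psychometric functions and are omitted from this sketch. A minimal standard-library example:

```python
from statistics import NormalDist

def dprime_2afc(pc):
    """Thurstonian d' for the 2-AFC protocol: d' = sqrt(2) * z(pc),
    with z the standard normal quantile function."""
    if not 0.5 < pc < 1.0:
        raise ValueError("proportion correct must be in (0.5, 1) for 2-AFC")
    return 2.0 ** 0.5 * NormalDist().inv_cdf(pc)

print(round(dprime_2afc(0.75), 3))  # 0.954
```

In the generalized-linear-model view, this inverse link is exactly what turns the observed proportion of correct responses into an estimate of δ on the sensory scale.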

  11. Visuospatial Aptitude Testing Differentially Predicts Simulated Surgical Skill.

    Science.gov (United States)

    Hinchcliff, Emily; Green, Isabel; Destephano, Christopher; Cox, Mary; Smink, Douglas; Kumar, Amanika; Hokenstad, Erik; Bengtson, Joan; Cohen, Sarah

    2018-02-05

    To determine if visuospatial perception (VSP) testing is correlated with simulated or intraoperative surgical performance as rated by the Accreditation Council for Graduate Medical Education (ACGME) milestones. Classification II-2. SETTING: Two academic training institutions. PARTICIPANTS: 41 residents, including 19 Brigham and Women's Hospital and 22 Mayo Clinic residents from three different specialties (OBGYN, general surgery, urology). Participants underwent three different tests: visuospatial perception testing (VSP), Fundamentals of Laparoscopic Surgery (FLS®) peg transfer, and DaVinci robotic simulation peg transfer. Surgical grading from the ACGME milestones tool was obtained for each participant. Demographic and subject background information was also collected, including specialty, year of training, prior experience with simulated skills, and surgical interest. Standard statistical analysis using Student's t tests was performed, and correlations were determined using adjusted linear regression models. In univariate analysis, the BWH and Mayo training programs differed in both times and overall scores for both FLS® peg transfer and DaVinci robotic simulation peg transfer (p<0.05 for all). Additionally, type of residency training impacted time and overall score on robotic peg transfer. Familiarity with tasks correlated with higher score and faster task completion (p=0.05 for all except VSP score). There was no difference in VSP scores by program, specialty, or year of training. In adjusted linear regression modeling, VSP testing was correlated only with robotic peg transfer skills (average time p=0.006, overall score p=0.001). Milestones did not correlate with either VSP or surgical simulation testing. VSP score was correlated with robotic simulation skills but not with FLS skills or ACGME milestones. This suggests that the ability of VSP score to predict competence differs between tasks. Therefore, further investigation is required into aptitude testing, especially prior

  12. Status of the Correlation Process of the V-HAB Simulation with Ground Tests and ISS Telemetry Data

    Science.gov (United States)

    Ploetner, P.; Roth, C.; Zhukov, A.; Czupalla, M.; Anderson, M.; Ewert, M.

    2013-01-01

    The Virtual Habitat (V-HAB) is a dynamic Life Support System (LSS) simulation created for the investigation of future human spaceflight missions. It provides the capability to optimize LSSs during early design phases. The focal point of the paper is the correlation and validation of V-HAB against ground test and flight data. In order to utilize V-HAB to design an Environmental Control and Life Support System (ECLSS), it is important to know the accuracy, strengths, and weaknesses of its simulations. Therefore, simulations of real systems are essential. The modeling of the International Space Station (ISS) ECLSS, in terms of single technologies as well as an integrated system, and its correlation against ground and flight test data are described. The results of the simulations make it possible to validate the approach taken by V-HAB.

  13. Model-Driven Test Generation of Distributed Systems

    Science.gov (United States)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  14. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
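The likelihood-based tests described above score a forecast by the probability it assigns to the observed catalog. A minimal sketch assuming independent Poisson-distributed counts in each bin (the rates and observed counts below are illustrative, not RELM data):

```python
import math

def log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts under forecast
    rates: sum over bins of (-lambda + n*log(lambda) - log(n!))."""
    return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

forecast_a = [0.5, 1.2, 0.1, 2.0]   # expected event counts per space-magnitude bin
forecast_b = [1.0, 1.0, 1.0, 1.0]   # a flat competing forecast
observed   = [0,   2,   0,   2]
# Pairwise comparison: the forecast with the higher joint likelihood wins.
print(log_likelihood(forecast_a, observed) > log_likelihood(forecast_b, observed))  # True
```

A consistency test then compares the observed log-likelihood against its distribution under catalogs simulated from the forecast itself.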

  15. A Combined Reliability Model of VSC-HVDC Connected Offshore Wind Farms Considering Wind Speed Correlation

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    and WTGs outage. The wind speed correlation between different WFs is included in the two-dimensional multistate WF model by using an improved k-means clustering method. Then, the entire system with two WFs and a three-terminal VSC-HVDC system is modeled as a multi-state generation unit. The proposed model is applied to the Roy Billinton test system (RBTS) for adequacy studies. Both the probability and frequency indices are calculated. The effectiveness and accuracy of the combined model are validated by comparing results with the sequential Monte Carlo simulation (MCS) method. The effects of the outage of the VSC-HVDC system and wind speed correlation on the system reliability were analyzed. Sensitivity analyses were conducted to investigate the impact of repair time of the offshore VSC-HVDC system on system reliability.

  16. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Yixiong Feng

    2017-03-01

    Full Text Available The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related expert evaluation information was prepared in interval numbers; (2) An improved product failure analysis considering the uncertain carbon emissions of the subsystem was performed to obtain the subsystem weight taking the carbon emissions into consideration. The interval grey correlation analysis was conducted to obtain the subsystem weight taking the uncertain correlations inside the product into consideration. Using the above two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is available; (3) The interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability and optimal design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.

  17. Correlation analysis of respiratory signals by using parallel coordinate plots.

    Science.gov (United States)

    Saatci, Esra

    2018-01-01

    The understanding of the bonds and relationships between respiratory signals, i.e. the airflow, the mouth pressure, the relative temperature and the relative humidity during breathing, may support improvements in measurement methods for respiratory mechanics and sensor designs, or the exploration of several possible applications in the analysis of respiratory disorders. Therefore, the main objective of this study was to propose a new combination of methods in order to determine the relationship between respiratory signals as multidimensional data. In order to reveal the coupling between the processes, two very different methods were used: the well-known statistical correlation analysis (i.e. Pearson's correlation and the cross-correlation coefficient) and parallel coordinate plots (PCPs). Curve bundling with the number of intersections for the correlation analysis, the Least Mean Square Time Delay Estimator (LMS-TDE) for point delay detection, and visual metrics for the recognition of visual structures were proposed and utilized in PCPs. The number of intersections increased when the correlation coefficient changed from high positive to high negative correlation between the respiratory signals, especially if the whole breath was processed. LMS-TDE coefficients plotted in a PCP indicated point delay results that matched well the findings of the correlation analysis. Visual inspection of PCPs by visual metrics showed ranges, dispersions, entropy comparisons, and linear and sinusoidal-like relationships between the respiratory signals. It is demonstrated that the basic correlation analysis together with parallel coordinate plots perceptually motivates the visual metrics in the display and thus can be considered an aid to user analysis by providing meaningful views of the data. Copyright © 2017 Elsevier B.V. All rights reserved.
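The two statistical ingredients, Pearson correlation and a lag search over shifted copies of one signal, are easy to sketch without any signal-processing library. The `best_lag` helper below is an illustrative stand-in for a time-delay estimator, not the paper's LMS-TDE algorithm:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def best_lag(x, y, max_lag):
    """Lag (in samples) at which y best aligns with x; positive means
    y is delayed relative to x."""
    def corr_at(k):
        if k >= 0:
            return pearson(x[:len(x) - k], y[k:])
        return pearson(x[-k:], y[:len(y) + k])
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Two sinusoids, the second delayed by 5 samples:
x = [math.sin(0.2 * i) for i in range(100)]
y = [math.sin(0.2 * (i - 5)) for i in range(100)]
print(best_lag(x, y, 10))  # 5
```

Plotting each signal's samples plus the estimated delay as parallel axes is then what turns these numbers into the PCP views the study describes.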

  18. Correlation between leptin receptor gene polymorphism and type 2 diabetes in Chinese population: a meta-analysis

    Directory of Open Access Journals (Sweden)

    Miao HE

    2015-11-01

    Full Text Available Objective To evaluate the correlation between leptin receptor gene (LEPR) polymorphism and type 2 diabetes (T2DM) in the Chinese population. Methods The literature concerning the correlation between LEPR polymorphism and T2DM in the Chinese population was searched in Chinese databases (CNKI, VIP, WanFang, CBM) with "leptin receptor gene" and "type 2 diabetes" as keywords, and in English databases (PubMed, Web of Knowledge, EBSCO) with "leptin receptor gene", "LEPR", "OBR", "OB-R", "type 2 diabetes" and "T2DM" as keywords. The relevant articles were searched up to September 20, 2014. Then, meta-analysis was performed using RevMan 5.1 and Stata 11.0 software. The Newcastle-Ottawa Scale was applied to assess the methodological quality of the included articles on 3 aspects, namely, selection of participants, comparability, and outcome assessment. Results Seventeen case-control studies involving 12 533 cases of T2DM and 3348 controls were included in the meta-analysis. A significant correlation was found between the rs1137100 polymorphism in the LEPR gene and T2DM (for the recessive genetic model: OR=0.67, 95%CI 0.52-0.88, P=0.00; for the allele contrast genetic model: OR=1.46, 95%CI 1.15-1.85, P=0.00). A strong correlation was also found between the rs1137101 polymorphism and T2DM (for the additive genetic model: OR=1.54, 95%CI 1.20-1.98, P=0.00; for the allele contrast genetic model: OR=1.15, 95%CI 1.01-1.30, P=0.00). In addition, the rs1805096 polymorphism was closely correlated with T2DM (for the dominant genetic model: OR=1.32, 95%CI 1.07-1.62, P=0.00; for the recessive genetic model: OR=1.30, 95%CI 1.09-1.54, P=0.00; for the allele contrast genetic model: OR=0.67, 95%CI 0.59-0.75, P=0.00). Conclusions There is a significant correlation between rs1137100, rs1805096 of the LEPR gene and T2DM in the Chinese population under the allele contrast genetic model as well as the recessive genetic model. Rs1137101 of the LEPR gene is closely correlated with T2DM in the Chinese population under the additive genetic model. For dominant
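Pooled odds ratios of the kind reported above come from inverse-variance (fixed-effect) pooling of per-study log odds ratios, with each standard error recovered from the study's 95% CI. A sketch with invented study values (not the paper's data, which would need the per-study tables):

```python
import math

def pooled_or_fixed(ors, cis):
    """Inverse-variance fixed-effect pooling of odds ratios.

    Each study contributes log(OR) weighted by 1/se^2, with the SE
    recovered from its 95% CI: se = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    wsum = 0.0
    wlog = 0.0
    for or_, (lo, hi) in zip(ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        wsum += w
        wlog += w * math.log(or_)
    return math.exp(wlog / wsum)

# Illustrative per-study ORs and 95% CIs:
print(round(pooled_or_fixed([1.4, 1.2, 1.6],
                            [(1.1, 1.8), (0.9, 1.6), (1.2, 2.1)]), 2))  # 1.39
```

Random-effects pooling (e.g. DerSimonian-Laird) adds a between-study variance to each weight but follows the same structure.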

  19. Structural design and analysis of test mass module for DECIGO Pathfinder

    International Nuclear Information System (INIS)

    Wakabayashi, Y; Ejiri, Y; Suzuki, R; Sugamoto, A; Obuchi, Y; Okada, N; Torii, Y; Ueda, A; Kawamura, S; Araya, A; Ando, M; Sato, S

    2010-01-01

    The Deci-hertz Interferometer Gravitational-Wave Observatory (DECIGO) is a project aimed at the future detection of deci-hertz gravitational waves in space. DECIGO Pathfinder (DPF) is a precursor mission to test the key technologies with a single spacecraft. Our work in this article was to examine the strength of the DPF test mass module to ensure that it is sufficiently robust for launch on a launch vehicle. We designed the test mass module and examined its structural strength by quasi-static acceleration analysis and modal analysis using FEA (finite element analysis). The results of each analysis fulfilled all requirements. We are therefore confident that, with the current design, the DPF test mass module will withstand the quasi-static acceleration and coupling with the vibration of the launch vehicle during launch. For more detail, further analyses, including response analysis and thermal analysis, are recommended. In addition, it will be necessary to lighten the model in the next step.

  20. Improvement of auditing technology of safety analysis through thermal-hydraulic separate effect tests

    Energy Technology Data Exchange (ETDEWEB)

    No, Hee Cheon; Park, Hyun Sik; Kim, Hyougn Tae; Moon, Young Min; Choi, Sung Won; Heo, Sun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1999-04-15

    The loss-of-RHR accident during midloop operation has become important as a result of probabilistic safety analyses. The condensation models in RELAP5/MOD3 are not adequate for analyzing midloop operation. To audit and improve the models in RELAP5/MOD3.2, several separate effect tests have been performed. Twenty-nine sets of reflux condensation data were obtained, and a correlation was developed from these heat transfer coefficient data. In the experiment on direct contact condensation in the hot leg, the apparatus setup is finished and some experimental data have been obtained. A non-iterative model was used in place of the corresponding model in RELAP5/MOD3.2 for the reflux condensation results, and it performs better than the present model. The results for direct contact condensation in the hot leg are similar to those of the present model. A study of CCF and liquid entrainment in the surge line and pressurizer has been selected as the third separate effect test and is in progress.

  1. BNL NONLINEAR PRE TEST SEISMIC ANALYSIS FOR THE NUPEC ULTIMATE STRENGTH PIPING TEST PROGRAM

    International Nuclear Information System (INIS)

    DEGRASSI, G.; HOFMAYER, C.; MURPHY, C.; SUZUKI, K.; NAMITA, Y.

    2003-01-01

    The Nuclear Power Engineering Corporation (NUPEC) of Japan has been conducting a multi-year research program to investigate the behavior of nuclear power plant piping systems under large seismic loads. The objectives of the program are: to develop a better understanding of the elasto-plastic response and ultimate strength of nuclear piping; to ascertain the seismic safety margin of current piping design codes; and to assess new piping code allowable stress rules. Under this program, NUPEC has performed a large-scale seismic proving test of a representative nuclear power plant piping system. In support of the proving test, a series of materials tests, static and dynamic piping component tests, and seismic tests of simplified piping systems have also been performed. As part of collaborative efforts between the United States and Japan on seismic issues, the US Nuclear Regulatory Commission (USNRC) and its contractor, the Brookhaven National Laboratory (BNL), are participating in this research program by performing pre-test and post-test analyses, and by evaluating the significance of the program results with regard to safety margins. This paper describes BNL's pre-test analysis to predict the elasto-plastic response for one of NUPEC's simplified piping system seismic tests. The capability to simulate the anticipated ratcheting response of the system was of particular interest. Analyses were performed using classical bilinear and multilinear kinematic hardening models as well as a nonlinear kinematic hardening model. Comparisons of analysis results for each plasticity model against test results for a static cycling elbow component test and for a simplified piping system seismic test are presented in the paper.
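The classical bilinear kinematic hardening model used in such analyses reduces, in one dimension, to a return-mapping update on a backstress. A sketch with illustrative material constants (not NUPEC's piping values):

```python
def bilinear_kinematic(strains, E=200e3, H=20e3, sy=250.0):
    """1-D return mapping for bilinear kinematic hardening.
    E: elastic modulus (MPa), H: hardening modulus, sy: yield stress."""
    alpha = eps_p = 0.0                       # backstress, plastic strain
    history = []
    for eps in strains:
        trial = E * (eps - eps_p)             # elastic trial stress
        f = abs(trial - alpha) - sy           # yield function
        if f > 0:                             # plastic step: return mapping
            dgamma = f / (E + H)
            sign = 1.0 if trial > alpha else -1.0
            eps_p += dgamma * sign
            alpha += H * dgamma * sign        # backstress translates
        history.append(E * (eps - eps_p))
    return history

# Load to 1% strain, then reverse to zero: the early re-yielding on
# reversal (Bauschinger effect) is what drives ratcheting under cycling.
stresses = bilinear_kinematic([0.0, 0.005, 0.01, 0.0])
```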

  2. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data converter design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC 2011 workshop, held in Orvieto, Italy.

  3. A Hidden Markov Model Representing the Spatial and Temporal Correlation of Multiple Wind Farms

    DEFF Research Database (Denmark)

    Fang, Jiakun; Su, Chi; Hu, Weihao

    2015-01-01

    Accommodating increasing amounts of wind energy, with its stochastic nature, has become a major issue for power system reliability. This paper proposes a methodology to characterize the spatiotemporal correlation of multiple wind farms. First, a hierarchical clustering method based on self-organizing maps is adopted to categorize the similar output patterns of several wind farms into joint states. A hidden Markov model (HMM) is then designed to describe the temporal correlations among these joint states. Unlike the conventional Markov chain model, the accumulated wind power is taken into consideration. The proposed statistical modeling framework is compatible with sequential power system reliability analysis. A case study on optimal sizing and location of fast-response regulation sources is presented.
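The temporal layer of such a model rests on estimating transition probabilities between the clustered joint states. A minimal sketch of the visible-chain part (the state sequence is invented; a full HMM adds emission distributions on top of this):

```python
from collections import Counter

def transition_matrix(states, n_states):
    """Maximum-likelihood estimate of a Markov transition matrix from an
    observed sequence of joint wind-power states (0..n_states-1)."""
    counts = Counter(zip(states, states[1:]))   # count i -> j transitions
    P = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        row_total = sum(counts[(i, j)] for j in range(n_states))
        if row_total:
            for j in range(n_states):
                P[i][j] = counts[(i, j)] / row_total
    return P

# Hypothetical sequence of 3 clustered joint states (low/mid/high output)
seq = [0, 0, 1, 2, 2, 1, 0, 1, 2, 2, 2, 1, 0, 0, 1]
P = transition_matrix(seq, 3)
```

Each row of `P` is the conditional distribution of the next joint state, which is exactly what a sequential (Monte Carlo) reliability simulation samples from.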

  4. 9 m side drop test of scale model

    International Nuclear Information System (INIS)

    Ku, Jeong-Hoe; Chung, Seong-Hwan; Lee, Ju-Chan; Seo, Ki-Seog

    1993-01-01

    A type B(U) shipping cask has been developed at KAERI for transporting PWR spent fuel. Since the cask is to transport spent PWR fuel, it must be designed to meet all of the structural requirements specified in domestic packaging regulations and IAEA Safety Series No. 6. This paper describes the side drop testing of a one-third scale model cask. The crush and deformation of the shock absorbing covers directly control the deceleration experienced by the cask during the 9 m side drop impact. The shock absorbing covers greatly mitigated the inertia forces on the cask body due to the side drop impact. Comparison of the side drop test with finite element analysis verified that the 1/3 scale model cask maintains its structural integrity under the side drop impact. The test and analysis results can be used as basic data to evaluate the structural integrity of the real cask. (J.P.N.)

  5. Cross-correlation time-of-flight analysis of molecular beam scattering

    International Nuclear Information System (INIS)

    Nowikow, C.V.; Grice, R.

    1979-01-01

    The theory of the cross-correlation method of time-of-flight analysis is presented in a form which highlights its formal similarity to the conventional method. A time-of-flight system for the analysis of crossed molecular beam scattering is described, which is based on a minicomputer interface and can operate in both the cross-correlation and conventional modes. The interface maintains the synchronisation of chopper disc rotation and channel advance indefinitely in the cross-correlation method and can acquire data in phase with the beam modulation in both methods. The shutter function of the cross-correlation method is determined and the deconvolution analysis of the data is discussed. (author)
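The formal similarity between the cross-correlation and conventional methods can be shown numerically: counts modulated by a pseudo-random chopper sequence are inverted by circular cross-correlation with the +/-1 version of that sequence. A toy sketch with a length-7 maximum-length sequence (the spectrum values are illustrative):

```python
def simulate_counts(spectrum, chopper):
    """Detector counts: the TOF spectrum circularly convolved with the
    pseudo-random chopper (shutter) sequence."""
    N = len(chopper)
    return [sum(chopper[(i - k) % N] * spectrum[k] for k in range(N))
            for i in range(N)]

def recover_spectrum(counts, chopper):
    """Circular cross-correlation with the +/-1 chopper sequence; for a
    maximum-length sequence this inverts the modulation exactly."""
    N = len(chopper)
    w = [2 * c - 1 for c in chopper]
    return [sum(counts[(tau + i) % N] * w[i] for i in range(N)) / sum(chopper)
            for tau in range(N)]

chopper = [1, 1, 1, 0, 1, 0, 0]        # length-7 m-sequence (delta-like ACF)
spectrum = [0, 0, 5, 1, 0, 0, 0]       # hypothetical TOF spectrum
counts = simulate_counts(spectrum, chopper)
recovered = recover_spectrum(counts, chopper)
```

In a real instrument the recovery is noisy rather than exact, but the duty-cycle advantage over single-slot chopping is the same.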

  6. DKIST enclosure modeling and verification during factory assembly and testing

    Science.gov (United States)

    Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka

    2014-08-01

    The Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique as, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of Final Design Review in January 2012, was supported by mathematical models and other analyses which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During the Enclosure Factory Assembly and Testing the compliance with the requirements has been verified using the real hardware and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.

  7. Engineering Properties and Correlation Analysis of Fiber Cementitious Materials

    Directory of Open Access Journals (Sweden)

    Wei-Ting Lin

    2014-11-01

    Full Text Available This study focuses on the effect of the amount of silica fume addition and the volume fraction of steel fiber on the engineering properties of cementitious materials. Test variables include dosage of silica fume (5% and 10%), water/cement ratio (0.35 and 0.55) and steel fiber dosage (0.5%, 1.0% and 2.0%). The experimental results, including compressive strength, direct tensile strength, splitting tensile strength, surface abrasion and drop-weight tests, were used in an analysis of variance to determine the relevance and significance of the material parameters to these mechanical properties. Test results illustrate that the splitting tensile strength, direct tensile strength, strain capacity and crack-arresting ability increase with increasing steel fiber and silica fume dosages, and that the optimum mixture of the fiber cementitious materials is 5% silica fume replacement with 2% fiber dosage. In addition, Pearson correlation coefficients were computed to evaluate the influence of the material variables; they correspond well with the experimental results.
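The Pearson coefficient used in that correlation analysis is straightforward to compute. A sketch on hypothetical fiber-dosage versus splitting-tensile-strength pairs (not the paper's data):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

fiber = [0.5, 1.0, 2.0, 0.5, 1.0, 2.0]        # steel fiber dosage (%)
strength = [3.1, 3.6, 4.4, 2.8, 3.3, 4.1]     # splitting tensile (MPa)
r = pearson(fiber, strength)
```

A coefficient near +1, as here, is what supports the claim that tensile properties rise with fiber dosage.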

  8. Multiscale Detrended Cross-Correlation Analysis of STOCK Markets

    Science.gov (United States)

    Yin, Yi; Shang, Pengjian

    2014-06-01

    In this paper, we employ detrended cross-correlation analysis (DCCA) to investigate the cross-correlations between different stock markets. We report the results of cross-correlated behaviors in US, Chinese and European stock markets over the period 1997-2012 using the DCCA method. DCCA reveals the cross-correlated behaviors of intra-regional and inter-regional stock markets in the short and long term, displaying the similarities and differences of these behaviors simply and roughly, as well as the persistence of the cross-correlated behaviors of fluctuations. Then, because of the limitations and inapplicability of the DCCA method, we propose the multiscale detrended cross-correlation analysis (MSDCCA) method, which avoids "a priori" selection of the ranges of scales over which the two coefficients of the classical DCCA method are identified. We employ MSDCCA to reanalyze these cross-correlations and to exhibit important details, such as the existence and position of minima, maxima and bimodal distributions, which are lost if the scale structure is described by only two coefficients, as well as essential differences and similarities in the scale structures of the cross-correlation of intra-regional and inter-regional markets. The additional statistical characteristics of cross-correlation obtained by the MSDCCA method help us understand how two different stock markets influence each other, and allow the influence of two inter-regional markets on the cross-correlation to be analyzed in detail; we thus obtain a richer and more detailed knowledge of the complex evolution of the dynamics of the cross-correlations between stock markets. The application of MSDCCA promotes our understanding of the internal mechanisms and structures of financial markets and helps to forecast stock indices based on our current results, which demonstrate the cross-correlations between stock indices. We also discuss MSDCCA methods using secant rolling windows of different sizes and, lastly, provide some relevant implications and
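The scale-dependent coefficient underlying both methods can be sketched compactly. This follows Zebende's rho_DCCA (detrended covariance over the geometric mean of detrended variances), applied to synthetic series that share a common component; the data and scale are illustrative:

```python
import random

def _detrended_resid(profile, i, n):
    """Residuals of a least-squares line over profile[i:i+n+1]."""
    xs = list(range(n + 1))
    ys = profile[i:i + n + 1]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]

def dcca_coefficient(x, y, n):
    """DCCA cross-correlation coefficient at box scale n."""
    N = min(len(x), len(y))
    px, py = [0.0], [0.0]
    for k in range(N):                    # integrated profiles
        px.append(px[-1] + x[k])
        py.append(py[-1] + y[k])
    f2xy = f2x = f2y = 0.0
    boxes = 0
    for i in range(N - n):                # sliding detrending boxes
        rx = _detrended_resid(px, i, n)
        ry = _detrended_resid(py, i, n)
        m = n + 1
        f2xy += sum(a * b for a, b in zip(rx, ry)) / m
        f2x += sum(a * a for a in rx) / m
        f2y += sum(b * b for b in ry) / m
        boxes += 1
    return (f2xy / boxes) / ((f2x / boxes) ** 0.5 * (f2y / boxes) ** 0.5)

random.seed(1)
common = [random.gauss(0, 1) for _ in range(400)]     # shared driver
x = [c + random.gauss(0, 0.5) for c in common]
y = [c + random.gauss(0, 0.5) for c in common]
rho = dcca_coefficient(x, y, 8)
```

Evaluating `rho` over a grid of scales `n`, rather than summarizing with two fitted coefficients, is the multiscale idea behind MSDCCA.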

  9. Structure function analysis of long-range correlations in plasma turbulence

    International Nuclear Information System (INIS)

    Yu, C.X.; Gilmore, M.; Peebles, W.A.; Rhodes, T.L.

    2003-01-01

    Long-range correlations (temporal and spatial) have been predicted in a number of different turbulence models, both analytical and numerical. These long-range correlations are thought to significantly affect cross-field turbulent transport in magnetically confined plasmas. The Hurst exponent, H, one of a number of measures for identifying the existence of long-range correlations in experimental data, can be used to quantify self-similarity scalings and correlations in the mesoscale temporal range. The Hurst exponent can be calculated by several different algorithms, each of which has particular advantages and disadvantages. One method for calculating H is via structure functions (SFs). The SF method is a robust technique for determining H, with several inherent advantages, that has not yet been widely used in plasma turbulence research. In this article, the SF method and its advantages are discussed in detail, using both simulated and measured fluctuation data from the DIII-D tokamak [J. L. Luxon and L. G. Davis, Fusion Technol. 8, 441 (1985)]. In addition, it is shown that SFs used in conjunction with rescaled range analysis (another method for calculating H) can be used to mitigate the effects of coherent modes in some cases.
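The structure-function route to H can be sketched in a few lines: the q-th order structure function scales as S_q(tau) ~ tau^(qH), so H is the log-log slope divided by q. For an uncorrelated random walk the expected exponent is H = 0.5; the series here is synthetic, not tokamak data:

```python
import math
import random

def hurst_sf(x, taus, q=2):
    """Hurst exponent from the q-th order structure function
    S_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^(q*H)."""
    pts = []
    for tau in taus:
        s = sum(abs(x[t + tau] - x[t]) ** q for t in range(len(x) - tau))
        pts.append((math.log(tau), math.log(s / (len(x) - tau))))
    mx = sum(a for a, _ in pts) / len(pts)
    my = sum(b for _, b in pts) / len(pts)
    slope = (sum((a - mx) * (b - my) for a, b in pts)
             / sum((a - mx) ** 2 for a, _ in pts))   # log-log LS slope
    return slope / q

random.seed(0)
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.gauss(0, 1))  # uncorrelated increments
H = hurst_sf(walk, [1, 2, 4, 8, 16, 32])
```

Persistent long-range correlations would push the estimate toward H > 0.5, anti-persistent ones toward H < 0.5.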

  10. Analysis of residuals from enzyme kinetic and protein folding experiments in the presence of correlated experimental noise.

    Science.gov (United States)

    Kuzmic, Petr; Lorenz, Thorsten; Reinstein, Jochen

    2009-12-01

    Experimental data from continuous enzyme assays or protein folding experiments often contain hundreds, or even thousands, of densely spaced data points. When the sampling interval is extremely short, the experimental data points might not be statistically independent. The resulting neighborhood correlation invalidates important theoretical assumptions of nonlinear regression analysis. As a consequence, certain goodness-of-fit criteria, such as the runs-of-signs test and the autocorrelation function, might indicate a systematic lack of fit even if the experiment does agree very well with the underlying theoretical model. A solution to this problem is to analyze only a subset of the residuals of fit, such that any excessive neighborhood correlation is eliminated. Substrate kinetics of the HIV protease and the unfolding kinetics of UMP/CMP kinase, a globular protein from Dictyostelium discoideum, serve as two illustrative examples. A suitable data-reduction algorithm has been incorporated into software DYNAFIT [P. Kuzmic, Anal. Biochem. 237 (1996) 260-273], freely available to all academic researchers from http://www.biokin.com.
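The data-reduction idea, keeping only every k-th residual so that neighborhood correlation decays, can be demonstrated with the runs-of-signs statistic on synthetic AR(1) residuals (a sketch of the principle; DYNAFIT's actual subset-selection algorithm may differ):

```python
import math
import random

def runs_test_z(residuals):
    """Z statistic of the runs-of-signs test; |z| well above 2 signals
    serial (neighborhood) correlation among the residuals."""
    signs = [r > 0 for r in residuals if r != 0]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (runs - mu) / math.sqrt(var)

random.seed(3)
resid = [0.0]
for _ in range(2000):                 # densely sampled, correlated residuals
    resid.append(0.95 * resid[-1] + random.gauss(0, 1))
z_full = runs_test_z(resid)           # spurious "lack of fit"
z_thin = runs_test_z(resid[::20])     # every 20th residual: correlation decays
```

The full residual set fails the runs test badly even though the underlying noise model is correct; the thinned subset is far closer to what the test assumes.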

  11. Large-scale evaluation of dynamically important residues in proteins predicted by the perturbation analysis of a coarse-grained elastic model

    Directory of Open Access Journals (Sweden)

    Tekpinar Mustafa

    2009-07-01

    Full Text Available Abstract Background It is increasingly recognized that protein functions often require intricate conformational dynamics, which involves a network of key amino acid residues that couple spatially separated functional sites. Tremendous efforts have been made to identify these key residues by experimental and computational means. Results We have performed a large-scale evaluation of the predictions of dynamically important residues by a variety of computational protocols, including three based on the perturbation and correlation analysis of a coarse-grained elastic model. This study is performed for two lists of test cases with >500 pairs of protein structures. The dynamically important residues predicted by the perturbation and correlation analysis are found to be strongly or moderately conserved in >67% of test cases. They form a sparse network of residues which are clustered both in 3D space and along the protein sequence. Their overall conservation is attributed to their dynamic role rather than ligand binding or high network connectivity. Conclusion By modeling how the protein structural fluctuations respond to residue-position-specific perturbations, our highly efficient perturbation and correlation analysis can be used to dissect the functional conformational changes in various proteins with a residue level of detail. The predictions of dynamically important residues serve as promising targets for mutational and functional studies.

  12. Correlation Between Screening Mammography Interpretive Performance on a Test Set and Performance in Clinical Practice.

    Science.gov (United States)

    Miglioretti, Diana L; Ichikawa, Laura; Smith, Robert A; Buist, Diana S M; Carney, Patricia A; Geller, Berta; Monsees, Barbara; Onega, Tracy; Rosenberg, Robert; Sickles, Edward A; Yankaskas, Bonnie C; Kerlikowske, Karla

    2017-10-01

    Evidence is inconsistent about whether radiologists' interpretive performance on a screening mammography test set reflects their performance in clinical practice. This study aimed to estimate the correlation between test set and clinical performance and determine if the correlation is influenced by cancer prevalence or lesion difficulty in the test set. This institutional review board-approved study randomized 83 radiologists from six Breast Cancer Surveillance Consortium registries to assess one of four test sets of 109 screening mammograms each; 48 radiologists completed a fifth test set of 110 mammograms 2 years later. Test sets differed in number of cancer cases and difficulty of lesion detection. Test set sensitivity and specificity were estimated using woman-level and breast-level recall with cancer status and expert opinion as gold standards. Clinical performance was estimated using woman-level recall with cancer status as the gold standard. Spearman rank correlations between test set and clinical performance with 95% confidence intervals (CI) were estimated. For test sets with fewer cancers (N = 15) that were more difficult to detect, correlations were weak to moderate for sensitivity (woman level = 0.46, 95% CI = 0.16, 0.69; breast level = 0.35, 95% CI = 0.03, 0.61) and weak for specificity (0.24, 95% CI = 0.01, 0.45) relative to expert recall. Correlations for test sets with more cancers (N = 30) were close to 0 and not statistically significant. Correlations between screening performance on a test set and performance in clinical practice are not strong. Test set performance more accurately reflects performance in clinical practice if cancer prevalence is low and lesions are challenging to detect. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
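The Spearman rank correlation reported in the study is simply the Pearson correlation of ranks. A sketch on hypothetical per-radiologist sensitivities (invented numbers, not the study's data):

```python
def ranks(values):
    """Average ranks, 1-based (ties share the mean of their positions)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-radiologist sensitivities: test set vs clinical practice
test_set = [0.55, 0.62, 0.70, 0.74, 0.81]
clinical = [0.60, 0.58, 0.72, 0.69, 0.77]
rho = spearman(test_set, clinical)
```

Because only the rank order matters, the statistic is robust to the different operating points radiologists use on test sets versus in the clinic.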

  13. Patient self-report section of the ASES questionnaire: a Spanish validation study using classical test theory and the Rasch model.

    Science.gov (United States)

    Vrotsou, Kalliopi; Cuéllar, Ricardo; Silió, Félix; Rodriguez, Miguel Ángel; Garay, Daniel; Busto, Gorka; Trancho, Ziortza; Escobar, Antonio

    2016-10-18

    The aim of the current study was to validate the self-report section of the American Shoulder and Elbow Surgeons questionnaire (ASES-p) into Spanish. Shoulder pathology patients were recruited and followed up to 6 months post-treatment. The ASES-p, Constant, SF-36 and Barthel scales were filled in pre- and post-treatment. Reliability was tested with Cronbach's alpha, and convergent validity with Spearman's correlation coefficients. Confirmatory factor analysis (CFA) and the Rasch model were implemented for assessing structural validity and unidimensionality of the scale. Models with and without the pain item were considered. Responsiveness to change was explored via standardised effect sizes. Results were acceptable for both tested models. Cronbach's alpha was 0.91, and total scale correlations with the Constant and physical SF-36 dimensions were >0.50. Factor loadings for CFA were >0.40. The Rasch model confirmed the unidimensionality of the scale, even though item 10 "do usual sport" was suggested as non-informative. Finally, patients with improved post-treatment shoulder function and those receiving surgery had higher standardised effect sizes. The adapted Spanish ASES-p version is a valid and reliable tool for shoulder evaluation, and its unidimensionality is supported by the data.
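Cronbach's alpha, the reliability statistic reported above, is computed directly from an item-score matrix. A sketch on invented scores (the study's 0.91 comes from its own patient data):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.
    scores: one row per respondent, one column per item."""
    k = len(scores[0])
    def var(v):                                 # sample variance
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-item shoulder-function scores for six patients
scores = [
    [3, 3, 4, 3],
    [2, 2, 2, 3],
    [4, 5, 4, 4],
    [1, 2, 1, 1],
    [5, 4, 5, 5],
    [3, 4, 3, 4],
]
alpha = cronbach_alpha(scores)
```

Items that move together across respondents, as here, push alpha toward 1; unrelated items push it toward 0.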

  14. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Palley, M.A.

    1985-01-01

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.
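The ACM compromise described above amounts to fitting a regression on the proportionally perturbed synthetic database and reading off a prediction for the confidential attribute. A toy sketch (attribute names, perturbation level and all numbers are invented for illustration):

```python
import random

def ols(xs, ys):
    """Simple least-squares fit y ~ a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

random.seed(7)
# Hypothetical actual database: (years_of_service, salary), salary confidential
actual = [(y, 30000 + 2500 * y + random.gauss(0, 2000)) for y in range(1, 21)]
# Synthetic database: query-derived values with proportional random
# perturbation (+/-5%), built to statistically resemble the actual one
synthetic = [(y, s * random.uniform(0.95, 1.05)) for y, s in actual]

a, b = ols([y for y, _ in synthetic], [s for _, s in synthetic])
estimate = a + b * 15   # inferred confidential salary, 15-year employee
```

Even though no individual record was released, the regression narrows the confidential value to a tight interval, which is precisely the privacy threat the study formalizes.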

  15. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
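The density-marked statistic the paper advocates is essentially a pair count weighted by marks. A 1-D toy sketch (real survey analyses work in 3-D with estimators that handle survey geometry; here the normalization property is all that is demonstrated):

```python
def marked_correlation(positions, marks, r_edges):
    """1-D marked correlation function M(r): mean mark product over pairs
    with separation in each r bin, normalised by the squared mean mark."""
    mbar = sum(marks) / len(marks)
    nbins = len(r_edges) - 1
    wsum, count = [0.0] * nbins, [0] * nbins
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = abs(positions[i] - positions[j])
            for k in range(nbins):
                if r_edges[k] <= r < r_edges[k + 1]:
                    wsum[k] += marks[i] * marks[j]
                    count[k] += 1
                    break
    return [(wsum[k] / count[k]) / mbar ** 2 if count[k] else None
            for k in range(nbins)]

# Sanity check: with uniform marks, M(r) must be 1 in every populated bin;
# density-dependent marks would make M(r) deviate from 1 at small r.
pos = [0.1 * i for i in range(30)]
M_uniform = marked_correlation(pos, [1.0] * 30, [0.0, 0.5, 1.0, 1.5])
```

With marks chosen as a function of local density, a screened modified-gravity signal would appear as a scale-dependent departure of M(r) from unity.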

  16. A marked correlation function for constraining modified gravity models

    Energy Technology Data Exchange (ETDEWEB)

    White, Martin, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  17. MARS CODE MANUAL VOLUME V: Models and Correlations

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Bae, Sung Won; Lee, Seung Wook; Yoon, Churl; Hwang, Moon Kyu; Kim, Kyung Doo; Jeong, Jae Jun

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This models and correlations manual provides a complete list of detailed information on the thermal-hydraulic models used in MARS, so this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to RELAP5's. This similarity to the RELAP5 input is intentional, as this input scheme allows minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  18. A general correlation of MPPS penetration as a function of face velocity with the model 8140 using the certitest 8160

    Energy Technology Data Exchange (ETDEWEB)

    Lifshutz, N.; Pierce, M. [Hollingsworth & Vose Company, West Groton, MA (United States)

    1997-08-01

    The CertiTest 8160 is a Condensation Nucleus Counter (CNC) based filtration test stand which permits measurement of penetration as a function of particle size. The Model 8140 is also a CNC based filtration test stand which provides a single penetration measurement for a fixed particle distribution aerosol challenge. A study was carried out measuring DOP penetration on a broad range of flat filtration media at various face velocities to compare these two instruments. The tests done on the CertiTest 8160 incorporated a range of particle sizes which encompassed the most penetrating particle size (MPPS). In this paper we present a correlation between the MPPS penetration as measured by the CertiTest 8160 and the penetration values obtained on the Model 8140. We observed that at the lowest air face velocities of the study the Model 8140 tended to overpredict the MPPS penetration as measured by the CertiTest 8160. We also present a correlation of MPPS penetration with face velocity which may be of use for extrapolation purposes. 5 refs., 8 figs.

  19. Ares I Scale Model Acoustic Tests Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas D.

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116. The test article included a 5% scale Ares I vehicle model and tower mounted on the Mobile Launcher. Acoustic and pressure data were measured by approximately 200 instruments located throughout the test article. There were four primary ASMAT instrument suites: ignition overpressure (IOP), lift-off acoustics (LOA), ground acoustics (GA), and spatial correlation (SC). Each instrumentation suite incorporated different sensor models which were selected based upon measurement requirements. These requirements included the type of measurement, exposure to the environment, instrumentation check-outs and data acquisition. The sensors were attached to the test article using different mounts and brackets dependent upon the location of the sensor. This presentation addresses the observed effect of the sensors and mounts on the acoustic and pressure measurements.

  20. Thermal Analysis of the Fastrac Chamber/Nozzle

    Science.gov (United States)

    Davis, Darrell

    2001-01-01

    This paper will describe the thermal analysis techniques used to predict temperatures in the film-cooled ablative rocket nozzle used on the Fastrac 60K rocket engine. A model was developed that predicts char and pyrolysis depths, liner thermal gradients, and temperatures of the bondline between the overwrap and liner. Correlation of the model was accomplished by thermal analog tests performed at Southern Research, and specially instrumented hot fire tests at the Marshall Space Flight Center. Infrared thermography was instrumental in defining nozzle hot wall surface temperatures. In-depth and outboard thermocouple data was used to correlate the kinetic decomposition routine used to predict char and pyrolysis depths. These depths were anchored with measured char and pyrolysis depths from cross-sectioned hot-fire nozzles. For the X-34 flight analysis, the model includes the ablative Thermal Protection System (TPS) material that protects the overwrap from the recirculating plume. Results from model correlation, hot-fire testing, and flight predictions will be discussed.