WorldWideScience

Sample records for dimensional analysis method

  1. Nonlinear Dimensionality Reduction Methods in Climate Data Analysis

    OpenAIRE

    Ross, Ian

    2009-01-01

    Linear dimensionality reduction techniques, notably principal component analysis, are widely used in climate data analysis as a means to aid in the interpretation of datasets of high dimensionality. These linear methods may not be appropriate for the analysis of data arising from nonlinear processes occurring in the climate system. Numerous techniques for nonlinear dimensionality reduction have been developed recently that may provide a potentially useful tool for the identification of low-di...

  2. On two flexible methods of 2-dimensional regression analysis

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2012-01-01

    Vol. 18, No. 4 (2012), pp. 154-164. ISSN 1803-9782. Grant - others: GA ČR(CZ) GAP209/10/2045. Institutional support: RVO:67985556. Keywords: regression analysis * Gordon surface * prediction error * projection pursuit. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2013/SI/volf-on two flexible methods of 2-dimensional regression analysis.pdf

  3. Dimensional Analysis

    CERN Document Server

    Tan, Qingming

    2011-01-01

    Dimensional analysis is an essential scientific method and a powerful tool for solving problems in physics and engineering. This book starts by introducing the Pi Theorem, which is the theoretical foundation of dimensional analysis. It also provides ample and detailed examples of how dimensional analysis is applied to solving problems in various branches of mechanics. The book covers the extensive findings on explosion mechanics and impact dynamics contributed by the author's research group over the past forty years at the Chinese Academy of Sciences. The book is intended for advanced undergra
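    The Pi Theorem mentioned above can be illustrated with a small sketch: a candidate dimensionless group is verified by checking that the dimensional exponents of its factors sum to zero. The pendulum variables and exponents below are an illustrative assumption, not an example taken from the book.

```python
from fractions import Fraction

# Dimensional exponents [M, L, T] for each variable of a simple pendulum.
dims = {
    "period": (0, 0, 1),    # t ~ T
    "length": (0, 1, 0),    # l ~ L
    "gravity": (0, 1, -2),  # g ~ L T^-2
    "mass": (1, 0, 0),      # m ~ M
}

def exponents_of_product(powers):
    """Sum the [M, L, T] exponents of a product of variables raised to powers."""
    total = [Fraction(0)] * 3
    for var, p in powers.items():
        for i, e in enumerate(dims[var]):
            total[i] += Fraction(p) * e
    return total

# Candidate Pi group: t * sqrt(g / l) = t^1 * g^(1/2) * l^(-1/2)
pi_group = {"period": 1, "gravity": Fraction(1, 2), "length": Fraction(-1, 2)}
assert exponents_of_product(pi_group) == [0, 0, 0]  # all exponents vanish: dimensionless
```

    Since mass appears in no dimensionless group with the remaining variables, dimensional analysis alone already shows the period of an ideal pendulum cannot depend on its mass.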

  4. An online incremental orthogonal component analysis method for dimensionality reduction.

    Science.gov (United States)

    Zhu, Tao; Xu, Ye; Shen, Furao; Zhao, Jinxi

    2017-01-01

    In this paper, we introduce a fast linear dimensionality reduction method named incremental orthogonal component analysis (IOCA). IOCA is designed to automatically extract desired orthogonal components (OCs) in an online environment. The OCs and the low-dimensional representations of the original data are obtained with only one pass through the entire dataset. Without solving a matrix eigenproblem or a matrix inversion problem, IOCA learns incrementally from a continuous data stream with low computational cost. By proposing an adaptive threshold policy, IOCA is able to automatically determine the dimension of the feature subspace. Meanwhile, the quality of the learned OCs is guaranteed. The analysis and experiments demonstrate that IOCA is simple, yet efficient and effective. Copyright © 2016 Elsevier Ltd. All rights reserved.
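    The one-pass extraction of orthogonal components with a threshold can be sketched roughly as follows. This is a simplified Gram-Schmidt-style illustration of the idea, not the IOCA algorithm from the paper; the function name and threshold value are invented for the example.

```python
import math

def incremental_orthogonal_components(stream, threshold=1e-3):
    """One-pass sketch: grow an orthonormal basis from a data stream.
    A new component is added only when the residual of a sample, after
    removing its projection onto the current basis, is large enough."""
    basis = []
    for x in stream:
        r = list(x)
        # Subtract the projection onto every component found so far.
        for q in basis:
            c = sum(qi * ri for qi, ri in zip(q, r))
            r = [ri - c * qi for qi, ri in zip(q, r)]
        norm = math.sqrt(sum(ri * ri for ri in r))
        if norm > threshold:
            basis.append([ri / norm for ri in r])
    return basis

# The second sample is collinear with the first, so only 2 components emerge.
data = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
print(len(incremental_orthogonal_components(data)))  # 2
```

    The adaptive part of the real method lies in how the threshold evolves with the data, which the paper describes and this sketch omits.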

  5. Dimensional analysis and qualitative methods in problem solving: II

    International Nuclear Information System (INIS)

    Pescetti, D

    2009-01-01

    We show that the underlying mathematical structure of dimensional analysis (DA), in the qualitative methods in problem-solving context, is the algebra of the affine spaces. In particular, we show that the qualitative problem-solving procedure based on the parallel decomposition of a problem into simple special cases yields the new original mathematical concepts of special points and special representations of affine spaces. A qualitative problem-solving algorithm piloted by the mathematics of DA is illustrated by a set of examples.

  6. PWR core safety analysis with 3-dimensional methods

    International Nuclear Information System (INIS)

    Gensler, A.; Kühnel, K.; Kuch, S.

    2015-01-01

    Highlights: • An overview of AREVA’s safety analysis codes and their coupling is provided. • The validation base and licensing applications of these codes are summarized. • Coupled codes and methods provide improved margins and non-conservative results. • Examples for REA and inadvertent opening of the pressurizer safety valve are given. - Abstract: The main focus of safety analysis is to demonstrate the required safety level of the reactor core. Because of the demanding requirements, the quality of the safety analysis strongly affects the confidence in the operational safety of a reactor. To ensure the highest quality, it is essential that the methodology consist of appropriate analysis tools, an extensive validation base, and, last but not least, highly educated engineers applying the methodology. The sophisticated 3-dimensional core models applied by AREVA ensure that all physical effects relevant for safety are treated and that the results are reliable and conservative. Presently AREVA employs SCIENCE, CASMO/NEMO and CASCADE-3D for pressurized water reactors. These codes are currently being consolidated into the next-generation 3D code system ARCADIA®. AREVA continuously extends the validation base, including measurement campaigns in test facilities and comparisons of predictions against steady-state and transient data gathered from plants during many years of operation. Thus, the core models provide reliable and comprehensive results for a wide range of applications. For the application of these powerful tools, AREVA takes advantage of its interdisciplinary know-how and international teamwork. Experienced engineers of different technical backgrounds work together to ensure appropriate interpretation of the calculation results and uncertainty analyses, along with continuously maintaining and enhancing the quality of the analysis methodologies. In this paper, an overview of AREVA’s broad application experience as well as the broad validation

  7. Dimensional Analysis and Qualitative Methods in Problem Solving

    Science.gov (United States)

    Pescetti, D.

    2008-01-01

    The primary application of dimensional analysis (DA) is in problem solving. Typically, the problem description indicates that a physical quantity Y (the unknown) is a function f of other physical quantities A_1, ..., A_n (the data). We propose a qualitative problem-solving procedure which consists of a parallel decomposition…

  8. Dimensionality Reduction Methods: Comparative Analysis of methods PCA, PPCA and KPCA

    Directory of Open Access Journals (Sweden)

    Jorge Arroyo-Hernández

    2016-01-01

    Full Text Available Dimensionality reduction methods are algorithms that map a dataset into subspaces derived from the original space, of fewer dimensions, which allow a description of the data at a lower cost. Due to their importance, they are widely used in machine learning processes. This article presents a comparative analysis of the PCA, PPCA and KPCA dimensionality reduction methods. A reconstruction experiment on worm-shape data was performed through structures of landmarks located on the body contour, with each method using a different number of principal components. The results showed that all methods can be seen as alternative processes. Nevertheless, thanks to its potential for analysis in the feature space and the method presented for calculating its preimage, KPCA offers a better method for recognition processes and pattern extraction.
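    The common core of the three methods compared is the eigendecomposition of the data covariance matrix. A minimal sketch for the 2-D case, where the eigenvalues have a closed form, might look like this; the function name and data are illustrative, not from the article.

```python
import math

def pca_2d(points):
    """PCA for 2-D data via the closed-form eigendecomposition
    of the 2x2 covariance matrix (illustrative sketch only)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc  # variances along principal axes

# Points along the line y = x carry all their variance in one direction.
lam1, lam2 = pca_2d([(0, 0), (1, 1), (2, 2), (3, 3)])
print(lam1, lam2)  # 2.5 0.0
```

    PPCA adds a noise model on top of this decomposition, and KPCA performs it in a kernel-induced feature space, which is what makes the preimage calculation mentioned above necessary.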

  9. Methods, apparatuses, and computer-readable media for projectional morphological analysis of N-dimensional signals

    Science.gov (United States)

    Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich

    2016-05-17

    Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
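    A one-dimensional grayscale opening (erosion followed by dilation) illustrates the kind of morphological filtering discussed, in a much-simplified form; the projectional algorithms of the patent are more general, and everything below is an invented example.

```python
def erode(signal, k):
    """Grayscale erosion with a flat structuring element of width 2k+1."""
    n = len(signal)
    return [min(signal[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def dilate(signal, k):
    """Grayscale dilation with a flat structuring element of width 2k+1."""
    n = len(signal)
    return [max(signal[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def opening(signal, k):
    """Opening = erosion then dilation: suppresses features narrower
    than the structuring element while preserving broader ones."""
    return dilate(erode(signal, k), k)

# The narrow spike is removed; the broad plateau survives.
sig = [0, 0, 5, 0, 0, 3, 3, 3, 3, 0]
print(opening(sig, 1))  # → [0, 0, 0, 0, 0, 3, 3, 3, 3, 0]
```

    Filtering an input signal against a form and reporting the differences, as in the claims above, can then be read as comparing the signal with such a morphologically filtered version of itself.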

  10. Dimensional Analysis

    Indian Academy of Sciences (India)

    Dimensional analysis is a useful tool which finds important applications in physics and engineering. It is most effective when there exist a maximal number of dimensionless quantities constructed out of the relevant physical variables. Though a complete theory of dimen- sional analysis was developed way back in 1914 in a.

  11. Method of multi-dimensional moment analysis for the characterization of signal peaks

    Science.gov (United States)

    Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A

    2012-10-23

    A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
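    The moment characterization of a single sampled peak can be sketched as follows: the zeroth moment gives the peak area, the first the centroid, and the second central moment a width measure. This is a generic one-dimensional illustration, not the patented multi-dimensional Peclet analysis.

```python
def peak_moments(t, y):
    """Moment characterization of a sampled signal peak (sketch):
    area (M0), centroid (M1/M0), and variance-based width."""
    m0 = sum(y)
    centroid = sum(ti * yi for ti, yi in zip(t, y)) / m0
    var = sum((ti - centroid) ** 2 * yi for ti, yi in zip(t, y)) / m0
    return m0, centroid, var

# Symmetric triangular peak centered at t = 2.
t = [0, 1, 2, 3, 4]
y = [0.0, 1.0, 2.0, 1.0, 0.0]
area, centroid, width2 = peak_moments(t, y)
print(area, centroid, width2)  # 4.0 2.0 0.5
```

    A Peclet-style score would combine such width and position measures across two experimental dimensions; here only the per-axis moments are shown.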

  12. Dimensional Analysis

    Indian Academy of Sciences (India)

    to understand and quite straightforward to use. Dimensional analysis is a topic which every student of science encounters in elementary physics courses. The basics of this topic are taught and learnt quite hurriedly (and forgotten fairly quickly thereafter!). It does not generally receive the attention and the respect it deserves ...

  13. Moderator feedback effects in two-dimensional nodal methods for pressurized water reactor analysis

    International Nuclear Information System (INIS)

    Downar, T.J.

    1987-01-01

    A method was developed for incorporating moderator feedback effects in two-dimensional nodal codes used for pressurized water reactor (PWR) neutronic analysis. Equations for the assembly average quality and density are developed in terms of the assembly power calculated in two dimensions. The method is validated with a Westinghouse PWR using the Electric Power Research Institute code SIMULATE-E. Results show a several percent improvement is achieved in the two-dimensional power distribution prediction compared to methods without moderator feedback

  14. Dimensionality reduction methods:

    OpenAIRE

    Amenta, Pietro; D'Ambra, Luigi; Gallo, Michele

    2005-01-01

    In case one or more sets of variables are available, the use of dimensional reduction methods could be necessary. In this context, after a review of the link between Shrinkage Regression Methods and Dimensional Reduction Methods, the authors provide a different multivariate extension of Garthwaite's PLS approach (1994), in which a simple linear regression coefficients framework can be given for several dimensional reduction methods.

  15. Dimensional analysis and self-similarity methods for engineers and scientists

    CERN Document Server

    Zohuri, Bahman

    2015-01-01

    This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of  dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical impleme

  16. Impact response analysis of cask for spent fuel by dimensional analysis and mode superposition method

    International Nuclear Information System (INIS)

    Kim, Y. J.; Kim, W. T.; Lee, Y. S.

    2006-01-01

    Full text: Due to the potentiality of accidents, the transportation safety of radioactive material has become extremely important these days. The most important means of accomplishing safety in the transportation of radioactive material is the integrity of the cask. The cask for spent fuel generally consists of a cask body and two impact limiters. The impact limiters are attached at the upper and lower ends of the cask body. The cask must satisfy general requirements and test requirements for normal transport conditions and hypothetical accident conditions in accordance with IAEA regulations. Among the test requirements for hypothetical accident conditions, the 9 m drop test, in which the cask is dropped from a height of 9 m onto an unyielding surface to produce maximum damage, is a very important requirement because it can affect the structural soundness of the cask. So far the impact response analysis for the 9 m drop test has been obtained by the finite element method with a complex computational procedure. In this study, the empirical equations of the impact forces for the 9 m drop test are formulated by dimensional analysis. Then, using the empirical equations, the characteristics of the material used for the impact limiters are analysed. Also, the dynamic impact response of the cask body is analysed using the mode superposition method, and the analysis method is proposed. The results are also validated by comparison with previous experimental results and finite element analysis results. The present method is simpler than the finite element method and can be used to predict the impact response of the cask

  17. Three-Dimensional CST Parameterization Method Applied in Aircraft Aeroelastic Analysis

    Directory of Open Access Journals (Sweden)

    Hua Su

    2017-01-01

    Full Text Available The class/shape transformation (CST) method has the advantages of adjustable design variables and powerful parametric geometric shape design ability, and has been widely used in aerodynamic design and optimization processes. Three-dimensional CST is an extension for complex aircraft and can generate diverse three-dimensional aircraft and the corresponding mesh automatically and quickly. This paper proposes a parametric structural modeling method based on gridding feature extraction from the aerodynamic mesh generated by the three-dimensional CST method. This novel method can create a parametric structural model for the fuselage and wing and keep the coordination between the aerodynamic mesh and the structural mesh. Based on the generated aerodynamic and structural models, an automatic process for aeroelastic modeling and solving is presented, with the panel method as the aerodynamic solver and NASTRAN as the structural solver. A reusable launch vehicle (RLV) is used to illustrate the process of aeroelastic modeling and solving. The result shows that this method can generate aeroelastic models for diverse complex three-dimensional aircraft automatically and reduce the difficulty of aeroelastic analysis dramatically. It provides an effective approach to making use of aeroelastic analysis at the conceptual design phase for modern aircraft.

  18. Three-dimensional analysis of eddy current with the finite element method

    International Nuclear Information System (INIS)

    Takano, Ichiro; Suzuki, Yasuo

    1977-05-01

    The finite element method is applied to the three-dimensional analysis of eddy currents induced in a large Tokamak device (JT-60). Two techniques to study the eddy current are presented: those of the ordinary vector potential and the modified vector potential. The latter was originally developed to decrease the dimension of the global matrix. Theoretical treatment of these two is given. The skin effect for alternating current flowing in a circular loop of rectangular cross section is examined as an example of the modified vector potential technique, and the result is compared with the analytical one. This technique is useful in the analysis of eddy current problems. (auth.)

  19. Two- and three-dimensional shape fabric analysis by the intercept method in grey levels

    Science.gov (United States)

    Launeau, Patrick; Archanjo, Carlos J.; Picard, David; Arbaret, Laurent; Robin, Pierre-Yves

    2010-09-01

    The count intercept is a robust method for the numerical analysis of fabrics (Launeau and Robin, 1996). It counts the number of intersections between a set of parallel scan lines and a mineral phase, which must be identified on a digital image. However, the method is only sensitive to boundaries and therefore supposes the user has some knowledge about their significance. The aim of this paper is to show that a proper grey level detection of boundaries along scan lines is sufficient to calculate the two-dimensional anisotropy of grain or crystal distributions without any particular image processing. Populations of grains and crystals usually display elliptical anisotropies in rocks. When confirmed by the intercept analysis, a combination of a minimum of 3 mean length intercept roses, taken on 3 more or less perpendicular sections, allows the calculation of 3-dimensional ellipsoids and the determination of their standard deviation in direction and intensity in 3 dimensions as well. The feasibility of this quick method is attested by numerous examples on theoretical objects deformed by active and passive deformation, on BSE images of synthetic magma flow, on drawings or direct analysis of thin section pictures of sandstones, and on digital images of granites directly taken and measured in the field.
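    The counting step of the intercept idea can be sketched for a binary image: the number of phase-boundary crossings along horizontal versus vertical scan lines indicates the anisotropy of the grain distribution. This is a minimal illustration of the counting step only, not the grey-level method of the paper.

```python
def intercept_counts(grid):
    """Count phase-boundary intersections along horizontal and vertical
    scan lines of a binary image (sketch of the count-intercept idea)."""
    rows, cols = len(grid), len(grid[0])
    horiz = sum(1 for r in range(rows) for c in range(cols - 1)
                if grid[r][c] != grid[r][c + 1])
    vert = sum(1 for c in range(cols) for r in range(rows - 1)
               if grid[r][c] != grid[r + 1][c])
    return horiz, vert

# Grains elongated horizontally: vertical scan lines cross more boundaries.
grid = [
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
]
h, v = intercept_counts(grid)
print(h, v)  # 2 6
```

    Repeating the count over many scan-line orientations yields the intercept rose from which the 2-D anisotropy ellipse is fitted.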

  20. Registration and three-dimensional reconstruction of autoradiographic images by the disparity analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Weizhao; Ginsberg, M. (Univ. of Miami, FL (United States). Cerebral Vascular Disease Research Center); Young, T.Y. (Univ. of Miami, Coral Gables, FL (United States). Dept. of Electrical and Computer Engineering)

    1993-12-01

    Quantitative autoradiography is a powerful radio-isotopic-imaging method for neuroscientists to study local cerebral blood flow and glucose-metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computer and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-D) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented.

  1. A Dimensional Analysis Method for Improved Load-Unload Response Ratio

    Science.gov (United States)

    Liu, Yue; Yin, Xiang-chu

    2018-02-01

    The load-unload response ratio (LURR) method is proposed to measure the damage extent of source media and the criticality of an earthquake. Before the occurrence of a large earthquake, an anomalous increase in the time series of LURR within certain temporal and spatial windows has often been observed. In this paper, a dimensional analysis technique is devised to evaluate quantitatively the magnitude and time of the ensuing large earthquake within the anomalous areas derived from the LURR method. Based on the π-theorem, two dimensionless quantities associated with the earthquake time and magnitude are derived from six parameters: the seismic energy (E_S), the average seismic energy (E_W), the maximum value of LURR's seismogenic integral (I_PP), the thickness of the seismogenic zone (h), the time interval from I_PP to the earthquake (T_2), and the shear strain rate (γ̇). The statistical relationships between the earthquakes and the two dimensionless quantities are derived by testing the seismic data of the 50 events of M4.5-8.1 that occurred in China since 1976. In earthquake prediction, the LURR method is used to detect areas with anomalously high LURR values, and then our dimensional analysis technique is applied to assess the optimal critical region, magnitude, and time of the ensuing event, when its seismogenic integral is peaked (I_PP). As study examples, we applied this approach to four large events, namely the 2012 M_S5.3 Hami, 2015 M_S5.8 Alashan, and 2015 M_S8.1 Nepal earthquakes, and the 2013 Songyuan earthquake swarm. Results show that the predicted location, time, and magnitude correlate well with the actual events. This provides evidence that the dimensional analysis technique may be a useful tool to augment the predictive power of the traditional LURR approach.

  2. Development of pin-by-pin core analysis method using three-dimensional direct response matrix

    International Nuclear Information System (INIS)

    Ishii, Kazuya; Hino, Tetsushi; Mitsuyasu, Takeshi; Aoyama, Motoo

    2009-01-01

    A three-dimensional direct response matrix method using a Monte Carlo calculation has been developed. The direct response matrix is formalized by four subresponse matrices in order to respond to a core eigenvalue k and thus can be recomposed at each outer iteration in core analysis. The subresponse matrices can be evaluated by ordinary single fuel assembly calculations with the Monte Carlo method in three dimensions. Since these subresponse matrices are calculated for the actual geometry of the fuel assembly, the effects of intra- and inter-assembly heterogeneities can be reflected on global partial neutron current balance calculations in core analysis. To verify this method, calculations for heterogeneous systems were performed. The results obtained using this method agreed well with those obtained using direct calculations with a Monte Carlo method. This means that this method accurately reflects the effects of intra- and inter-assembly heterogeneities and can be used for core analysis. A core analysis method, in which neutronic calculations using this direct response matrix method are coupled with thermal-hydraulic calculations, has also been developed. As it requires neither diffusion approximation nor a homogenization process of lattice constants, a precise representation of the effects of neutronic heterogeneities is possible. Moreover, the fuel rod power distribution can be directly evaluated, which enables accurate evaluations of core thermal margins. A method of reconstructing the response matrices according to the condition of each node in the core has been developed. The test revealed that the neutron multiplication factors and the fuel rod neutron production rates could be reproduced by interpolating the elements of the response matrix. A coupled analysis of neutronic calculations using the direct response matrix method and thermal-hydraulic calculations for an ABWR quarter core was performed, and it was found that the thermal power and coolant

  3. Two Dimensional Complex Wavenumber Dispersion Analysis using B-Spline Finite Elements Method

    Directory of Open Access Journals (Sweden)

    Y. Mirbagheri

    2016-01-01

    Full Text Available Grid dispersion is one of the criteria for validating the finite element method (FEM) in simulating acoustic or elastic wave propagation. The difficulty that usually arises when using this method for the simulation of wave propagation problems is rooted in the discontinuous field, which causes the magnitude and the direction of the wave speed vector to vary from one element to the adjacent one. To solve this problem and improve the response accuracy, two approaches are usually suggested: changing the integration method and changing the shape functions. Finite element isogeometric analysis (IGA) is used in this research. In IGA, B-spline or non-uniform rational B-spline (NURBS) functions are used, which improve the response accuracy, especially in one-dimensional structural dynamics problems. At the boundary of two adjacent elements, the degree of continuity of the shape functions used in IGA can be higher than zero. In this research, for the first time, a two-dimensional grid dispersion analysis for wave propagation in plane strain problems using B-spline FEM is presented. Results indicate that, for the same number of degrees of freedom, the grid dispersion of B-spline FEM is about half the grid dispersion of classic FEM.

  4. Three-dimensional method for integrated transient analysis of reactor-piping systems

    International Nuclear Information System (INIS)

    Wang, C.Y.

    1981-01-01

    A three-dimensional method for integrated hydrodynamic, structural, and thermal analyses of reactor-piping systems is presented. The hydrodynamics are analyzed in a reference frame fixed to the piping and are treated with a two-dimensional Eulerian finite-difference technique. The structural responses are calculated with a three-dimensional co-rotational finite-element methodology. Interaction between fluid and structure is accounted for by iteratively enforcing the interface boundary conditions

  5. A three-dimensional viscous/potential flow interaction analysis method for multi-element wings

    Science.gov (United States)

    Dvorak, F. A.; Woodward, F. A.; Maskew, B.

    1977-01-01

    An analysis method and computer program were developed for the calculation of the viscosity-dependent aerodynamic characteristics of multi-element, finite wings in incompressible flow. A fully three-dimensional potential flow program is used to determine the inviscid pressure distribution about the configuration. The potential flow program uses surface source and vortex singularities to represent the inviscid flow. The method is capable of analysing configurations having at most one slat, a main element, and two slotted flaps. Configurations are limited to full-span slats or flaps. The configuration wake is allowed to relax as a force-free wake, although roll-up is not allowed at this time. Once the inviscid pressure distribution is calculated, a series of boundary layer computations are made along streamwise strips.

  6. Material Flow Analysis in Indentation by Two-Dimensional Digital Image Correlation and Finite Elements Method.

    Science.gov (United States)

    Bermudo, Carolina; Sevilla, Lorenzo; Castillo López, Germán

    2017-06-21

    The present work shows the material flow analysis in indentation by the numerical two-dimensional Finite Element Method (FEM) and the experimental two-dimensional Digital Image Correlation (DIC) method. To achieve deep indentation without cracking, a ductile material, 99% tin, is used. The results obtained from the DIC technique depend predominantly on the pattern conferred to the samples. Due to the absence of a natural pattern, black and white spray painting is used for greater contrast. The stress-strain curve of the material has been obtained and introduced in the Finite Element simulation code used, DEFORM™, allowing for accurate simulations. Two different 2D models have been used: a plane strain model to obtain the load curve and a plane stress model to evaluate the strain maps on the workpiece surface. The indentation displacement load curve has been compared between the FEM and the experimental results, showing a good correlation. Additionally, the strain maps obtained from the material surface with FEM and DIC are compared in order to validate the numerical model. The Von Mises strain results between both of them present a 10-20% difference. The results show that FEM is a good tool for simulating indentation processes, allowing for the evaluation of the maximum forces and deformations involved in the forming process. Additionally, the non-contact DIC technique shows its potential by measuring the superficial strain maps, validating the FEM results.

  7. Hybrid dimensionality reduction method based on support vector machine and independent component analysis.

    Science.gov (United States)

    Moon, Sangwoo; Qi, Hairong

    2012-05-01

    This paper presents a new hybrid dimensionality reduction method to seek projection through optimization of both structural risk (supervised criterion) and data independence (unsupervised criterion). Classification accuracy is used as a metric to evaluate the performance of the method. By minimizing the structural risk, projection originated from the decision boundaries directly improves the classification performance from a supervised perspective. From an unsupervised perspective, projection can also be obtained based on maximum independence among features (or attributes) in data to indirectly achieve better classification accuracy over more intrinsic representation of the data. Orthogonality interrelates the two sets of projections such that minimum redundancy exists between the projections, leading to more effective dimensionality reduction. Experimental results show that the proposed hybrid dimensionality reduction method that satisfies both criteria simultaneously provides higher classification performance, especially for noisy data sets, in relatively lower dimensional space than various existing methods.

  8. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify the genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, involves difficulty in setting the overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A larger number of SNP-SNP pairs than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a following variable selection procedure in the SIS method suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using the cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
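    The screening step of SIS can be sketched as ranking predictors by absolute marginal correlation with the response and keeping only the top-ranked ones. The data and function name below are invented for illustration; the EPISIS implementation described above applies this idea to SNP-SNP interaction terms in logistic regression rather than raw columns.

```python
import math

def sis_screen(X, y, keep):
    """Sure-independence-screening sketch: rank predictor columns by
    |marginal Pearson correlation| with y and keep the top `keep`."""
    n = len(y)
    my = sum(y) / n
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        sx = math.sqrt(sum((v - mx) ** 2 for v in col))
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        r = 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)
        scores.append((abs(r), j))
    return [j for _, j in sorted(scores, reverse=True)[:keep]]

# Column 1 tracks y; column 0 is noise-like and column 2 is constant.
X = [[5, 2, 7], [1, 4, 7], [4, 6, 7], [2, 8, 7]]
y = [2.1, 4.0, 6.2, 7.9]
print(sis_screen(X, y, keep=1))  # [1]
```

    Screening reduces an ultrahigh-dimensional problem to a size where a conventional variable selection procedure can finish the job, which is the two-stage structure the abstract describes.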

  9. Two-dimensional transient thermal analysis of a fuel rod by finite volume method

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Rhayanne Yalle Negreiros; Silva, Mário Augusto Bezerra da; Lira, Carlos Alberto de Oliveira, E-mail: ryncosta@gmail.com, E-mail: mabs500@gmail.com, E-mail: cabol@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear

    2017-07-01

    One of the greatest concerns when studying a nuclear reactor is the guarantee of safe temperature limits throughout the system at all times. The preservation of the core structure, along with the confinement of radioactive material within a controlled system, is the main focus during the operation of a reactor. The purpose of this paper is to present the temperature distribution for a nominal channel of the AP1000 reactor developed by Westinghouse Co. during steady-state and transient operations. In the analysis, the system was subjected to normal operating conditions and then to blockages of the coolant flow. The time necessary to achieve a new safe stationary state (when this was possible) is presented. The methodology applied in this analysis was based on a two-dimensional survey accomplished by the application of the Finite Volume Method (FVM). A steady solution is obtained and compared with an analytical analysis that disregards axial heat transport, to determine its relevance. The results show the importance of considering axial heat transport in this type of study. A transient analysis shows the behavior of the system when submitted to coolant blockage at the channel's entrance. Three blockages were simulated (10%, 20% and 30%), and the results show that, for a nominal channel, the system can still be considered safe (there is no bubble formation up to that point). (author)
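
The explicit finite-volume time marching used for such transient conduction problems can be illustrated in one dimension (the paper's model is two-dimensional and coupled to the coolant; the geometry, diffusivity and boundary temperatures below are placeholders, not AP1000 data):

```python
def fvm_transient_1d(nx=20, L=1.0, alpha=1.0, T_left=600.0, T_right=300.0,
                     steps=5000):
    """Explicit finite-volume march of 1-D transient conduction
    dT/dt = alpha * d2T/dx2 with fixed wall temperatures."""
    dx = L / nx
    dt = 0.4 * dx * dx / alpha          # respects the explicit stability limit r <= 0.5
    r = alpha * dt / dx ** 2
    T = [T_right] * nx                  # initial field
    for _ in range(steps):
        Tn = T[:]
        for i in range(nx):
            Tw = T_left if i == 0 else T[i - 1]     # west neighbor / boundary
            Te = T_right if i == nx - 1 else T[i + 1]  # east neighbor / boundary
            Tn[i] = T[i] + r * (Tw - 2.0 * T[i] + Te)  # volume energy balance
        T = Tn
    return T

T = fvm_transient_1d()  # marches to the steady linear profile
```

Run long enough, the march converges to the steady conduction solution, which for these boundary conditions is linear between the two wall temperatures.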

  10. Analysis of one-dimensional nonequilibrium two-phase flow using control volume method

    International Nuclear Information System (INIS)

    Minato, Akihiko; Naitoh, Masanori

    1987-01-01

    A one-dimensional numerical analysis model was developed for the prediction of rapid flow transient behavior involving boiling. This model was based on six conservation equations of time-averaged parameters of gas and liquid behavior. These equations were solved by using a control volume method with explicit time integration. This model did not use the staggered mesh scheme that had been commonly used in two-phase flow analysis. Because the void fraction and the velocity of each phase were defined at the same location in the present model, the effects of void fraction on phase velocity calculation were treated directly, without interpolation. Though a non-staggered mesh scheme is liable to cause numerical instability with a zigzag pressure field, stability was achieved by employing the Godunov method. In order to verify the present analytical model, Edwards' pipe blowdown and Zaloudek's initially subcooled critical two-phase flow experiments were analyzed. Stable solutions were obtained for rarefaction wave propagation with boiling and for transient two-phase flow behavior in a broken pipe by using this model. (author)
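
The stabilizing idea, upwind (Godunov-type) interface fluxes on a non-staggered grid, can be shown for the simplest model equation, linear advection. This is only a cartoon of the flux treatment, not the authors' six-equation two-phase model:

```python
def godunov_advect(u, a, dx, dt, nsteps):
    """First-order Godunov (upwind) update for u_t + a*u_x = 0 on a periodic
    grid: the interface flux is taken from the upwind side, which damps the
    zigzag (odd-even) modes that plague non-staggered meshes."""
    c = a * dt / dx                  # Courant number
    assert 0 <= c <= 1, "explicit stability limit"
    n = len(u)
    for _ in range(nsteps):
        # u[i-1] wraps to u[-1] for i = 0, giving a periodic domain
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return u

n = 40
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]   # square pulse
u = godunov_advect(u0, a=1.0, dx=1.0, dt=1.0, nsteps=5)  # c = 1: exact shift
```

At Courant number 1 the scheme reproduces the exact solution (a pure shift); at smaller Courant numbers the upwind flux introduces the controllable artificial viscosity that keeps the non-staggered solution smooth.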

  11. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Directory of Open Access Journals (Sweden)

    Huanhuan Li

    2017-08-01

    Full Text Available The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix. In particular, the top k principal components with an accumulative contribution rate above 95% are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our

  12. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    Science.gov (United States)

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix. In particular, the top k principal components with an accumulative contribution rate above 95% are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with
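
The first two steps of the pipeline, pairwise DTW distances assembled into a distance matrix, might look like this (short 1-D toy sequences stand in for AIS trajectories; the later PCA and center-selection steps are not shown):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    the classic O(len(a)*len(b)) dynamic program."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# toy "trajectories" of unequal length (real AIS tracks are 2-D position series)
trajs = [[0, 1, 2, 3], [0, 1, 1, 2, 3], [5, 6, 7, 8]]
# steps 1-2: pairwise DTW distances form a symmetric distance matrix
M = [[dtw(p, q) for q in trajs] for p in trajs]
```

Note that DTW tolerates different sampling lengths (the second trajectory repeats a point yet is at distance zero from the first), which is exactly why it suits irregularly sampled AIS tracks better than pointwise Euclidean distance.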

  13. The analysis of carbohydrates in milk powder by a new "heart-cutting" two-dimensional liquid chromatography method.

    Science.gov (United States)

    Ma, Jing; Hou, Xiaofang; Zhang, Bing; Wang, Yunan; He, Langchong

    2014-03-01

    In this study, a new "heart-cutting" two-dimensional liquid chromatography method for the simultaneous determination of carbohydrate contents in milk powder is presented. In this two-dimensional liquid chromatography system, a Venusil XBP-C4 analysis column was used in the first dimension ((1)D) as a pre-separation column, and a ZORBAX carbohydrate analysis column was used in the second dimension ((2)D) as a final-analysis column. The whole process was completed in less than 35 min without a particular sample preparation procedure. The capability of the new two-dimensional HPLC method was demonstrated in the determination of carbohydrates in various brands of milk powder samples. A conventional one-dimensional chromatography method was also proposed. The two proposed methods were both validated in terms of linearity, limits of detection, accuracy and precision. The comparison between the results obtained with the two methods showed that the new and completely automated two-dimensional liquid chromatography method is more suitable for milk powder samples because of the online cleanup effect involved. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  14. Application of equivalent elastic methods in three-dimensional finite element structural analysis

    International Nuclear Information System (INIS)

    Jones, D.P.; Gordon, J.L.; Hutula, D.N.; Holliday, J.E.; Jandrasits, W.G.

    1998-02-01

    This paper describes use of equivalent solid (EQS) modeling to obtain efficient solutions to perforated material problems using three-dimensional finite element analysis (3D-FEA) programs. It is shown that the accuracy of EQS methods in 3D-FEA depends on providing sufficient equivalent elastic properties to allow the EQS material to respond according to the elastic symmetry of the pattern. Peak stresses and ligament stresses are calculated from the EQS stresses by an appropriate 3D-FEA submodel approach. The method is demonstrated on the problem of a transversely pressurized simply supported plate with a central divider lane separating two perforated regions with circular penetrations arranged in a square pattern. A 3D-FEA solution for a model that incorporates each penetration explicitly is used for comparison with results from an EQS solution for the plate. Results for deflection and stresses from the EQS solution are within 3% of results from the explicit 3D-FE model. A solution to the sample problem is also provided using the procedures in the ASME B and PV Code. The ASME B and PV Code formulas for plate deflection were shown to overestimate the stiffening effects of the divider lane and the outer stiffening ring

  15. Clinical-oriented Three-dimensional Gait Analysis Method for Evaluating Gait Disorder.

    Science.gov (United States)

    Mukaino, Masahiko; Ohtsuka, Kei; Tanikawa, Hiroki; Matsuda, Fumihiro; Yamada, Junya; Itoh, Norihide; Saitoh, Eiichi

    2018-03-04

    Three-dimensional gait analysis (3DGA) has been shown to be a useful clinical tool for the evaluation of gait abnormality due to movement disorders. However, the use of 3DGA in actual clinics remains uncommon. Possible reasons include the time-consuming measurement process and difficulties in understanding measurement results, which are often presented using a large number of graphs. Here we present a clinician-friendly 3DGA method developed to facilitate the clinical use of 3DGA. This method consists of simplified preparation and measurement processes that can be performed in a short time period in clinical settings, and an intuitive presentation to facilitate clinicians' understanding of the results. The quick, simplified measurement procedure is achieved by the use of a minimal marker set and measurement of patients on a treadmill. To facilitate clinician understanding, results are presented in figures based on the clinician's perspective. A Lissajous overview picture (LOP), which shows the trajectories of all markers from a holistic viewpoint, is used to facilitate intuitive understanding of gait patterns. Abnormal gait pattern indices, which are based on clinicians' perspectives in gait evaluation and standardized using the data of healthy subjects, are used to evaluate the extent of typical abnormal gait patterns in stroke patients. A graph depicting the analysis of the toe clearance strategy, which shows how patients rely on normal and compensatory strategies to achieve toe clearance, is also presented. These methods could facilitate the implementation of 3DGA in clinical settings and further encourage the development of measurement strategies from the clinician's point of view.

  16. ANALYSIS OF IMPACT ON COMPOSITE STRUCTURES WITH THE METHOD OF DIMENSIONALITY REDUCTION

    Directory of Open Access Journals (Sweden)

    Valentin L. Popov

    2015-04-01

    Full Text Available In the present paper, we discuss the impact of rigid profiles on continua with non-local criteria for plastic yield. For the important case of media whose hardness is inversely proportional to the indentation radius, we suggest a rigorous treatment based on the method of dimensionality reduction (MDR and study the example of indentation by a conical profile.
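
In the purely elastic limit, the MDR recipe for a cone can be sketched: the 3-D profile f(r) = r·tan(θ) is transformed into the 1-D profile g(x) = (π/2)·tan(θ)·|x| and pressed into a bed of independent springs of stiffness E* per unit length. The numeric values below are arbitrary; the analytic check is the classical (Sneddon) cone result F = 2·E*·d²/(π·tan θ), which MDR reproduces exactly:

```python
import math

def mdr_cone_force(d, E_star, theta, n=20000):
    """Method of dimensionality reduction (MDR) for a rigid cone f(r)=r*tan(theta):
    press the equivalent 1-D profile g(x) = (pi/2)*tan(theta)*|x| into an elastic
    foundation of independent springs (stiffness E* per unit length) and sum
    the spring forces over the contact region."""
    a = 2.0 * d / (math.pi * math.tan(theta))   # contact radius from g(a) = d
    dx = 2.0 * a / n
    F = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * dx                 # midpoint of spring cell i
        u = d - (math.pi / 2.0) * math.tan(theta) * abs(x)  # spring compression
        F += E_star * u * dx
    return F

F = mdr_cone_force(d=1e-3, E_star=1e9, theta=math.radians(10))
F_exact = 2.0 * 1e9 * (1e-3) ** 2 / (math.pi * math.tan(math.radians(10)))
```

The paper's subject is the inelastic case with a non-local yield criterion (hardness inversely proportional to the indentation radius); the elastic sketch above only shows the 1-D foundation machinery that MDR builds on.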

  17. Dimensionality reduction methods for molecular simulations

    OpenAIRE

    Doerr, Stefan; Ariz-Extreme, Igor; Harvey, Matthew J.; De Fabritiis, Gianni

    2017-01-01

    Molecular simulations produce very high-dimensional data-sets with millions of data points. As analysis methods are often unable to cope with so many dimensions, it is common to use dimensionality reduction and clustering methods to reach a reduced representation of the data. Yet these methods often fail to capture the most important features necessary for the construction of a Markov model. Here we demonstrate the results of various dimensionality reduction methods on two simulation data-set...
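
A common baseline among these dimensionality reduction methods is plain PCA on the centered coordinate matrix of the trajectory. A small sketch on synthetic data with two dominant collective directions (all sizes and names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "trajectory": 1000 frames of 10 coordinates, with variance concentrated
# in two directions (a stand-in for slow collective motions)
slow = rng.normal(size=(1000, 2)) * [5.0, 3.0]
mix = rng.normal(size=(2, 10))
X = slow @ mix + 0.1 * rng.normal(size=(1000, 10))

Xc = X - X.mean(axis=0)                 # center each coordinate
C = Xc.T @ Xc / (len(Xc) - 1)           # sample covariance matrix
evals, evecs = np.linalg.eigh(C)        # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]         # reorder to descending variance
Y = Xc @ evecs[:, order[:2]]            # project frames onto the top-2 PCs

explained = evals[order[:2]].sum() / evals.sum()
```

The caveat raised by the abstract applies here: the directions of largest variance are not necessarily the kinetically relevant ones for a Markov model, which is why time-lagged methods are often preferred for simulation data.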

  18. One-dimensional finite-elements method for the analysis of whispering gallery microresonators.

    Science.gov (United States)

    Bagheri-Korani, Ebrahim; Mohammad-Taheri, Mahmoud; Shahabadi, Mahmoud

    2014-07-01

    By taking advantage of the axial symmetry of planar whispering gallery microresonators, the three-dimensional (3D) problem of the resonator is reduced to a two-dimensional (2D) one; thus, only the cross section of the resonator needs to be analyzed. Then, the proposed formulation, which works based on a combination of the finite-elements method (FEM) and a Fourier expansion of the fields, can be applied to the 2D problem. First, the axial field variation is expressed in terms of a Fourier series. Then, the FEM is applied to the radial field variation. This formulation yields an eigenvalue problem with sparse matrices that can be solved using well-known numerical techniques. The method takes into account both the radiation loss and the dielectric loss; hence, it works efficiently for both high-order and low-order modes. The efficiency of the method was investigated by comparison of the results with those of commercial software.

  19. Study on the application of dimensionality reduction method on reliability and reliability sensitivity analysis of random vibration systems

    Directory of Open Access Journals (Sweden)

    Haochuan Li

    2015-10-01

    Full Text Available The application of a dimensionality reduction method to the reliability and reliability sensitivity analysis of vibration systems was examined. The dimensionality of a vibration reliability problem was reduced to only two independent dimensions by means of a polar transformation. As the safe and failure classes of samples are clearly distinguishable and occupy a standard position in a two-dimensional plot when an important direction exists, the relevant samples can be selected visually. The reliability and reliability sensitivity problems were solved using the positions of the relevant samples. Before the reliability and reliability sensitivity estimation, an essential visualization analysis is conducted to examine the applicability of the method in terms of the existence of the important direction, which is used to reduce the dimensionality of the reliability problem. In order to calculate the design point correctly, the improved Hasofer–Lind–Rackwitz–Fiessler method was extended with a procedure for determining the perturbation in the calculation of the gradient vector by finite differences according to the numerical precision of the limit state function. The dimensionality reduction method greatly reduces the number of limit state function calls while achieving the same accuracy as the Monte Carlo method. Examples involving single and multiple degree-of-freedom nonlinear vibration systems were used to demonstrate the approach.

  20. Two-dimensional cancellation neglect a review and suggested method of analysis.

    Science.gov (United States)

    Mark, V W; Monson, N

    1997-09-01

    Directional bias on cancellation has thus far not been standardized. While cancellation tasks are primarily used to assess lateral performance asymmetries, they may also reveal two-dimensional (i.e., combined lateral and radial) neglect patterns. We propose a method to evaluate and report cancellation neglect regardless of whether the neglect pattern is strictly unilateral or two-dimensional. Our method establishes the location of the geographic center of all neglected stimuli relative to the page center by averaging their Cartesian coordinates. This "neglect center" is reported in polar coordinates to indicate its distance and direction from the page center. We apply our method to published examples of two-dimensional neglect. We find that neglect centers from different cancellation performances may not be statistically distinct even though they may occupy different quadrants. In addition, the net direction of neglect found by the coordinate method may differ from that inferred from measuring differences in quadrant omission totals. The suitability of the coordinate vs. the quadrant method will depend on the mechanism hypothesized for visuospatial exploration under particular test conditions. Using both approaches may detect different attentional biases operating during the same task. The coordinate method is appropriate for conventional cancellation testing. By incorporating the precise locations of all neglected stimuli and determining the net neglect direction in two dimensions, the technique may stimulate more comprehensive explanations for directional bias.
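
The proposed coordinate method reduces to averaging the Cartesian coordinates of the missed stimuli and reporting the result in polar form relative to the page center. A minimal sketch (the coordinate and angle conventions are assumed for illustration, not taken from the paper):

```python
import math

def neglect_center(missed, page_center=(0.0, 0.0)):
    """Geographic center of the neglected (missed) stimuli, reported in polar
    coordinates (distance, direction in degrees) relative to the page center."""
    cx = sum(x for x, _ in missed) / len(missed) - page_center[0]
    cy = sum(y for _, y in missed) / len(missed) - page_center[1]
    r = math.hypot(cx, cy)
    angle = math.degrees(math.atan2(cy, cx))  # 0 deg = rightward, 90 deg = upward
    return r, angle

# e.g. all missed targets in the upper-left quadrant of the page
r, ang = neglect_center([(-4.0, 2.0), (-2.0, 4.0), (-3.0, 3.0)])
```

Here the neglect center lands up and to the left of the page center (135 degrees), i.e., a combined lateral and radial (two-dimensional) neglect pattern rather than a purely lateral one.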

  1. Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations

    Science.gov (United States)

    Casulli, V.; Cheng, R.T.

    1990-01-01

    In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses.
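
The "single tridiagonal system per time step" of the semi-implicit scheme is typically solved with the Thomas algorithm, an O(n) specialization of Gaussian elimination. A generic sketch (the coefficients form a made-up diagonally dominant example, not the shallow-water matrix):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-diagonal (a[0] unused),
    b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# diagonally dominant test system whose exact solution is x = [1, 1, 1, 1]
a = [0.0, -1.0, -1.0, -1.0]
b = [4.0, 4.0, 4.0, 4.0]
c = [-1.0, -1.0, -1.0, 0.0]
x = thomas(a, b, c, [3.0, 2.0, 2.0, 3.0])
```

Because the cost per step is linear in the number of grid points, the semi-implicit scheme keeps its efficiency advantage over explicit methods even at large time steps.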

  2. Interpretation of Spatial data sets and Evaluation of Interpolation methods using Two- dimensional Least-squares Spectral Analysis

    Science.gov (United States)

    Nikkhoo, M.; Goli, M.; Najafi Alamdari, M.; Naeimi, M.

    2008-05-01

    Two-dimensional spectral analysis of spatial data is known as a handy tool for illustrating such data in the frequency domain in all earth science disciplines. Conventional methods of spectral analysis (i.e., the Fourier method) require an equally spaced data set, which is, however, rarely available in practice. In this paper we develop the least-squares spectral analysis in two dimensions. The method was originally proposed by Vanicek in 1969 to be applied to one-dimensional irregularly sampled data. Applying this method to two-dimensional irregularly sampled data also results in an undistorted power spectrum, since no interpolation process is encountered during its computation. As a case study, the two-dimensional spectrum of GPS leveling data over North America was computed, as well as the spectrum of geoid undulations derived from the EIGEN-GL04C model. The derived spectra of the two data sets show a very good fit at long and medium wavelengths. We also computed the power spectrum of gravity anomalies over North America and compared it with spectra derived from data interpolated by different methods. The spectral behavior of these interpolation methods is discussed as well.
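
The one-dimensional Vanicek least-squares spectrum, the building block being extended here to two dimensions, fits a sine-cosine pair at each trial frequency directly to the irregular samples, so no interpolation is needed. A toy sketch (the signal and frequency grid are invented for the demo):

```python
import math
import random

def ls_spectrum(t, y, freqs):
    """Least-squares (Vanicek-style) spectrum for irregularly sampled data:
    at each trial frequency f, fit y = A*cos(2*pi*f*t) + B*sin(2*pi*f*t) by
    least squares and report the fraction of signal power it explains."""
    power = []
    yy = sum(v * v for v in y)                 # total power
    for f in freqs:
        cc = ss = cs = yc = ys = 0.0
        for ti, yi in zip(t, y):
            c, s = math.cos(2 * math.pi * f * ti), math.sin(2 * math.pi * f * ti)
            cc += c * c; ss += s * s; cs += c * s
            yc += yi * c; ys += yi * s
        det = cc * ss - cs * cs
        A = (yc * ss - ys * cs) / det          # 2x2 normal equations
        B = (ys * cc - yc * cs) / det
        power.append((A * yc + B * ys) / yy)   # explained / total power
    return power

random.seed(1)
t = sorted(random.uniform(0.0, 10.0) for _ in range(300))  # irregular sampling
y = [math.sin(2 * math.pi * 1.3 * ti) for ti in t]         # tone at 1.3 cycles/unit
freqs = [0.1 * k for k in range(1, 31)]
p = ls_spectrum(t, y, freqs)
best = freqs[p.index(max(p))]                              # peak of the spectrum
```

The spectrum peaks at the true frequency even though the samples are unevenly spaced, which is exactly the property that makes the method attractive for scattered GPS leveling or gravity data.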

  3. Three dimensional thermal hydraulic characteristic analysis of reactor core based on porous media method

    International Nuclear Information System (INIS)

    Chen, Ronghua; Tian, Maolin; Chen, Sen; Tian, Wenxi; Su, G.H.; Qiu, Suizheng

    2017-01-01

    Highlights: • This study constructed a full CFD model for the RPV of a PWR. • The reactor core was simplified using the porous model in CFX. • The CFX simulation result was in good agreement with the scaled test and design values. • An analysis of the SGTR accident was performed. - Abstract: Thermal-hydraulic performance in the reactor core is an essential factor in nuclear power plant design. In this study, we analyzed the three-dimensional (3-D) thermal-hydraulic characteristics of a reactor core based on the porous media method. First, a 3-D reactor pressure vessel (RPV) model was built, including the inlet leg nozzle, downcomer, lower plenum, reactor core, upper plenum and outlet leg nozzle. A porous media model was used to simplify the reactor core and upper plenum. The commercial CFD code ANSYS CFX was employed to solve the governing equations and provide the 3-D local velocity, temperature and pressure fields. After appropriate parameters and a turbulence model were carefully selected, the simulation was validated against the 1:5 scaled steady-state hydraulic test. The predicted hydraulic parameters (normalized flowrate distribution and pressure drop) were in good agreement with the test results, and the predicted thermal parameters agreed well with the design values. The validation indicated that this method is practicable for analyzing 3-D thermal-hydraulic phenomena in the RPV. Finally, the thermal-hydraulic features in the reactor core were analyzed under the conditions of the Steam Generator Tube Rupture (SGTR) accident. The simulation results showed that, in the accident, the coolant temperature increased gradually from the center to the periphery of the reactor core, but the temperature decreased to a safe level rapidly after reactor shutdown and safety injection operation. The reactor core can remain in a safe state if appropriate safety operations are performed after the accident.

  4. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    Science.gov (United States)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.

  5. Comparison of alveolar ridge preservation method using three dimensional micro-computed tomographic analysis and two dimensional histometric evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Seok; Kim, Sung Tae; Oh, Seung Hee; Park, Hee Jung; Lee, Sophia; Kim, Taeil; Lee, Young Kyu; Heo, Min Suk [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2014-06-15

    This study evaluated the efficacy of alveolar ridge preservation methods with and without primary wound closure and the relationship between histometric and micro-computed tomographic (CT) data. Porcine hydroxyapatite with polytetrafluoroethylene membrane was implanted into a canine extraction socket. The density of the total mineralized tissue, remaining hydroxyapatite, and new bone was analyzed by histometry and micro-CT. The statistical association between these methods was evaluated. Histometry and micro-CT showed that the group which underwent alveolar preservation without primary wound closure had significantly higher new bone density than the group with primary wound closure (P<0.05). However, there was no significant association between the data from histometry and micro-CT analysis. These results suggest that alveolar ridge preservation without primary wound closure enhanced new bone formation more effectively than that with primary wound closure. Further investigation is needed with respect to the comparison of histometry and micro-CT analysis.

  6. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    ... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball plate, using calibrated features measured by CMM and using a database of reference values – are presented and applied within a case study. The investigation was performed on a dose engine component of an insulin pen, for which several dimensional measurands were defined. The component has a complex...

  7. A hybrid method for quasi-three-dimensional slope stability analysis in a municipal solid waste landfill.

    Science.gov (United States)

    Yu, L; Batlle, F

    2011-12-01

    Limited space for accommodating the ever-increasing mounds of municipal solid waste (MSW) demands that the capacity of MSW landfills be maximized by building landfills to greater heights with steeper slopes. This situation has raised concerns regarding the stability of high MSW landfills. A hybrid method for quasi-three-dimensional slope stability analysis based on finite element stress analysis was applied in a case study at an MSW landfill in north-east Spain. Potential slides can be assumed to be located within the waste mass due to the lack of weak foundation soils and geosynthetic membranes at the landfill base. The only triggering factors of deep-seated slope failure are the elevated leachate level and the relatively high and steep slope at the front. The valley-shaped geometry and layered construction procedure at the site make three-dimensional slope stability analyses necessary for this landfill. In the finite element stress analysis, variations of the leachate level during construction and continuous settlement of the landfill were taken into account. The "equivalent" three-dimensional factor of safety (FoS) was computed from the individual results of the two-dimensional analyses for a series of evenly spaced cross sections within the potential sliding body. Results indicate that the hybrid method for quasi-three-dimensional slope stability analysis adopted in this paper is capable of roughly locating the spatial position of the potential sliding mass. This easy-to-use method can serve as an engineering tool in the preliminary estimate of the FoS as well as of the approximate position and extent of the potential sliding mass. The finding that the FoS obtained from three-dimensional analysis increases by as much as 50% compared to that from two-dimensional analysis implies the significance of the three-dimensional effect for this case study. Influences of shear parameters, time elapsed after landfill closure, leachate level, as well as unit weight of waste on the FoS were also

  8. Refined fourier-transform method of analysis of full two-dimensional digitized interferograms.

    Science.gov (United States)

    Lovrić, Davorin; Vucić, Zlatko; Gladić, Jadranko; Demoli, Nazif; Mitrović, Slobodan; Milas, Mirko

    2003-03-10

    A refined Fourier-transform method of analysis of interference patterns is presented. The refinements include a method of automatic background subtraction and a way of treating the problem of heterodyning. The method proves particularly useful for analysis of long sequences of interferograms.
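
The core of Fourier-transform fringe analysis (the classic Takeda-style approach that this record refines with automatic background subtraction and heterodyning treatment) can be sketched in one dimension with numpy. The carrier frequency, band limits, and test phase below are all invented for the demo:

```python
import numpy as np

def ft_phase(fringe, carrier_bin):
    """Fourier-transform fringe analysis (1-D for brevity): isolate the carrier
    sideband in the spectrum, shift it to zero frequency (removing the carrier,
    i.e. the heterodyning step), and take the angle of the inverse transform
    as the wrapped phase."""
    F = np.fft.fft(fringe)
    n = len(fringe)
    side = np.zeros(n, dtype=complex)
    lo, hi = carrier_bin // 2, 3 * carrier_bin // 2   # band around the carrier
    side[lo:hi + 1] = F[lo:hi + 1]                    # keep one sideband only
    side = np.roll(side, -carrier_bin)                # shift carrier to DC
    return np.angle(np.fft.ifft(side))                # wrapped phase

n, f0 = 512, 32                                       # carrier: 32 cycles/frame
x = np.arange(n) / n
phi = 1.5 * np.sin(2 * np.pi * x)                     # "object" phase to recover
I = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phi)      # interferogram + background
rec = ft_phase(I, f0)
err = np.max(np.abs(rec[40:-40] - phi[40:-40]))       # ignore edge effects
```

Because the uniform background sits at DC and the conjugate sideband sits at negative frequencies, both fall outside the selected band and are rejected automatically; in real interferograms the background is not flat, which is the case the record's refined background subtraction addresses.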

  9. Band Structures Analysis Method of Two-Dimensional Phononic Crystals Using Wavelet-Based Elements

    Directory of Open Access Journals (Sweden)

    Mao Liu

    2017-10-01

    Full Text Available A wavelet-based finite element method (WFEM) is developed to calculate the elastic band structures of two-dimensional phononic crystals (2DPCs), which are composed of square lattices of solid cuboids in a solid matrix. In a unit cell, a new model for band-gap calculation of 2DPCs is constructed using plane elastomechanical elements based on a B-spline wavelet on the interval (BSWI). Substituting the periodic boundary conditions (BCs) and interface conditions, a linear eigenvalue problem dependent on the Bloch wave vector is derived. Numerical examples show that the proposed method performs well for band structure problems when compared with results calculated by traditional FEM. This study also illustrates that the filling fraction, material parameters, and incline angles of a 2DPC structure can change the band-gap width and location.

  10. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    Science.gov (United States)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (or domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode I or pure mode II or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. When applied to a problem of an interface crack between two different materials, the EDI method showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus shows behavior similar to that of the virtual crack closure method for bimaterial problems.

  11. Method for aortic wall strain measurement with three-dimensional ultrasound speckle tracking and fitted finite element analysis.

    Science.gov (United States)

    Karatolios, Konstantinos; Wittek, Andreas; Nwe, Thet Htar; Bihari, Peter; Shelke, Amit; Josef, Dennis; Schmitz-Rixen, Thomas; Geks, Josef; Maisch, Bernhard; Blase, Christopher; Moosdorf, Rainer; Vogt, Sebastian

    2013-11-01

    Aortic wall strains are indicators of biomechanical changes of the aorta due to aging or progressing pathologies such as aortic aneurysm. We investigated the potential of time-resolved three-dimensional ultrasonography coupled with speckle-tracking algorithms and finite element analysis as a novel method for noninvasive in vivo assessment of aortic wall strain. Three-dimensional volume datasets of 6 subjects without cardiovascular risk factors and 2 abdominal aortic aneurysms were acquired with a commercial real time three-dimensional echocardiography system. Longitudinal and circumferential strains were computed offline with high spatial resolution using a customized commercial speckle-tracking software and finite element analysis. Indices for spatial heterogeneity and systolic dyssynchrony were determined for healthy abdominal aortas and abdominal aneurysms. All examined aortic wall segments exhibited considerable heterogenous in-plane strain distributions. Higher spatial resolution of strain imaging resulted in the detection of significantly higher local peak strains (p ≤ 0.01). In comparison with healthy abdominal aortas, aneurysms showed reduced mean strains and increased spatial heterogeneity and more pronounced temporal dyssynchrony as well as delayed systole. Three-dimensional ultrasound speckle tracking enables the analysis of spatially highly resolved strain fields of the aortic wall and offers the potential to detect local aortic wall motion deformations and abnormalities. These data allow the definition of new indices by which the different biomechanical properties of healthy aortas and aortic aneurysms can be characterized. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  12. Application of finite-element method to three-dimensional nuclear reactor analysis

    International Nuclear Information System (INIS)

    Cheung, K.Y.

    1985-01-01

The application of the finite element method to solve realistic one- or two-energy-group, multiregion, three-dimensional static neutron diffusion problems is studied. Linear, quadratic, and cubic serendipity box-shape elements are used. The resulting sets of simultaneous algebraic equations with thousands of unknowns are solved by the conjugate gradient method without forming the large coefficient matrix explicitly, which avoids the complicated data management schemes needed to store such a matrix. Three finite-element computer programs, FEM-LINEAR, FEM-QUADRATIC, and FEM-CUBIC, were developed using the linear, quadratic, and cubic box-shape elements, respectively. They are self-contained, using simple nodal labeling schemes, without the need for separate finite element mesh generation routines. The efficiency and accuracy of these computer programs are then compared among themselves and with other computer codes. The cubic element model is not recommended for practical use because it gives almost identical results to the quadratic model but requires considerably longer computation time. The linear model is less accurate than the quadratic model but requires much shorter computation time. For a large 3-D problem, the linear model is preferred since it gives acceptable accuracy. The quadratic model may be used if improved accuracy is desired
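
The matrix-free strategy described above, applying the operator inside the conjugate gradient iteration instead of assembling and storing the coefficient matrix, can be sketched as follows. The 1-D Laplacian stand-in operator and the problem size are illustrative assumptions, not the actual diffusion operator of the codes:

```python
import numpy as np

def apply_laplacian(x):
    """Apply the tridiagonal stencil [-1, 2, -1] without forming the matrix."""
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b where A is given only as a function, never as a matrix."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

b = np.ones(50)
x = conjugate_gradient(apply_laplacian, b)
print(np.linalg.norm(apply_laplacian(x) - b))  # residual near zero
```

The only storage required is a handful of vectors, which is the point of the approach for systems with thousands of unknowns.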

  13. A computationally efficient hypothesis testing method for epistasis analysis using multifactor dimensionality reduction.

    Science.gov (United States)

    Pattin, Kristine A; White, Bill C; Barney, Nate; Gui, Jiang; Nelson, Heather H; Kelsey, Karl T; Andrew, Angeline S; Karagas, Margaret R; Moore, Jason H

    2009-01-01

Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free data mining method for detecting, characterizing, and interpreting epistasis in the absence of significant main effects in genetic and epidemiologic studies of complex traits such as disease susceptibility. The goal of MDR is to change the representation of the data using a constructive induction algorithm to make nonadditive interactions easier to detect using any classification method, such as naïve Bayes or logistic regression. Traditionally, MDR-constructed variables have been evaluated with a naïve Bayes classifier combined with 10-fold cross-validation to obtain an estimate of the predictive accuracy, or generalizability, of epistasis models, and permutation testing has been used to statistically evaluate the significance of models obtained through MDR. The advantage of permutation testing is that it controls for false positives due to multiple testing. The disadvantage is that permutation testing is computationally expensive, an important issue that arises in the context of detecting epistasis on a genome-wide scale. The goal of the present study was to develop and evaluate several alternatives to large-scale permutation testing for assessing the statistical significance of MDR models. Using data simulated from 70 different epistasis models, we compared the power and type I error rate of MDR using a 1,000-fold permutation test with hypothesis testing using an extreme value distribution (EVD). We find that this new hypothesis testing method provides a reasonable alternative to the computationally expensive 1,000-fold permutation test and is 50 times faster. We then demonstrate the new method by applying it to a genetic epidemiology study of bladder cancer susceptibility that was previously analyzed using MDR and assessed using a 1,000-fold permutation test.
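
The EVD idea can be illustrated with a small sketch. The study fits an extreme value distribution to a small number of permutation results instead of running the full 1,000-fold permutation test; this hedged example uses a Gumbel (Type I EVD) fitted by the method of moments to simulated permutation maxima, which may differ from the authors' exact fitting procedure, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for testing-accuracy maxima from a small number of permutations.
perm_maxima = rng.normal(loc=0.55, scale=0.02, size=20)

# Method-of-moments Gumbel fit to the permutation maxima.
scale = np.std(perm_maxima) * np.sqrt(6) / np.pi
loc = np.mean(perm_maxima) - 0.5772 * scale  # Euler-Mascheroni constant

def gumbel_pvalue(observed):
    """P(max accuracy under the null >= observed) under the fitted Gumbel."""
    z = (observed - loc) / scale
    return 1.0 - np.exp(-np.exp(-z))

print(gumbel_pvalue(0.70))  # far in the upper tail -> very small p
print(gumbel_pvalue(0.50))  # below the null mean -> p close to 1
```

The saving is exactly the one described in the abstract: a handful of permutations to fit the distribution, rather than 1,000 to build an empirical null.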

  14. Seismic response analysis of soil-structure interactive system using a coupled three-dimensional FE-IE method

    International Nuclear Information System (INIS)

    Ryu, Jeong-Soo; Seo, Choon-Gyo; Kim, Jae-Min; Yun, Chung-Bang

    2010-01-01

This paper proposes new three-dimensional radial-shaped dynamic infinite elements fully coupled to finite elements for the analysis of a soil-structure interaction system in a horizontally layered medium. We then present a seismic analysis technique for a three-dimensional soil-structure interactive system based on the coupled finite-infinite method in the frequency domain. The dynamic infinite elements simulate the unbounded domain with wave functions propagating multi-generated wave components. The accuracy of the dynamic infinite element and the effectiveness of the seismic analysis technique are demonstrated through a typical compliance analysis of a square surface footing, an L-shaped mat concrete footing on a layered soil medium, and two practical seismic analysis tests. The practical analyses are (1) a site response analysis of the well-known Hualien site excited by all travelling wave components (primary, shear, and Rayleigh waves) and (2) the generation of a floor response spectrum of a nuclear power plant. The dynamic results obtained show good agreement with the measured response data and with numerical values from other soil-structure interaction analysis packages.

  15. One dimensional model of martensitic transformation solved by homotopy analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Xuan, Chen; Huo, Yongzhong [Fudan Univ., Shanghai (China). Inst. of Mechanics and Engineering Science; Peng, Cheng [Rice Univ., Houston, TX (United States). Dept. of Mechanical Engineering and Materials Science

    2012-05-15

The homotopy analysis method (HAM) is applied to solve a nonlinear ordinary differential equation describing a phase transition problem in solids. Both bifurcation conditions and analytical solutions are obtained simultaneously for the Euler-Lagrange equation of the martensitic transformation. HAM is capable of providing an analytical expression for the bifurcation condition to judge the occurrence of the phase transition, whereas other numerical techniques have difficulties with bifurcation analysis. The convergence of the analytical solutions can be adjusted by the auxiliary parameter and is obtainable for all relevant physical parameters satisfying the bifurcation condition. (orig.)

  16. Three-dimensional analysis of chevron-notched specimens by boundary integral method

    Science.gov (United States)

    Mendelson, A.; Ghosn, L.

    1983-01-01

The chevron-notched short bar and short rod specimens were analyzed by the boundary integral equation method, which uses boundary surface elements to obtain the solution. The boundary integral models were composed of linear triangular and rectangular surface segments. Results were obtained for two specimens with width-to-thickness ratios of 1.45 and 2.00 and for crack length-to-width ratios ranging from 0.4 to 0.7. Crack opening displacements and stress intensity factors determined from displacement calculations along the crack front and from compliance calculations were compared with experimental values and with finite element analysis.

  17. At-wavelength metrology using the moiré fringe analysis method based on a two dimensional grating interferometer

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hongchang, E-mail: hongchang.wang@diamond.ac.uk [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Berujon, Sebastien [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); European Synchrotron Radiation Facility, BP-220, F-38043 Grenoble (France); Pape, Ian [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Rutishauser, Simon; David, Christian [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); Sawhney, Kawal [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2013-05-11

A two-dimensional (2D) grating interferometer was used to perform at-wavelength metrology. A Fast Fourier Transform (FFT) of the interferograms recovers the differential X-ray beam phase in two orthogonal directions simultaneously. As an example, the X-ray wavefronts downstream from a Fresnel zone plate were measured using the moiré fringe analysis method, which requires only a single image. The rotating shearing interferometer technique for moiré fringe analysis was extended from one dimension to two dimensions to carry out absolute wavefront metrology. In addition, the 2D moiré fringes were extrapolated using Gerchberg's method to reduce the boundary artifacts. The advantages and limitations of the phase-stepping method and the moiré fringe analysis method are also discussed.

  18. At-wavelength metrology using the moiré fringe analysis method based on a two dimensional grating interferometer

    International Nuclear Information System (INIS)

    Wang, Hongchang; Berujon, Sebastien; Pape, Ian; Rutishauser, Simon; David, Christian; Sawhney, Kawal

    2013-01-01

A two-dimensional (2D) grating interferometer was used to perform at-wavelength metrology. A Fast Fourier Transform (FFT) of the interferograms recovers the differential X-ray beam phase in two orthogonal directions simultaneously. As an example, the X-ray wavefronts downstream from a Fresnel zone plate were measured using the moiré fringe analysis method, which requires only a single image. The rotating shearing interferometer technique for moiré fringe analysis was extended from one dimension to two dimensions to carry out absolute wavefront metrology. In addition, the 2D moiré fringes were extrapolated using Gerchberg's method to reduce the boundary artifacts. The advantages and limitations of the phase-stepping method and the moiré fringe analysis method are also discussed.
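
The single-image Fourier fringe analysis at the heart of the method can be sketched in one dimension (Takeda-style): FFT the fringe pattern, isolate one carrier sideband, and read the phase off the inverse transform. The carrier frequency and test phase below are illustrative, not values from the experiment:

```python
import numpy as np

N = 512
x = np.arange(N) / N
f0 = 32                                  # carrier fringes across the field
phi_true = 2.0 * np.sin(2 * np.pi * x)   # slowly varying test phase

# Single fringe image with a linear carrier and the unknown phase.
fringes = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x + phi_true)

spec = np.fft.fft(fringes)
side = np.zeros_like(spec)
side[f0 - 16 : f0 + 16] = spec[f0 - 16 : f0 + 16]   # keep one sideband only
analytic = np.fft.ifft(side)                        # complex fringe signal

# Unwrapped phase minus the carrier recovers the wavefront-induced phase.
phi_rec = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
print(np.max(np.abs(phi_rec - phi_true)))           # small residual
```

The same operation applied along two orthogonal axes of a 2D FFT is what lets the grating interferometer recover both differential phase directions from one image.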

  19. Dimensional Analysis...in Calculus.

    Science.gov (United States)

    Brody, Burt

    1994-01-01

    Discusses using dimensional analysis in beginning calculus to help the students determine if the correct exponents and factors are being used. Suggests that dimensional analysis may be very useful but must be used with care. (MVL)

  20. Dimensional analysis made simple

    International Nuclear Information System (INIS)

    Lira, Ignacio

    2013-01-01

    An inductive strategy is proposed for teaching dimensional analysis to second- or third-year students of physics, chemistry, or engineering. In this strategy, Buckingham's theorem is seen as a consequence and not as the starting point. In order to concentrate on the basics, the mathematics is kept as elementary as possible. Simple examples are suggested for classroom demonstrations of the power of the technique and others are put forward for homework or experimentation, but instructors are encouraged to produce examples of their own. (paper)
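
The endpoint of the strategy, Buckingham's theorem, reduces to linear algebra on the dimensional matrix: the number of dimensionless groups is the dimension of its null space. Here is a minimal sketch for the simple pendulum (the choice of variables is the classic textbook example, not taken from the paper):

```python
import numpy as np

# Rows are base dimensions (M, L, T); columns are the variables of the
# simple pendulum: period t, length l, gravity g, mass m.
D = np.array([
    [0, 0, 0, 1],   # mass
    [0, 1, 1, 0],   # length
    [1, 0, -2, 0],  # time
], dtype=float)

rank = np.linalg.matrix_rank(D)
n_pi = D.shape[1] - rank
print(n_pi)  # number of independent dimensionless groups -> 1

# A basis vector of the null space gives the exponents of the pi group.
_, _, vt = np.linalg.svd(D)
null_vec = vt[-1]
null_vec /= null_vec[0]       # normalise so the exponent of t is 1
print(null_vec)               # exponents (1, -1/2, 1/2, 0): t*sqrt(g/l)
```

The single group t*sqrt(g/l) immediately implies t proportional to sqrt(l/g), with the mass dropping out, which is the kind of classroom demonstration the abstract suggests.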

  1. Evaluation of sample preparation methods for the analysis of papaya leaf proteins through two-dimensional gel electrophoresis.

    Science.gov (United States)

    Rodrigues, Silas Pessini; Ventura, José Aires; Zingali, R B; Fernandes, P M B

    2009-01-01

A variety of sample preparation protocols for plant proteomic analysis using two-dimensional gel electrophoresis (2-DE) have been reported. However, they usually have to be adapted and further optimised for the analysis of plant species not previously studied. This work aimed to evaluate different sample preparation protocols for analysing Carica papaya L. leaf proteins through 2-DE. Four sample preparation methods were tested: (1) phenol extraction and methanol-ammonium acetate precipitation; (2) fractionation without precipitation; and the traditional trichloroacetic acid (TCA)-acetone precipitation either (3) with or (4) without protein fractionation. The samples were analysed for their compatibility with SDS-PAGE (1-DE) and 2-DE. Fifteen selected protein spots were trypsinised and analysed by matrix-assisted laser desorption/ionisation time-of-flight tandem mass spectrometry (MALDI-TOF-MS/MS), followed by a search of the NCBInr database to identify the proteins. Methods 3 and 4 yielded large quantities of protein with good 1-DE separation and were chosen for 2-DE analysis. However, only the TCA method without fractionation (method 4) proved useful. Gains in spot number and resolution were achieved by adding a solubilisation step to the conventional TCA method. Moreover, most of the theoretical and experimental protein molecular weights and pI values were similar, suggesting good focusing and, most importantly, limited protein degradation. The described sample preparation method allows the proteomic analysis of papaya leaves by 2-DE and mass spectrometry (MALDI-TOF-MS/MS), and can serve as a starting point for optimising sample preparation protocols for other plant species.

  2. Development of multi dimensional analysis code for containment safety and performance based on staggered semi-implicit finite volume method

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Hwang, Su Hyun; Han, Tae Young; Lee, Byung Chul; Byun, Choong Sup

    2009-01-01

A solver for a 3-dimensional thermal-hydraulic analysis code for a large building with multiple rooms, such as a reactor containment, was developed based on two-phase, three-field conservation equations. The three fields are gas, continuous liquid, and dispersed drops; the gas field includes steam, air, and hydrogen. A gas momentum equation and an equation of state are also considered, with homogeneous and equilibrium conditions assumed for the gas motion. Source terms related to phase change were expressed explicitly for the implicit scheme. In total, 17 independent equations were set up and 17 primitive unknowns were identified. The numerical scheme follows the FVM (Finite Volume Method) on a staggered orthogonal structured grid with a semi-implicit method. The staggered grid system produces staggered numerical cells: a scalar cell and a vector cell. The porosity method was adopted for easy handling of the complex structures inside a computational cell; this method is known to be very effective in reducing mesh counts while maintaining accurate results. The code was written in C++ using OOP (Object-Oriented Programming), which provides information hiding, encapsulation, modularity, and inheritance, offering developers a clearer and more explicit development method. Classes were designed: Cell and Face, and Volume and Component, are the bases of the largest class, System. A Solver class was designed to run the computation. Sample runs showed physically reasonable results. The foundation of the code was established through this series of numerical developments. (author)

  3. A Two-Dimensional Unsteady Method of Characteristics Analysis of Fluid Flow Problems - MCDU 43

    Science.gov (United States)

    1975-07-01

(Abstract garbled in the scanned source; only fragments are recoverable.) The two-dimensional unsteady method-of-characteristics solution is assessed against, as a standard, the exact solution of Sedov's cylindrical blast-wave problem, with an initial time-plane chosen at a time of 1.591 x 10^-4 (see Ref. F7). Ref. F7: Chou, P.C., and Karpp, R.R., "Solution of Blast Waves by the Method of Characteristics", DIT Report No. 125-7, Drexel Institute of Technology.
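
As a toy illustration of the characteristics idea (not the report's two-dimensional unsteady blast-wave scheme), the linear advection equation u_t + a*u_x = 0 is solved exactly by tracing each point back along its characteristic x - a*t = const; the wave speed and initial profile below are illustrative:

```python
import numpy as np

a = 2.0                         # constant wave speed (illustrative)
u0 = lambda x: np.exp(-x**2)    # initial profile at t = 0

def u(x, t):
    """Trace the characteristic through (x, t) back to the initial line."""
    return u0(x - a * t)

x = np.linspace(-5, 5, 201)
print(np.allclose(u(x, 1.0), u0(x - 2.0)))  # True: profile translated by a*t
```

Nonlinear problems such as blast waves require integrating along curved, solution-dependent characteristics, but the underlying principle is the same propagation of information along characteristic directions.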

  4. Multi-dimensional Fokker-Planck equation analysis using the modified finite element method

    Czech Academy of Sciences Publication Activity Database

    Náprstek, Jiří; Král, Radomil

    2016-01-01

    Roč. 744, č. 1 (2016), č. článku 012177. ISSN 1742-6588. [International Conference on Motion and Vibration Control (MOVIC 2016) /13./ and International Conference on Recent Advances in Structural Dynamics (RASD 2016) /12./. Southampton, 04.07.2016-06.07.2016] R&D Projects: GA ČR(CZ) GP14-34467P; GA ČR(CZ) GA15-01035S Institutional support: RVO:68378297 Keywords : Fokker-Planck equation * finite element method * single degree of freedom systems (SDOF) Subject RIV: JM - Building Engineering http://iopscience.iop.org/article/10.1088/1742-6596/744/1/012177

  5. Method And Apparatus For Two Dimensional Surface Property Analysis Based On Boundary Measurement

    Science.gov (United States)

    Richardson, John G.

    2005-11-15

    An apparatus and method for determining properties of a conductive film is disclosed. A plurality of probe locations selected around a periphery of the conductive film define a plurality of measurement lines between each probe location and all other probe locations. Electrical resistance may be measured along each of the measurement lines. A lumped parameter model may be developed based on the measured values of electrical resistance. The lumped parameter model may be used to estimate resistivity at one or more selected locations encompassed by the plurality of probe locations. The resistivity may be extrapolated to other physical properties if the conductive film includes a correlation between resistivity and the other physical properties. A profile of the conductive film may be developed by determining resistivity at a plurality of locations. The conductive film may be applied to a structure such that resistivity may be estimated and profiled for the structure's surface.
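
The lumped-parameter estimation step can be illustrated with a much-simplified sketch: each probe-to-probe measurement line is modelled as resistances in series through the regions it crosses, so the measured line resistances become a linear system in the unknown regional resistivities. The two-region geometry, path lengths, and resistivity values below are entirely hypothetical:

```python
import numpy as np

true_rho = np.array([1.5, 4.0])   # resistivities of regions A and B (hypothetical)

# Path length of each measurement line through regions A and B
# (unit cross-section assumed, so resistance = sum of rho * length).
paths = np.array([
    [2.0, 0.0],
    [1.0, 1.0],
    [0.5, 1.5],
    [2.0, 1.0],
])

measured = paths @ true_rho       # ideal (noise-free) line resistances

# Least-squares inversion of the lumped model recovers the resistivities.
est_rho, *_ = np.linalg.lstsq(paths, measured, rcond=None)
print(est_rho)                    # recovers the true regional resistivities
```

With more probe locations than unknown regions, the system is overdetermined and the least-squares fit also averages out measurement noise, which is the practical appeal of measuring many boundary lines.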

  6. Feasibility and Accuracy of Automated Software for Transthoracic Three-Dimensional Left Ventricular Volume and Function Analysis: Comparisons with Two-Dimensional Echocardiography, Three-Dimensional Transthoracic Manual Method, and Cardiac Magnetic Resonance Imaging.

    Science.gov (United States)

    Tamborini, Gloria; Piazzese, Concetta; Lang, Roberto M; Muratori, Manuela; Chiorino, Elisa; Mapelli, Massimo; Fusini, Laura; Ali, Sarah Ghulam; Gripari, Paola; Pontone, Gianluca; Andreini, Daniele; Pepi, Mauro

    2017-11-01

Recently, a new automated software package (HeartModel) was developed to obtain three-dimensional (3D) left ventricular (LV) volumes using a model-based algorithm (MBA) with a "one-button" simple system and a user-adjustable slider. The aims of this study were to verify the feasibility and accuracy of the MBA in comparison with other commonly used imaging techniques in a large unselected population, to evaluate possible accuracy improvements from free operator border adjustments or changes in the slider's default position, and to identify differences in method accuracy related to specific pathologies. This prospective study included 200 consecutive patients. LV volumes and ejection fraction were obtained using the MBA and compared with the two-dimensional biplane method, the 3D full-volume (3DFV) modality, and, in 90 of 200 cases, cardiac magnetic resonance (CMR) measurements. To evaluate the optimal position of the slider with respect to the 3DFV and CMR modalities, a set of threefold cross-validation experiments was performed. Optimized and manually corrected LV volumes obtained using the MBA were also tested. Linear correlation and Bland-Altman analysis were used to assess intertechnique agreement. Automatic volumes were feasible in 194 patients (94.5%), with a mean processing time of 29 ± 10 sec. MBA-derived volumes correlated significantly with all evaluated methods, slightly overestimating two-dimensional biplane measurements and slightly underestimating CMR measurements. The highest correlations were found between MBA and 3DFV measurements, with negligible differences in volumes (overestimation) and LV ejection fraction (underestimation). Optimization of the user-adjustable slider position improved the correlation and markedly reduced the bias between the MBA and 3DFV or CMR. The accuracy of MBA volumes was lower in some pathologies owing to incorrect definition of the LV endocardium. The MBA is highly feasible, reproducible, and rapid, and it correlates

  7. Detection of viable myocardium by dobutamine stress tagging magnetic resonance imaging with three-dimensional analysis by automatic trace method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Isao [Yatsu Hoken Hospital, Narashino, Chiba (Japan); Watanabe, Shigeru; Masuda, Yoshiaki

    2000-07-01

The present study attempted to detect myocardial viability by quantitative automatic three-dimensional analysis of the improvement of regional wall motion using a magnetic resonance imaging (MRI) tagging method. Twenty-two subjects with ischemic heart disease who had abnormal wall motion on echocardiography at rest were enrolled. All patients underwent dobutamine stress echocardiography (DSE), coronary arteriography, and left ventriculography. The results were compared with those of 7 normal volunteers. MRI studies were performed with myocardial tagging using the spatial modulation of magnetization technique. Automatic tracing with an original program was performed, and wall motion was compared before and during dobutamine infusion. The evaluation of myocardial viability with MRI and echocardiography gave similar results in 19 (86.4%) of the 22 patients; 20 were studied by positron emission tomography or thallium-201 single photon emission computed tomography for myocardial viability, or for improvement of wall motion following coronary intervention. The sensitivity of dobutamine stress MRI (DSMRI) with tagging was 75.9%, whereas that of DSE was 65.5%. The specificity of DSMRI was 85.7% (6/7) and that of DSE was 100% (7/7). The accuracy of DSMRI was 77.8% (28/36) and that of DSE 72.2% (26/36). DSMRI was thus shown to be superior to DSE for the evaluation of myocardial viability. (author)

  8. Dimensional analysis in field theory

    International Nuclear Information System (INIS)

    Stevenson, P.M.

    1981-01-01

    Dimensional Transmutation (the breakdown of scale invariance in field theories) is reconciled with the commonsense notions of Dimensional Analysis. This makes possible a discussion of the meaning of the Renormalisation Group equations, completely divorced from the technicalities of renormalisation. As illustrations, I describe some very farmiliar QCD results in these terms

  9. Fracture analysis of one-dimensional hexagonal quasicrystals: Researches of a finite dimension rectangular plate by boundary collocation method

    Energy Technology Data Exchange (ETDEWEB)

    Jiaxing, Cheng; Dongfa, Sheng [Southwest Forestry University, Yunnan (China)

    2017-05-15

As an important supplement and development to crystallography, quasicrystal materials have played a core role in many fields, such as manufacturing and the space industry. Because of the sensitivity of quasicrystals to defects, research on the fracture problem of quasicrystals has attracted a great deal of attention. We present a boundary collocation method for fracture problems in a finite rectangular one-dimensional hexagonal quasicrystal plate. Because mode I and mode II problems for one-dimensional hexagonal quasicrystals are like those for classical elastic materials, only the anti-plane problem is discussed in this paper. The correctness of the present numerical method is verified through a comparison of the present results with existing results. The size effects on the stress field, stress intensity factor, and energy release rate are then discussed in detail. The obtained results can provide valuable references for the fracture behavior of quasicrystals.

  10. Impact of an elastic sphere with an elastic half space revisited: numerical analysis based on the method of dimensionality reduction.

    Science.gov (United States)

    Lyashenko, I A; Popov, V L

    2015-02-16

    An impact of an elastic sphere with an elastic half space under no-slip conditions (infinitely large coefficient of friction) is studied numerically using the method of dimensionality reduction. It is shown that the rebound velocity and angular velocity, written as proper dimensionless variables, are determined by a function of only the ratio of tangential and normal stiffness ("Mindlin-ratio"). The obtained numerical results can be approximated by a simple analytical expression.
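
A toy analogue of the no-slip oblique impact can be integrated directly. This is not the authors' MDR formulation: the Hertzian normal contact is replaced by a linear normal spring (so the contact lasts half a harmonic period), the tangential contact is a linear spring acting on the contact-point velocity, and all units and values are illustrative. Even in this simplified form, the dimensionless rebound tangential velocity depends only on the stiffness ratio k_t/k_n:

```python
import numpy as np

def oblique_impact(kt_over_kn, n_steps=200_000):
    """Rebound tangential velocity of a solid sphere, no-slip tangential spring."""
    m, R = 1.0, 1.0
    I = 0.4 * m * R**2                 # moment of inertia of a solid sphere
    kn, kt = 1.0, kt_over_kn
    t_contact = np.pi * np.sqrt(m / kn)  # half period of the normal oscillation
    dt = t_contact / n_steps
    v, w, u = 1.0, 0.0, 0.0            # tangential velocity, spin, spring stretch
    for _ in range(n_steps):
        f = -kt * u                    # tangential contact force (no slip)
        v += f / m * dt                # translation
        w += f * R / I * dt            # rotation driven by the same force
        u += (v + R * w) * dt          # contact-point tangential velocity
    return v

for ratio in (0.5, 1.0, 2.0):
    print(ratio, oblique_impact(ratio))
```

For this linear model the result has a closed form, v_rebound = 1 + (cos(pi*sqrt(3.5*kt/kn)) - 1)/3.5, which the integration reproduces and which indeed depends only on the stiffness ratio.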

  11. A student's guide to dimensional analysis

    CERN Document Server

    Lemons, Don S

    2017-01-01

    This introduction to dimensional analysis covers the methods, history and formalisation of the field, and provides physics and engineering applications. Covering topics from mechanics, hydro- and electrodynamics to thermal and quantum physics, it illustrates the possibilities and limitations of dimensional analysis. Introducing basic physics and fluid engineering topics through the mathematical methods of dimensional analysis, this book is perfect for students in physics, engineering and mathematics. Explaining potentially unfamiliar concepts such as viscosity and diffusivity, the text includes worked examples and end-of-chapter problems with answers provided in an accompanying appendix, which help make it ideal for self-study. Long-standing methodological problems arising in popular presentations of dimensional analysis are also identified and solved, making the book a useful text for advanced students and professionals.

  12. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p

  13. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, J [Cardiovascular Research Group Physics, University of New England, Armidale, NSW 2351 (Australia); Buick, J M [Department of Mechanical and Design Engineering, University of Portsmouth, Anglesea Building, Anglesea Road, Portsmouth PO1 3DJ (United Kingdom)

    2008-10-21

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  14. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    International Nuclear Information System (INIS)

    Boyd, J; Buick, J M

    2008-01-01

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  15. Three-dimensional contact analysis of coupled surfaces by a novel contact transformation method based on localized Lagrange multipliers

    Directory of Open Access Journals (Sweden)

    Yi-Tsung Lin

    2016-04-01

    Instead of obsessively emphasizing reducing the number of time increments and reshaping the models, a novel surface contact transformation to increase efficiency is presented in this study. Wear on the bearing surfaces was investigated following the coupled regions from the pressure distribution, computed by means of three-dimensional finite element method models; an approximate analytical model and formulation for three-dimensional frictional contact problems based on a modified localized Lagrange multiplier method have also been developed and discussed. Understanding wear behavior patterns in mechanical components is a significant task in engineering design. The proposed approach provides a complete and effective solution to the wear problem in a quasi-dynamic manner. However, expensive computing time is needed in the incremental procedures. In this article, an alternative and efficient finite element approach is introduced to reduce the computation costs of wear prediction. Through the successful verification of wear depth and volume loss for the pin-on-plate, block-on-ring, and metal-on-plastic artificial hip joint wear behaviors, the numerical calculations are shown to be both valid and feasible. Furthermore, the results also show that the central processing unit time required by the proposed method is nearly half that of the previous methods without loss of accuracy.

  16. Theoretical background and implementation of the finite element method for multi-dimensional Fokker-Planck equation analysis

    Czech Academy of Sciences Publication Activity Database

    Král, Radomil; Náprstek, Jiří

    2017-01-01

    Roč. 113, November (2017), s. 54-75 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GP14-34467P; GA ČR(CZ) GA15-01035S Institutional support: RVO:68378297 Keywords : Fokker-Planck equation * finite element method * simplex element * multi-dimensional problem * non-symmetric operator Subject RIV: JM - Building Engineering OBOR OECD: Mechanical engineering Impact factor: 3.000, year: 2016 https://www.sciencedirect.com/science/article/pii/S0965997817301904

  17. Magnetic Field Analysis of Planet-type Magnetic Gear by using 2-Dimensional Finite Element Method(18th MAGDA Conference)

    OpenAIRE

    岡, 克; 戸高, 孝; 榎園, 正人; 長屋, 幸助; Masaru, OKA; Takasi, TODAKA; Masato, ENOKIZONO; Kousuke, NAGAYA; 大分大学; 大分大学; 大分大学; 大分産業創造機構

    2010-01-01

    This paper presents results of two-dimensional magnetic field analysis of a planet-type magnetic gear by using the finite element method. The magnetic gear can transmit torque without contact; however, many complicated-shaped permanent magnets or multi-pole magnetizations are needed to achieve higher torque. In order to improve transmission torque, we applied the flux concentration permanent magnet array to the planet-type magnetic gear. In this paper, influence of the yoke material and the...

  18. Comparison of alveolar ridge preservation methods using three-dimensional micro-computed tomographic analysis and two-dimensional histometric evaluation.

    Science.gov (United States)

    Park, Young-Seok; Kim, Sungtae; Oh, Seung-Hee; Park, Hee-Jung; Lee, Sophia; Kim, Tae-Il; Lee, Young-Kyu; Heo, Min-Suk

    2014-06-01

    This study evaluated the efficacy of alveolar ridge preservation methods with and without primary wound closure and the relationship between histometric and micro-computed tomographic (CT) data. Porcine hydroxyapatite with polytetrafluoroethylene membrane was implanted into a canine extraction socket. The density of the total mineralized tissue, remaining hydroxyapatite, and new bone was analyzed by histometry and micro-CT. The statistical association between these methods was evaluated. Histometry and micro-CT showed that the group which underwent alveolar preservation without primary wound closure had significantly higher new bone density than the group with primary wound closure. Alveolar ridge preservation without primary wound closure enhanced new bone formation more effectively than that with primary wound closure. Further investigation is needed with respect to the comparison of histometry and micro-CT analysis.

  19. Three-dimensional transport coefficient model and prediction-correction numerical method for thermal margin analysis of PWR cores

    International Nuclear Information System (INIS)

    Chiu, C.

    1981-01-01

    Combustion Engineering Inc. designs its modern PWR reactor cores using open-core thermal-hydraulic methods where the mass, momentum and energy equations are solved in three dimensions (one axial and two lateral directions). The resultant fluid properties are used to compute the minimum Departure from Nucleate Boiling Ratio (DNBR), which ultimately sets the power capability of the core. The on-line digital monitoring and protection systems require a small fast-running algorithm of the design code. This paper presents two techniques used in the development of the on-line DNB algorithm. First, a three-dimensional transport coefficient model is introduced to radially group the flow subchannels into channels for the thermal-hydraulic fluid property calculation. Conservation equations of mass, momentum and energy for these channels are derived using transport coefficients to modify the calculation of the radial transport of enthalpy and momentum. Second, a simplified, non-iterative numerical method, called the prediction-correction method, is applied together with the transport coefficient model to reduce the computer execution time in the determination of fluid properties. Comparison of the algorithm and the design thermal-hydraulic code shows agreement to within 0.65% equivalent power at a 95/95 confidence/probability level for all normal operating conditions of the PWR core. This algorithm accuracy is achieved with 1/800th of the computer processing time of its parent design code. (orig.)

  20. Parallelization method for three dimensional MOC calculation

    International Nuclear Information System (INIS)

    Zhang Zhizhu; Li Qing; Wang Kan

    2013-01-01

    A parallelization method based on angular decomposition was designed for the three dimensional MOC. To improve the parallel efficiency, the directions were pre-grouped and the groups were assembled to minimize the communication. The improved parallelization method was applied to the three dimensional MOC code TCM. The numerical results show that the results of the parallelized method agree with the serial calculation results. The parallel efficiency increases markedly after the communication is optimized and the load balanced. (authors)

  1. Evaluation of different protein extraction methods for banana (Musa spp.) root proteome analysis by two-dimensional electrophoresis.

    Science.gov (United States)

    Vaganan, M Mayil; Sarumathi, S; Nandakumar, A; Ravi, I; Mustaffa, M M

    2015-02-01

    Four protocols, viz. the trichloroacetic acid-acetone (TCA), phenol-ammonium acetate (PAA), phenol/SDS-ammonium acetate (PSA) and Tris base-acetone (TBA) protocols, were evaluated with modifications for protein extraction from banana (Grand Naine) roots, considered recalcitrant tissues for proteomic analysis. The two-dimensional electrophoresis (2-DE) separated proteins were compared based on protein yield, number of resolved proteins, sum of spot quantity, average spot intensity and proteins resolved in the 4-7 pI range. The PAA protocol yielded more protein (0.89 mg/g of tissue) and more protein spots (584) in the 2-DE gel than the TCA and other protocols. Also, the PAA protocol was superior in terms of sum of total spot quantity and average spot intensity to the TCA and other protocols, suggesting that phenol as extractant and ammonium acetate as precipitant of proteins are the most suitable for banana root proteomics analysis by 2-DE. In addition, a 1:3 ratio of root tissue to extraction buffer and overnight protein precipitation were most efficient to obtain maximum protein yield.

  2. FRACTAL DIMENSIONALITY ANALYSIS OF MAMMARY GLAND THERMOGRAMS

    OpenAIRE

    Yu. E. Lyah; V. G. Guryanov; E. A. Yakobson

    2016-01-01

    Thermography may enable early detection of a cancer tumour within a mammary gland at an early, treatable stage of the illness, but thermogram analysis methods must be developed to achieve this goal. This study analyses the feasibility of applying the Hurst exponent readings algorithm for evaluation of the high-dimensionality fractals to reveal any possible difference between normal thermograms (NT) and malignant thermograms (MT).

  3. Proteomic analysis of docetaxel resistance in human nasopharyngeal carcinoma cells using the two-dimensional gel electrophoresis method.

    Science.gov (United States)

    Peng, Xingchen; Gong, Fengming M; Ren, Min; Ai, Ping; Wu, ShaoYong; Tang, Jie; Hu, XiaoLin

    2016-09-01

    Docetaxel-based chemotherapy has been recommended for advanced nasopharyngeal carcinoma (NPC). However, treatment failure often occurs because of acquired drug resistance. In this study, a docetaxel-resistant NPC cell line CNE-2R was established with increasing doses of docetaxel for more than 6 months. Two-dimensional gel electrophoresis and ESI-Q-TOF-MS were used to compare the differential expression of docetaxel-resistance-associated proteins between human NPC CNE-2 cells and docetaxel-resistant CNE-2R cells. As a result, 24 differentially expressed proteins were identified, including 11 proteins with increased expression and 13 proteins with decreased expression. These proteins function in diverse biological processes such as metabolism, signal transduction, calcium ion binding, immune response, proteolysis, and so on. Among these, α-enolase (ENO1), significantly upregulated in CNE-2R, was selected for detailed analysis. Inhibition of ENO1 by shRNA restored CNE-2R cells' sensitivity to docetaxel. Moreover, overexpression of ENO1 could facilitate the development of acquired resistance of docetaxel in CNE-2 cells. Western blot and reverse-transcription PCR data of clinical samples confirmed that α-enolase was upregulated in docetaxel-resistant human NPC tissues. Finding such proteins might improve interpretation of the molecular mechanisms leading to the acquisition of docetaxel chemoresistance.

  4. A robust multifactor dimensionality reduction method for detecting gene-gene interactions with application to the genetic analysis of bladder cancer susceptibility

    Science.gov (United States)

    Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.

    2010-01-01

    A central goal of human genetics is to identify and characterize susceptibility genes for common complex human diseases. An important challenge in this endeavor is the modeling of gene-gene interaction or epistasis that can result in non-additivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as a machine learning alternative to parametric logistic regression for detecting interactions in the absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modeling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using a Fisher's Exact Test rather than a predetermined threshold. The advantage of this approach is that only those genotype combinations that are determined to be statistically significant are considered in the MDR analysis. We use two simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status. We show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire. PMID:21091664

  5. A robust multifactor dimensionality reduction method for detecting gene-gene interactions with application to the genetic analysis of bladder cancer susceptibility.

    Science.gov (United States)

    Gui, Jiang; Andrew, Angeline S; Andrews, Peter; Nelson, Heather M; Kelsey, Karl T; Karagas, Margaret R; Moore, Jason H

    2011-01-01

    A central goal of human genetics is to identify susceptibility genes for common human diseases. An important challenge is modelling gene-gene interaction or epistasis that can result in nonadditivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as a machine learning alternative to parametric logistic regression for detecting interactions in the absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modelling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using a Fisher's Exact Test rather than a predetermined threshold. The advantage of this approach is that only statistically significant genotype combinations are considered in the MDR analysis. We use simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status. We show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire.
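The RMDR filtering step described above — keeping only genotype combinations whose case/control counts are statistically significant before the MDR constructive-induction step — can be sketched as follows. As a simplification, a one-sided hypergeometric (Fisher-type) test for case over-representation is used here; the data layout and the 0.05 threshold are illustrative assumptions, not the authors' code.

```python
from math import comb

def fisher_one_sided(a, n, K, N):
    """P(X >= a) cases in a cell of size n, drawn from N subjects with K cases."""
    return sum(comb(K, k) * comb(N - K, n - k)
               for k in range(a, min(n, K) + 1)) / comb(N, n)

def significant_cells(geno1, geno2, case, alpha=0.05):
    """Return the two-locus genotype cells significantly enriched in cases."""
    N, K = len(case), sum(case)
    counts = {}
    for g1, g2, c in zip(geno1, geno2, case):
        n, a = counts.get((g1, g2), (0, 0))
        counts[(g1, g2)] = (n + 1, a + c)
    result = {}
    for cell, (n, a) in counts.items():
        p = fisher_one_sided(a, n, K, N)
        if p < alpha:
            result[cell] = p
    return result

# Toy data: all 20 cases fall in the (1, 1) genotype cell.
geno1 = [0] * 20 + [1] * 20
geno2 = [0] * 20 + [1] * 20
case  = [0] * 20 + [1] * 20
sig = significant_cells(geno1, geno2, case)
```

Only cells surviving this filter would then be pooled into high-risk/low-risk groups by the usual MDR step.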

  6. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    Energy Technology Data Exchange (ETDEWEB)

    Webster, Clayton; Tempone, Raul (Florida State University, Tallahassee, FL); Nobile, Fabio (Politecnico di Milano, Italy)

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
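The core idea above — replacing many random samples with a small deterministic set of collocation points, each requiring one deterministic solve — can be illustrated in one random dimension. The "PDE solve" is replaced by a cheap stand-in function, and a 5-point Gauss-Legendre rule is compared with plain Monte Carlo; the solver and point counts are illustrative assumptions.

```python
import numpy as np

def solver(y):
    """Stand-in for a deterministic PDE solve with random input y."""
    return np.exp(y)

# Collocation: 5 deterministic solves at Gauss-Legendre nodes
nodes, weights = np.polynomial.legendre.leggauss(5)
e_colloc = 0.5 * np.sum(weights * solver(nodes))   # density of U(-1, 1) is 1/2

# Monte Carlo: many more solves for comparable accuracy
rng = np.random.default_rng(0)
e_mc = solver(rng.uniform(-1, 1, 10_000)).mean()

exact = (np.e - np.exp(-1)) / 2                    # E[exp(Y)], Y ~ U(-1, 1)
```

For smooth dependence on the random input, the collocation estimate converges far faster than Monte Carlo; Smolyak sparse grids extend the same construction to many random dimensions without the full tensor-product cost.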

  7. Geodetic Inversion Analysis Method of Coseismic Slip Distribution Using a Three-dimensional Finite Element High-fidelity Model

    Science.gov (United States)

    Agata, R.; Ichimura, T.; Hirahara, K.; Hori, T.; Hyodo, M.; Hori, M.

    2013-12-01

    Many studies have focused on geodetic inversion analysis methods for coseismic slip distribution that combine observation data of coseismic crustal deformation on the ground with simplified crustal models, such as the analytical solution in an elastic half-space (Okada, 1985). On the other hand, displacements on the seafloor or near trench axes due to actual earthquakes have been observed by seafloor observatories (e.g. for the 2011 Tohoku-oki Earthquake (Tohoku Earthquake) (Sato et al. 2011) (Kido et al. 2011)). Also, some studies on tsunamis due to the Tohoku Earthquake indicate that large fault slips near the trench axis may have occurred. Those facts suggest that crustal models considering the complex geometry and heterogeneity of the material property near the trench axis should be used for geodetic inversion analysis. Therefore, our group has developed a mesh generation method for finite element models of the Japanese Islands of higher fidelity and a fast crustal deformation analysis method for the models. The degree-of-freedom of the models generated by this method is about 150 million. In this research, the method is extended for inversion analyses of coseismic slip distribution. Since inversion analyses need computation of hundreds of slip response functions due to a unit fault slip assigned to each of the divided cells on the fault, a parallel computing environment is used. Plural crustal deformation analyses are simultaneously run in a Message Passing Interface (MPI) job. In the job, dynamic load balancing is implemented so that a better parallel efficiency is obtained. Submitting the necessary number of serial jobs of our previous method is also possible, but the proposed method needs less computation time, places less stress on file systems, and allows simpler job management. A method for considering the fault slip right near the trench axis is also developed. As the displacement distribution of unit fault slip for computing the response function, 3rd order B
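The response-function formulation described above reduces the inversion to a linear problem: each fault cell contributes a unit-slip deformation response (a column of a matrix G), and the slip distribution s is recovered from observed displacements d. A minimal least-squares sketch follows; G, d, and the problem sizes are invented for illustration, whereas the study computes the response functions with large finite element models.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_cells = 30, 6
G = rng.standard_normal((n_obs, n_cells))            # unit-slip response functions
s_true = np.array([0.0, 0.5, 2.0, 3.5, 1.0, 0.2])    # "true" slip per fault cell
d = G @ s_true + 0.01 * rng.standard_normal(n_obs)   # noisy observed displacements

# Least-squares estimate of the coseismic slip distribution
s_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Real geodetic inversions typically add regularization (e.g. smoothness of slip) because G is ill-conditioned when cells outnumber or cluster near few observation points; the well-conditioned toy G above lets plain least squares suffice.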

  8. Stochastic and infinite dimensional analysis

    CERN Document Server

    Carpio-Bernido, Maria; Grothaus, Martin; Kuna, Tobias; Oliveira, Maria; Silva, José

    2016-01-01

    This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom studied using techniques from stochastic and infinite dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems of infinitely many degrees of freedom create their particular mathematical challenges which have been addressed by different mathematical theories, namely in the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit’s 75th birthday and celebrate his pioneering and ongoing work in these fields.

  9. Three-dimensional Imaging Methods for Quantitative Analysis of Facial Soft Tissues and Skeletal Morphology in Patients with Orofacial Clefts: A Systematic Review

    NARCIS (Netherlands)

    Kuijpers, M.A.R.; Chiu, Y.T.; Nada, R.M.; Carels, C.E.L.; Fudalej, P.S.

    2014-01-01

    BACKGROUND: Current guidelines for evaluating cleft palate treatments are mostly based on two-dimensional (2D) evaluation, but three-dimensional (3D) imaging methods to assess treatment outcome are steadily rising. OBJECTIVE: To identify 3D imaging methods for quantitative assessment of soft tissue

  10. A novel normalization method based on principal component analysis to reduce the effect of peak overlaps in two-dimensional correlation spectroscopy

    Science.gov (United States)

    Wang, Yanwei; Gao, Wenying; Wang, Xiaogong; Yu, Zhiwu

    2008-07-01

    Two-dimensional correlation spectroscopy (2D-COS) has been widely used to separate overlapped spectroscopic bands. However, band overlap may sometimes cause misleading results in the 2D-COS spectra, especially if one peak is embedded within another peak by the overlap. In this work, we propose a new normalization method based on principal component analysis (PCA). For each spectrum under discussion, the first principal component of PCA is simply taken as the normalization factor of the spectrum. It is demonstrated that the method works well with simulated dynamic spectra. A successful result has also been obtained from the analysis of an overlapped band in the wavenumber range 1440-1486 cm⁻¹ for the evaporation process of a solution containing behenic acid, methanol, and chloroform.
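The normalization idea above can be sketched numerically: run PCA (here via SVD) on the set of dynamic spectra and divide each spectrum by its score on the first principal component, so that a common intensity factor is removed before computing the 2D correlation spectra. The synthetic single-band spectra below are an illustrative assumption, not the paper's data.

```python
import numpy as np

# Synthetic dynamic spectra: one Gaussian band whose overall intensity
# grows over five observations (e.g. with concentration).
wavenumbers = np.linspace(1440, 1486, 200)
base = np.exp(-((wavenumbers - 1460) / 6.0) ** 2)   # band shape
scales = np.array([1.0, 1.7, 2.5, 3.1, 4.2])        # intensity factor per spectrum
spectra = np.outer(scales, base)                     # (n_spectra, n_wavenumbers)

# PCA via SVD on the (uncentered) spectra matrix
U, s, Vt = np.linalg.svd(spectra, full_matrices=False)
pc1_scores = U[:, 0] * s[0]                          # per-spectrum PC1 score
normalized = spectra / pc1_scores[:, None]

# After normalization, all spectra collapse onto a common band shape,
# leaving only shape changes for the 2D correlation analysis.
```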

  11. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    Science.gov (United States)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-07-01

    A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content and reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for numerical code validation.
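The intensity-to-water-content mapping described above can be sketched per pixel: reference images taken during the experiment itself (fully dry and fully saturated states) anchor a calibration, and intermediate intensities are interpolated. The linear relation, the residual/saturated water contents, and the toy intensities below are illustrative assumptions; the actual procedure also includes filtering and background subtraction.

```python
import numpy as np

def water_content_map(image, dry, wet, theta_r=0.05, theta_s=0.35):
    """Convert a reflected-intensity image to volumetric water content.

    image, dry, wet  : 2D intensity arrays of the same shape
    theta_r, theta_s : residual and saturated water contents (assumed)
    """
    s = (image - dry) / (wet - dry)          # effective saturation: 0 = dry, 1 = saturated
    s = np.clip(s, 0.0, 1.0)
    return theta_r + s * (theta_s - theta_r)

dry = np.full((4, 4), 200.0)     # porous medium is bright when dry
wet = np.full((4, 4), 80.0)      # and darker when saturated
img = np.full((4, 4), 140.0)     # intensity halfway between references
theta = water_content_map(img, dry, wet)
```

Applying the function frame by frame to the photograph sequence yields the 2D water content maps at the camera's temporal resolution.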

  12. FRACTAL DIMENSIONALITY ANALYSIS OF MAMMARY GLAND THERMOGRAMS

    Directory of Open Access Journals (Sweden)

    Yu. E. Lyah

    2016-06-01

    Thermography may enable early detection of a cancer tumour within a mammary gland at an early, treatable stage of the illness, but thermogram analysis methods must be developed to achieve this goal. This study analyses the feasibility of applying the Hurst exponent readings algorithm for evaluation of the high-dimensionality fractals to reveal any possible difference between normal thermograms (NT) and malignant thermograms (MT).
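A minimal rescaled-range (R/S) estimate of the Hurst exponent, the quantity this fractal analysis evaluates, can be sketched as follows; applying it to intensity profiles extracted from thermograms would be the analogous step. The window sizes and test signals are illustrative assumptions, and no bias correction is applied.

```python
import numpy as np

def hurst_rs(x, windows=(8, 16, 32, 64)):
    """Estimate the Hurst exponent from the slope of log(R/S) vs log(window)."""
    x = np.asarray(x, dtype=float)
    rs = []
    for w in windows:
        vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from mean
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h_noise = hurst_rs(rng.standard_normal(1024))             # uncorrelated signal: lower H
h_walk = hurst_rs(np.cumsum(rng.standard_normal(1024)))   # persistent signal: higher H
```

Comparing such exponents between normal and malignant thermograms is the kind of discrimination the study investigates.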

  13. A comparative study of three commonly used two-dimensional overlay generation methods in bite mark analysis.

    Science.gov (United States)

    Pajnigara, Nilufer Gev; Balpande, Apeksha S; Motwani, Mukta B; Choudhary, Anuraag; Thakur, Samantha; Pajnigara, Natasha G

    2017-01-01

    The present study attempts to compare the bite mark overlays generated by three different methods. The objectives of the study were to compare the three commonly used techniques for overlay generation and to evaluate the interobserver reliability in assessing bite marks by these methods. Overlays were produced from the biting surfaces of six upper and six lower anterior teeth of 30 dental study models using the following three methods: (a) hand tracing from wax impressions, (b) the radiopaque impression method and (c) a computer-based method. The computer-based method was found to be the most accurate. Of the two hand tracing methods, the radiopaque wax impression method was better than the wax impression method for overlay generation. It is recommended that forensic odontologists use the computerized method, but hand-traced overlays produced by the radiopaque wax impression method can also be used in bite mark comparison cases where sophisticated software and persons trained in forensic odontology are not available.

  14. Three-dimensional imaging methods for quantitative analysis of facial soft tissues and skeletal morphology in patients with orofacial clefts: a systematic review.

    Directory of Open Access Journals (Sweden)

    Mette A R Kuijpers

    BACKGROUND: Current guidelines for evaluating cleft palate treatments are mostly based on two-dimensional (2D) evaluation, but three-dimensional (3D) imaging methods to assess treatment outcome are steadily rising. OBJECTIVE: To identify 3D imaging methods for quantitative assessment of soft tissue and skeletal morphology in patients with cleft lip and palate. DATA SOURCES: Literature was searched using PubMed (1948-2012), EMBASE (1980-2012), Scopus (2004-2012), Web of Science (1945-2012), and the Cochrane Library. The last search was performed September 30, 2012. Reference lists were hand searched for potentially eligible studies. There was no language restriction. STUDY SELECTION: We included publications using 3D imaging techniques to assess facial soft tissue or skeletal morphology in patients older than 5 years with a cleft lip with or without cleft palate. We reviewed studies involving the facial region when at least 10 subjects in the sample size had at least one cleft type. Only primary publications were included. DATA EXTRACTION: Independent extraction of data and quality assessments were performed by two observers. RESULTS: Five hundred full text publications were retrieved, 144 met the inclusion criteria, with 63 high quality studies. There were differences in study designs, topics studied, patient characteristics, and success measurements; therefore, only a systematic review could be conducted. The main 3D techniques used in cleft lip and palate patients are CT, CBCT, MRI, stereophotogrammetry, and laser surface scanning. These techniques are mainly used for soft tissue analysis, evaluation of bone grafting, and changes in the craniofacial skeleton. Digital dental casts are used to evaluate treatment and changes over time. CONCLUSION: Available evidence implies that 3D imaging methods can be used for documentation of CLP patients. No data are available yet showing that 3D methods are more informative than conventional 2D methods.

  15. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: II. Shear analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, J [Cardiovascular Research Group, Physics, University of New England, Armidale, NSW 2351 (Australia); Buick, J M [Mechanical and Design Engineering, Anglesea Building, Anglesea Road, University of Portsmouth, Portsmouth, PO1 3DJ (United Kingdom)

    2008-10-21

    Near-wall shear is known to be important in the pathogenesis and progression of atherosclerosis. In this paper, the shear field in a three-dimensional model of the human carotid artery is presented. The simulations are performed using the lattice Boltzmann model and are presented at six times of interest during a physiologically accurate velocity waveform. The near-wall shear rate and von Mises effective shear are also examined. Regions of low near-wall shear rates are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery. These are regions where low near-wall velocity and circulatory flows have been observed and are regions that are typically prone to atherosclerosis.

  16. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: II. Shear analysis

    International Nuclear Information System (INIS)

    Boyd, J; Buick, J M

    2008-01-01

    Near-wall shear is known to be important in the pathogenesis and progression of atherosclerosis. In this paper, the shear field in a three-dimensional model of the human carotid artery is presented. The simulations are performed using the lattice Boltzmann model and are presented at six times of interest during a physiologically accurate velocity waveform. The near-wall shear rate and von Mises effective shear are also examined. Regions of low near-wall shear rates are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery. These are regions where low near-wall velocity and circulatory flows have been observed and are regions that are typically prone to atherosclerosis.

  17. Analysis of glycoprotein-derived oligosaccharides in glycoproteins detected on two-dimensional gel by capillary electrophoresis using on-line concentration method.

    Science.gov (United States)

    Kamoda, Satoru; Nakanishi, Yasuharu; Kinoshita, Mitsuhiro; Ishikawa, Rika; Kakehi, Kazuaki

    2006-02-17

    Capillary electrophoresis (CE) is an effective tool to analyze carbohydrate mixtures derived from glycoproteins with high resolution. However, CE has the disadvantage that only a few nanoliters of a sample solution are injected into a narrow capillary; therefore, a sample solution of high concentration has to be prepared for CE analysis. In the present study, we applied the head-column field-amplified sample stacking method to the analysis of N-linked oligosaccharides derived from a glycoprotein separated by two-dimensional gel electrophoresis. Model studies demonstrated a 60-360-fold concentration effect in the analysis of carbohydrate chains labeled with 3-aminobenzoic acid (3-AA). The method was applied to the analysis of N-linked oligosaccharides from glycoproteins separated and detected on a PAGE gel. The heterogeneity of alpha1-acid glycoprotein (AGP), i.e. its glycoforms, was examined by 2D-PAGE, and N-linked oligosaccharides were released by in-gel digestion with PNGase F. The released oligosaccharides were derivatized with 3-AA and analyzed by CE. The results showed that glycoforms having lower pI values contained a larger amount of tetra- and tri-antennary oligosaccharides. In contrast, glycoforms having higher pI values contained bi-antennary oligosaccharides abundantly. The results clearly indicated that the spot of a glycoprotein glycoform detected by Coomassie brilliant blue staining on a 2D-PAGE gel is sufficient for quantitative profiling of oligosaccharides.

  18. The location of midfacial landmarks according to the method of establishing the midsagittal reference plane in three-dimensional computed tomography analysis of facial asymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Sun; Lee, Eun Joo; Lee, Jae Seo; Kang, Byung Cheock; Yoon, Suk Ja [Dental Science Research Institute, Chonnam National University, Gwangju (Korea, Republic of); Song, In Ja [Dept. of Nursing, Kwangju Women's University, Gwangju (Korea, Republic of)

    2015-12-15

    The purpose of this study was to evaluate the influence of methods of establishing the midsagittal reference plane (MRP) on the locations of midfacial landmarks in the three-dimensional computed tomography (CT) analysis of facial asymmetry. A total of 24 patients (12 male and 12 female; mean age, 22.5 years; age range, 18.2-29.7 years) with facial asymmetry were included in this study. The MRP was established using two different methods on each patient's CT image. The x-coordinates of four midfacial landmarks (the menton, nasion, upper incisor, and lower incisor) were obtained by measuring the distance and direction of the landmarks from the MRP, and the two methods were compared statistically. The direction of deviation and the severity of asymmetry found using each method were also compared. The x-coordinates of the four anatomic landmarks all showed a statistically significant difference between the two methods of establishing the MRP. For the nasion and lower incisor, six patients (25.0%) showed a change in the direction of deviation. The severity of asymmetry also changed in 16 patients (66.7%). The results of this study suggest that the locations of midfacial landmarks change significantly according to the method used to establish the MRP.
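The measurement described above — the x-coordinate of a midfacial landmark taken as its signed distance from the midsagittal reference plane (MRP) — can be sketched with basic vector geometry. Here the MRP is defined by three reference points; all coordinates are invented for illustration, and real analyses define the plane from anatomical CT landmarks.

```python
import numpy as np

def signed_distance(point, p1, p2, p3):
    """Signed distance of `point` from the plane through p1, p2, p3.

    The sign indicates which side of the plane the landmark deviates to.
    """
    n = np.cross(p2 - p1, p3 - p1)      # plane normal
    n = n / np.linalg.norm(n)
    return float(np.dot(point - p1, n))

# Toy MRP: the x = 0 plane, defined by three points with x = 0.
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
p3 = np.array([0.0, 0.0, 1.0])
menton = np.array([2.5, -3.0, 1.0])      # hypothetical landmark position
d = signed_distance(menton, p1, p2, p3)  # lateral deviation from the MRP
```

Because the normal direction depends entirely on how the MRP is constructed, two different MRP definitions can change both the magnitude and the sign (i.e. the direction of deviation) of such coordinates, which is exactly the effect the study quantifies.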

  19. A comparative study of three commonly used two-dimensional overlay generation methods in bite mark analysis

    OpenAIRE

    Pajnigara, Nilufer Gev; Balpande, Apeksha S; Motwani, Mukta B; Choudhary, Anuraag; Thakur, Samantha; Pajnigara, Natasha G

    2017-01-01

    Aim: The present study attempts to compare the bite mark overlays generated by three different methods. The objectives of the study were to compare the three commonly used techniques for overlay generation and to evaluate the interobserver reliability in assessing bite marks by these methods. Materials and Methods: Overlays were produced from the biting surfaces of six upper and six lower anterior teeth of 30 dental study models using the following three methods: (a) Hand tracing from wax imp...

  20. An Optimized Trichloroacetic Acid/Acetone Precipitation Method for Two-Dimensional Gel Electrophoresis Analysis of Qinchuan Cattle Longissimus Dorsi Muscle Containing High Proportion of Marbling.

    Directory of Open Access Journals (Sweden)

    Ruijie Hao

    Full Text Available Longissimus dorsi muscle (LD) proteomics provides a novel opportunity to reveal the molecular mechanism behind intramuscular fat deposition. Unfortunately, the vast amounts of lipids and nucleic acids in this tissue hampered LD proteomics analysis. Trichloroacetic acid (TCA)/acetone precipitation is a widely used method to remove contaminants from protein samples. However, the high-speed centrifugation employed in this method produces hard precipitates, which restrict contaminant elimination and protein re-dissolution. To address the problem, the centrifugation precipitates were first ground with a glass tissue grinder and then washed with 90% acetone (TCA/acetone-G-W) in the present study. According to our results, this treatment of the solid precipitate facilitated non-protein contaminant removal and protein re-dissolution, ultimately improving two-dimensional gel electrophoresis (2-DE) analysis. Additionally, we also evaluated the effect of sample drying on the 2-DE profile as well as on protein yield. It was found that 30 min of air-drying did not result in significant protein loss, but reduced horizontal streaking and smearing on the 2-DE gel compared to 10 min. In summary, we developed an optimized TCA/acetone precipitation method for protein extraction of LD, in which the modifications improved the effectiveness of the TCA/acetone method.

  1. An Optimized Trichloroacetic Acid/Acetone Precipitation Method for Two-Dimensional Gel Electrophoresis Analysis of Qinchuan Cattle Longissimus Dorsi Muscle Containing High Proportion of Marbling.

    Science.gov (United States)

    Hao, Ruijie; Adoligbe, Camus; Jiang, Bijie; Zhao, Xianlin; Gui, Linsheng; Qu, Kaixing; Wu, Sen; Zan, Linsen

    2015-01-01

    Longissimus dorsi muscle (LD) proteomics provides a novel opportunity to reveal the molecular mechanism behind intramuscular fat deposition. Unfortunately, the vast amounts of lipids and nucleic acids in this tissue hampered LD proteomics analysis. Trichloroacetic acid (TCA)/acetone precipitation is a widely used method to remove contaminants from protein samples. However, the high-speed centrifugation employed in this method produces hard precipitates, which restrict contaminant elimination and protein re-dissolution. To address the problem, the centrifugation precipitates were first ground with a glass tissue grinder and then washed with 90% acetone (TCA/acetone-G-W) in the present study. According to our results, this treatment of the solid precipitate facilitated non-protein contaminant removal and protein re-dissolution, ultimately improving two-dimensional gel electrophoresis (2-DE) analysis. Additionally, we also evaluated the effect of sample drying on the 2-DE profile as well as on protein yield. It was found that 30 min of air-drying did not result in significant protein loss, but reduced horizontal streaking and smearing on the 2-DE gel compared to 10 min. In summary, we developed an optimized TCA/acetone precipitation method for protein extraction of LD, in which the modifications improved the effectiveness of the TCA/acetone method.

  2. Explorative data analysis of two-dimensional electrophoresis gels

    DEFF Research Database (Denmark)

    Schultz, J.; Gottlieb, D.M.; Petersen, Marianne Kjerstine

    2004-01-01

    Methods for classification of two-dimensional (2-DE) electrophoresis gels based on multivariate data analysis are demonstrated. Two-dimensional gels of ten wheat varieties are analyzed and it is demonstrated how to classify the wheat varieties in two qualities and a method for initial screening...
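As an illustration of this kind of multivariate screening, the sketch below builds a small synthetic spot-intensity matrix (rows = gels, columns = matched spots; all numbers invented) and projects it onto its leading principal components with plain NumPy. This is a generic PCA sketch, not the authors' pipeline:

```python
import numpy as np

# Synthetic gel data: two "qualities" of five gels each, distinguished by
# a systematic trend across the 20 matched spots plus measurement noise.
rng = np.random.default_rng(0)
quality_a = rng.normal(0.0, 0.1, size=(5, 20)) + np.linspace(0, 1, 20)
quality_b = rng.normal(0.0, 0.1, size=(5, 20)) - np.linspace(0, 1, 20)
X = np.vstack([quality_a, quality_b])          # 10 gels x 20 spots

Xc = X - X.mean(axis=0)                        # mean-center each spot
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                         # PC1/PC2 scores per gel
pc1 = scores[:, 0]                             # the two groups separate on PC1
```

Plotting the PC1/PC2 scores would show the two simulated qualities as separated clusters, which is the kind of initial screening the abstract describes.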

  3. Nose burns: 4-dimensional analysis.

    Science.gov (United States)

    Bouguila, J; Ho Quoc, C; Viard, R; Brun, A; Voulliaume, D; Comparin, J-P; Foyatier, J-L

    2017-10-01

    The nose is the central organ of the face. It has two essential roles, aesthetic and breathing. It is often seriously damaged in the context of facial burns, causing grotesque facial disfigurement. As this disfigurement is visible on frontal and profile views, the patient suffers both socially and psychologically. The nose is a three-dimensional organ. Reconstruction is therefore more difficult and needs to be more precise than in other parts of the face. Maintaining symmetry, contour and function are essential for successful nasal reconstruction. Multiple factors determine the optimal method of reconstruction, including the size of the defect, its depth and its site. Satisfactory social life is recovered only after multiple surgical procedures and long-term rehabilitation and physiotherapy. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  4. Three-dimensional patterning methods and related devices

    Energy Technology Data Exchange (ETDEWEB)

    Putnam, Morgan C.; Kelzenberg, Michael D.; Atwater, Harry A.; Boettcher, Shannon W.; Lewis, Nathan S.; Spurgeon, Joshua M.; Turner-Evans, Daniel B.; Warren, Emily L.

    2016-12-27

    Three-dimensional patterning methods of a three-dimensional microstructure, such as a semiconductor wire array, are described, in conjunction with etching and/or deposition steps to pattern the three-dimensional microstructure.

  5. Dimensional analysis and group theory in astrophysics

    CERN Document Server

    Kurth, Rudolf

    2013-01-01

    Dimensional Analysis and Group Theory in Astrophysics describes how dimensional analysis, refined by mathematical regularity hypotheses, can be applied to purely qualitative physical assumptions. The book focuses on the continuous spectra of the stars and the mass-luminosity relationship. The text discusses the technique of dimensional analysis, covering both relativistic phenomena and stellar systems. The book also explains the fundamental conclusion of dimensional analysis, wherein the unknown functions shall be given certain specified forms. The Wien and Stefan-Boltzmann Laws can be si

  6. Gene-Gene Interaction Analysis for the Accelerated Failure Time Model Using a Unified Model-Based Multifactor Dimensionality Reduction Method.

    Science.gov (United States)

    Lee, Seungyeoun; Son, Donghee; Yu, Wenbao; Park, Taesung

    2016-12-01

    Although a large number of genetic variants have been identified to be associated with common diseases through genome-wide association studies, there still exist limitations in explaining the missing heritability. One approach to solving this missing heritability problem is to investigate gene-gene interactions, rather than a single-locus approach. For gene-gene interaction analysis, the multifactor dimensionality reduction (MDR) method has been widely applied, since the constructive induction algorithm of MDR efficiently reduces high-order dimensions into one dimension by classifying multi-level genotypes into high- and low-risk groups. The MDR method has been extended to various phenotypes and has been improved to provide a significance test for gene-gene interactions. In this paper, we propose a simple method, called accelerated failure time (AFT) UM-MDR, in which the idea of a unified model-based MDR is extended to the survival phenotype by incorporating AFT-MDR into the classification step. The proposed AFT UM-MDR method is compared with AFT-MDR through simulation studies, and a short discussion is given.
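The constructive-induction step that AFT-MDR and AFT UM-MDR build on can be sketched in a few lines. The following is a simplified binary-phenotype illustration with toy data (not the proposed survival-phenotype method, which replaces this case/control ratio with a statistic on survival times):

```python
# Minimal sketch of MDR's classification step: each multilocus genotype
# cell is labeled high-risk when its case/control ratio exceeds the
# overall ratio, collapsing a pair of multi-level genotypes into one
# binary attribute.
def mdr_classify(geno_pairs, is_case):
    geno_pairs = [tuple(g) for g in geno_pairs]
    n_case = sum(is_case)
    n_ctrl = len(is_case) - n_case
    threshold = n_case / n_ctrl            # overall case/control ratio
    cells = {}
    for g, y in zip(geno_pairs, is_case):
        cases, ctrls = cells.get(g, (0, 0))
        cells[g] = (cases + y, ctrls + (1 - y))
    high_risk = {g for g, (ca, co) in cells.items()
                 if co == 0 or ca / co > threshold}
    return [int(g in high_risk) for g in geno_pairs]

# Toy data: the genotype pair (2, 2) is enriched in cases.
pairs = [(0, 0), (0, 0), (2, 2), (2, 2), (2, 2), (1, 0)]
cases = [0, 0, 1, 1, 1, 0]
labels = mdr_classify(pairs, cases)        # one high/low-risk attribute
```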

  7. Gene-Gene Interaction Analysis for the Accelerated Failure Time Model Using a Unified Model-Based Multifactor Dimensionality Reduction Method

    Directory of Open Access Journals (Sweden)

    Seungyeoun Lee

    2016-12-01

    Full Text Available Although a large number of genetic variants have been identified to be associated with common diseases through genome-wide association studies, there still exist limitations in explaining the missing heritability. One approach to solving this missing heritability problem is to investigate gene-gene interactions, rather than a single-locus approach. For gene-gene interaction analysis, the multifactor dimensionality reduction (MDR) method has been widely applied, since the constructive induction algorithm of MDR efficiently reduces high-order dimensions into one dimension by classifying multi-level genotypes into high- and low-risk groups. The MDR method has been extended to various phenotypes and has been improved to provide a significance test for gene-gene interactions. In this paper, we propose a simple method, called accelerated failure time (AFT) UM-MDR, in which the idea of a unified model-based MDR is extended to the survival phenotype by incorporating AFT-MDR into the classification step. The proposed AFT UM-MDR method is compared with AFT-MDR through simulation studies, and a short discussion is given.

  8. A new automated method for analysis of gated-SPECT images based on a three-dimensional heart shaped model

    DEFF Research Database (Denmark)

    Lomsky, Milan; Richter, Jens; Johansson, Lena

    2005-01-01

    A new automated method for quantification of left ventricular function from gated-single photon emission computed tomography (SPECT) images has been developed. The method for quantification of cardiac function (CAFU) is based on a heart shaped model and the active shape algorithm. The model... In the patient group the EDV calculated using QGS and CAFU showed good agreement for large hearts and higher CAFU values compared with QGS for the smaller hearts. In the larger hearts, ESV was much larger for QGS than for CAFU both in the phantom and patient studies. In the smallest hearts there was good

  9. A roadmap to multifactor dimensionality reduction methods

    Science.gov (United States)

    Gola, Damian; Mahachie John, Jestinah M.; van Steen, Kristel

    2016-01-01

    Complex diseases are determined by multiple genetic and environmental factors, both alone and in interaction. To analyze interactions in genetic data, many statistical methods have been suggested, with most of them relying on statistical regression models. Given the known limitations of classical methods, approaches from the machine-learning community have also become attractive. From this latter family, a fast-growing collection of methods has emerged that are based on the Multifactor Dimensionality Reduction (MDR) approach. Since its first introduction, MDR has enjoyed great popularity in applications and has been extended and modified multiple times. Based on a literature search, we here provide a systematic and comprehensive overview of these suggested methods. The methods are described in detail, and the availability of implementations is listed. Most recent approaches are able to deal with large-scale data sets and rare variants, which is why we expect these methods to gain even more popularity. PMID:26108231

  10. On progress of the solution of the stationary 2-dimensional neutron diffusion equation: a polynomial approximation method with error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ceolin, C., E-mail: celina.ceolin@gmail.com [Universidade Federal de Santa Maria (UFSM), Frederico Westphalen, RS (Brazil). Centro de Educacao Superior Norte; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T., E-mail: celina.ceolin@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2015-07-01

    Recently the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by the expansion of the scalar fluxes in polynomials in terms of the spatial variables (x, y), considering the two-group energy model. The focus of the present discussion is an error analysis of the aforementioned solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and the Lipschitz constant. This relation allows one to solve the 2-D neutron diffusion problem for second-degree polynomials in each subdomain. This solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain with supremum and infimum functions that shows the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)

  11. Finite element method analysis of band gap and transmission of two-dimensional metallic photonic crystals at terahertz frequencies.

    Science.gov (United States)

    Degirmenci, Elif; Landais, Pascal

    2013-10-20

    Photonic band gap and transmission characteristics of 2D metallic photonic crystals at THz frequencies have been investigated using finite element method (FEM). Photonic crystals composed of metallic rods in air, in square and triangular lattice arrangements, are considered for transverse electric and transverse magnetic polarizations. The modes and band gap characteristics of metallic photonic crystal structure are investigated by solving the eigenvalue problem over a unit cell of the lattice using periodic boundary conditions. A photonic band gap diagram of dielectric photonic crystal in square lattice array is also considered and compared with well-known plane wave expansion results verifying our FEM approach. The photonic band gap designs for both dielectric and metallic photonic crystals are consistent with previous studies obtained by different methods. Perfect match is obtained between photonic band gap diagrams and transmission spectra of corresponding lattice structure.

  12. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
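The linear-versus-exponential scaling argument can be made concrete by counting grid points. Assuming the standard hierarchical sparse-grid construction without boundary points (a textbook formulation, not the paper's specific discretization), a level-l subspace in one dimension contributes 2^(l-1) points, and the sparse grid keeps only subspaces with |l|_1 <= n + d - 1:

```python
from itertools import product

def full_grid_size(n, d):
    """Interior points of a full tensor-product grid of level n in d dims."""
    return (2**n - 1) ** d

def sparse_grid_size(n, d):
    """Points of the regular sparse grid of level n in d dims (no boundary):
    sum over hierarchical subspaces l with |l|_1 <= n + d - 1 of
    prod_i 2**(l_i - 1)."""
    total = 0
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) <= n + d - 1:
            pts = 1
            for l in levels:
                pts *= 2 ** (l - 1)
            total += pts
    return total
```

At level n = 3 in d = 2 dimensions the sparse grid needs 17 points where the full tensor grid needs 49, and the gap widens rapidly with n and d.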

  13. Assessment on the leakage hazard of landfill leachate using three-dimensional excitation-emission fluorescence and parallel factor analysis method.

    Science.gov (United States)

    Pan, Hongwei; Lei, Hongjun; Liu, Xin; Wei, Huaibin; Liu, Shufang

    2017-09-01

    A large number of simple and informal landfills exist in developing countries, posing tremendous soil and groundwater pollution threats. Early warning and monitoring of landfill leachate pollution status is of great importance, yet there is a shortage of affordable and effective tools and methods. In this study, a soil column experiment was performed to simulate the pollution status of leachate using three-dimensional excitation-emission fluorescence (3D-EEMF) and parallel factor analysis (PARAFAC) models. The sum of squared residuals (SSR) and principal component analysis (PCA) were used to determine the optimal number of components for PARAFAC. A one-way analysis of variance showed that the component scores of the soil column leachate were significantly influenced by landfill leachate (p < 0.05), and the ratio of the component scores of landfill-affected soil to those of natural soil could be used to evaluate the leakage status of landfill leachate. Furthermore, a hazard index (HI) and a hazard evaluation standard were established. A case study of the Kaifeng landfill indicated a low hazard (level 5) by the use of HI. In summary, HI is presented as a tool to evaluate landfill pollution status and to guide municipal solid waste management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Errors in using two dimensional methods for ergonomic assessment of motion in three dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Hollerbach, K.; Van Vorhis, R.L. [Lawrence Livermore National Lab., CA (United States); Hollister, A. [Louisiana State Univ., Shreveport, LA (United States)

    1996-03-01

    Wrist posture and rapid wrist movements are risk factors for work-related musculoskeletal disorders. Measurement studies frequently involve optoelectronic methods in which markers are placed on the subject's hand and wrist and the trajectories of the markers are tracked in three-dimensional space. A goal of wrist posture measurements is to quantitatively establish wrist posture orientation. Accuracy and fidelity of the measurement data with respect to kinematic mechanisms are essential in wrist motion studies. Fidelity with the physical kinematic mechanism can be limited by the choice of kinematic modeling techniques and the representation of motion. Frequently, ergonomic studies involving wrist kinematics make use of two-dimensional measurement and analysis techniques. Two-dimensional measurement of human joint motion involves the analysis of three-dimensional displacements in an observer-selected measurement plane. Accurate marker placement and alignment of the joint motion plane with the observer plane are difficult. In nature, joint axes can exist at any orientation and location relative to an arbitrarily chosen global reference frame. An arbitrary axis is any axis that is not coincident with a reference coordinate axis. We calculate the errors that result from measuring joint motion about an arbitrary axis using two-dimensional methods.
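The error the authors quantify can be reproduced in miniature: rotate a marker about a joint axis, project the trajectory onto an observer plane, and compare the measured 2D angle with the true 3D rotation. The numbers and geometry below are invented for illustration:

```python
import numpy as np

def rotate(p, axis, theta):
    """Rotate point p by angle theta about a unit-normalized axis
    (Rodrigues' rotation formula)."""
    axis = axis / np.linalg.norm(axis)
    return (p * np.cos(theta)
            + np.cross(axis, p) * np.sin(theta)
            + axis * np.dot(axis, p) * (1 - np.cos(theta)))

def projected_angle(p0, p1):
    """Angle (degrees) between two positions as seen in the x-y
    observer plane, i.e. after dropping the z coordinate."""
    a, b = p0[:2], p1[:2]
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

theta = np.radians(30.0)
marker = np.array([1.0, 0.0, 0.0])
aligned = rotate(marker, np.array([0.0, 0.0, 1.0]), theta)  # axis == view axis
tilted = rotate(marker, np.array([0.0, 1.0, 1.0]), theta)   # axis tilted 45 deg

err_aligned = projected_angle(marker, aligned) - 30.0
err_tilted = projected_angle(marker, tilted) - 30.0
```

With the axis aligned to the viewing direction the 2D measurement is exact; tilting the axis 45 degrees makes a 30-degree rotation read several degrees low.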

  15. Study on prestressed concrete reactor vessel structures. II-5: Crack analysis by three dimensional finite elements method of 1/20 multicavity type PCRV subjected to internal pressure

    Science.gov (United States)

    1978-01-01

    A three-dimensional finite element analysis of the nonlinear behavior of a PCRV subjected to internal pressure is reported, comparing calculated results with test results. As the first stage, an analysis considering the nonlinearity of cracking in concrete was attempted. As a result, it is found possible to carry the analysis up to three times the design pressure (50 kg/cm^2), and calculated results agree well with test results.

  16. Enhancing Dimensionality Reduction Methods for Side-Channel Attacks

    OpenAIRE

    Cagli, Eleonora; Dumas, Cécile; Prouff, Emmanuel

    2015-01-01

    International audience; Advanced Side-Channel Analyses make use of dimensionality reduction techniques to reduce both the memory and timing complexity of the attacks. The most popular methods to effectuate such a reduction are the Principal Component Analysis (PCA) and the Linear Discriminant Analysis (LDA). They indeed lead to remarkable efficiency gains but their use in side-channel context also raised some issues. The PCA provides a set of vectors (the principal components) onto which pro...

  17. Development and validation of a comprehensive two-dimensional gas chromatography-mass spectrometry method for the analysis of phytosterol oxidation products in human plasma

    NARCIS (Netherlands)

    Menéndez-Carreño, M.; Steenbergen, H.; Janssen, H.-G.

    2012-01-01

    Phytosterol oxidation products (POPs) have been suggested to exert adverse biological effects similar to, although less severe than, their cholesterol counterparts. For that reason, their analysis in human plasma is highly relevant. Comprehensive two-dimensional gas chromatography (GC×GC) coupled

  18. Dimensional analysis, scaling and fractals

    International Nuclear Information System (INIS)

    Timm, L.C.; Reichardt, K.; Oliveira Santos Bacchi, O.

    2004-01-01

    Dimensional analysis refers to the study of the dimensions that characterize physical entities, like mass, force and energy. Classical mechanics is based on three fundamental entities, with dimensions MLT: the mass M, the length L and the time T. The combination of these entities gives rise to derived entities, like volume, speed and force, of dimensions L^3, LT^-1, MLT^-2, respectively. In other areas of physics, four other fundamental entities are defined, among them the temperature θ and the electrical current I. The parameters that characterize physical phenomena are related among themselves by laws, in general of quantitative nature, in which they appear as measures of the considered physical entities. The measure of an entity is the result of its comparison with another one, of the same type, called a unit. Maps are also drawn in scale: for example, in a scale of 1:10,000, 1 cm^2 of paper can represent 10,000 m^2 in the field. Entities that differ in scale cannot be compared in a simple way. Fractal geometry, in contrast to Euclidean geometry, admits fractional dimensions. The term fractal is defined in Mandelbrot (1982) as coming from the Latin fractus, derived from frangere, which signifies to break, to form irregular fragments. The term fractal is opposite to the term algebra (from the Arabic: jabara), which means to join, to put together the parts. For Mandelbrot, fractals are non-topologic objects, that is, objects which have as their dimension a real, non-integer number, which exceeds the topologic dimension. For topologic objects, or Euclidean forms, the dimension is an integer (0 for a point, 1 for a line, 2 for a surface, and 3 for a volume). The fractal dimension of Mandelbrot is a measure of the degree of irregularity of the object under consideration. It is related to the speed by which the estimate of the measure of an object increases as the measurement scale decreases. An object normally taken as uni-dimensional, like a piece of a
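The bookkeeping behind these derived dimensions is simple enough to state as code. Representing each entity as a vector of exponents over the base (M, L, T), products add exponent vectors and quotients subtract them (a generic illustration, not taken from the text):

```python
# Dimension of an entity as exponents over the base (M, L, T):
# multiplying entities adds the exponent vectors, dividing subtracts them.
def dim_mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dim_div(a, b):
    return tuple(x - y for x, y in zip(a, b))

M, L, T = (1, 0, 0), (0, 1, 0), (0, 0, 1)

speed = dim_div(L, T)               # L T^-1
accel = dim_div(speed, T)           # L T^-2
force = dim_mul(M, accel)           # M L T^-2
volume = dim_mul(dim_mul(L, L), L)  # L^3
```

These reproduce exactly the abstract's examples: volume = L^3, speed = LT^-1, force = MLT^-2.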

  19. Fast and accurate finite element analysis of large-scale three-dimensional photonic devices with a robust domain decomposition method.

    Science.gov (United States)

    Xue, Ming-Feng; Kang, Young Mo; Arbabi, Amir; McKeown, Steven J; Goddard, Lynford L; Jin, Jian-Ming

    2014-02-24

    A fast and accurate full-wave technique based on the dual-primal finite element tearing and interconnecting method and the second-order transmission condition is presented for large-scale three-dimensional photonic device simulations. The technique decomposes a general three-dimensional electromagnetic problem into smaller subdomain problems so that parallel computing can be performed on distributed-memory computer clusters to reduce the simulation time significantly. With the electric fields computed everywhere, photonic device parameters such as transmission and reflection coefficients are extracted. Several photonic devices, with simulation volumes up to 1.9×10^4 (λ/n_avg)^3 and modeled with over one hundred million unknowns, are simulated to demonstrate the application, efficiency, and capability of this technique. The simulations show good agreement with experimental results and in a special case with a simplified two-dimensional simulation.

  20. A roadmap to multifactor dimensionality reduction methods.

    Science.gov (United States)

    Gola, Damian; Mahachie John, Jestinah M; van Steen, Kristel; König, Inke R

    2016-03-01

    Complex diseases are determined by multiple genetic and environmental factors, both alone and in interaction. To analyze interactions in genetic data, many statistical methods have been suggested, with most of them relying on statistical regression models. Given the known limitations of classical methods, approaches from the machine-learning community have also become attractive. From this latter family, a fast-growing collection of methods has emerged that are based on the Multifactor Dimensionality Reduction (MDR) approach. Since its first introduction, MDR has enjoyed great popularity in applications and has been extended and modified multiple times. Based on a literature search, we here provide a systematic and comprehensive overview of these suggested methods. The methods are described in detail, and the availability of implementations is listed. Most recent approaches are able to deal with large-scale data sets and rare variants, which is why we expect these methods to gain even more popularity. © The Author 2015. Published by Oxford University Press.

  1. Serial 3-dimensional computed tomography and a novel method of volumetric analysis for the evaluation of the osteo-odonto-keratoprosthesis.

    Science.gov (United States)

    Sipkova, Zuzana; Lam, Fook Chang; Francis, Ian; Herold, Jim; Liu, Christopher

    2013-04-01

    To assess the use of serial computed tomography (CT) in the detection of osteo-odonto-lamina resorption in osteo-odonto-keratoprosthesis (OOKP) and to investigate the use of new volumetric software, Advanced Lung Analysis software (3D-ALA; GE Healthcare), for detecting changes in OOKP laminar volume. A retrospective assessment of the radiological databases and hospital records was performed for 22 OOKP patients treated at the National OOKP referral center in Brighton, United Kingdom. Three-dimensional surface reconstructions of the OOKP laminae were performed using stored CT data. For the 2-dimensional linear analysis, the linear dimensions of the reconstructed laminae were measured, compared with original measurements taken at the time of surgery, and then assigned a CT grade based on a predetermined resorption grading scale. The volumetric analysis involved calculating the laminar volumes using 3D-ALA. The effectiveness of 2-dimensional linear analysis, volumetric analysis, and clinical examination in detecting laminar resorption was compared. The mean change in laminar volume between the first and second scans was -6.67% (range, +10.13% to -24.86%). CT grades assigned to patients based on laminar dimension measurements remained the same, despite significant changes in laminar volumes. Clinical examination failed to identify 60% of patients who were found to have resorption on volumetric analysis. Currently, the detection of laminar resorption relies on clinical examination and the measurement of laminar dimensions on the 2- and 3-dimensional radiological images. Laminar volume measurement is a useful new addition to the armamentarium. It provides an objective tool that allows for a precise and reproducible assessment of laminar resorption.

  2. Two-dimensional multifractal cross-correlation analysis

    International Nuclear Information System (INIS)

    Xi, Caiping; Zhang, Shuning; Xiong, Gang; Zhao, Huichang; Yang, Yonghong

    2017-01-01

    Highlights: • We study the mathematical models of 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Present the definition of the two-dimensional N^2-partitioned multiplicative cascading process. • Do the comparative analysis of 2D-MC by 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Provide a reference on the choice and parameter settings of these methods in practice. - Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross-correlations. This paper presents two-dimensional multifractal cross-correlation analysis based on the partition function (2D-MFXPF), two-dimensional multifractal cross-correlation analysis based on the detrended fluctuation analysis (2D-MFXDFA) and two-dimensional multifractal cross-correlation analysis based on the detrended moving average analysis (2D-MFXDMA). We apply these methods to pairs of two-dimensional multiplicative cascades (2D-MC) to do a comparative study. Then, we apply the two-dimensional multifractal cross-correlation analysis based on the detrended fluctuation analysis (2D-MFXDFA) to real images and unveil intriguing multifractality in the cross correlations of the material structures. At last, we give the main conclusions and provide a valuable reference on how to choose the multifractal algorithms in the potential applications in the field of SAR image classification and detection.

  3. Comprehensive two-dimensional gas chromatography-mass spectrometry analysis of volatile constituents in Thai vetiver root oils obtained by using different extraction methods.

    Science.gov (United States)

    Pripdeevech, Patcharee; Wongpornchai, Sugunya; Marriott, Philip J

    2010-01-01

    Vetiver root oil is known as one of the finest fixatives used in perfumery. This highly complex oil contains more than 200 components, mainly sesquiterpene hydrocarbons and their oxygenated derivatives. Since conventional GC-MS has limitations in terms of separation efficiency, comprehensive two-dimensional GC-MS (GC x GC-MS) was proposed in this study as an alternative technique for the analysis of vetiver oil constituents. The aim was to evaluate the efficiency of the hyphenated GC x GC-MS technique in terms of separation power and sensitivity prior to identification and quantitation of the volatile constituents in a variety of vetiver root oil samples. Dried roots of Vetiveria zizanioides were subjected to extraction under various conditions of four different methods: simultaneous steam distillation, supercritical fluid, microwave-assisted, and Soxhlet extraction. Volatile components in all vetiver root oil samples were separated and identified by GC-MS and GC x GC-MS. The relative contents of volatile constituents in each vetiver oil sample were calculated using the peak volume normalization method. Different techniques of extraction had diverse effects on the yield and on the physical and chemical properties of the vetiver root oils obtained. Overall, 64 volatile constituents were identified by GC-MS. Among the 245 well-resolved individual components obtained by GC x GC-MS, the additional identification of 43 more volatiles was achieved. In comparison with GC-MS, GC x GC-MS showed greater ability to differentiate the quality of essential oils obtained from diverse extraction conditions in terms of their volatile compositions and contents.
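The quantitation step mentioned above, peak volume normalization, reduces to dividing each resolved peak's volume by the total volume of all peaks. A generic sketch with invented peak volumes (the compound names are placeholders, not values from the study):

```python
# Peak volume normalization: each component's relative content (%) is its
# 2D peak volume divided by the summed volume of all resolved peaks.
def normalize_peak_volumes(volumes):
    total = sum(volumes.values())
    return {name: 100.0 * v / total for name, v in volumes.items()}

# Hypothetical peak volumes (arbitrary units) for three constituents.
peaks = {"khusimol": 120.0, "vetiselinenol": 60.0, "alpha-vetivone": 20.0}
relative = normalize_peak_volumes(peaks)   # percentages summing to 100
```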

  4. Method for Parametric Design of Three-Dimensional Shapes

    National Research Council Canada - National Science Library

    Dick, James L

    2006-01-01

    The present invention relates to computer-aided design of three-dimensional shapes and more particularly, relates to a system and method for parametric design of three-dimensional hydrodynamic shapes...

  5. Algorithmic dimensionality reduction for molecular structure analysis.

    Science.gov (United States)

    Brown, W Michael; Martin, Shawn; Pollock, Sara N; Coutsias, Evangelos A; Watson, Jean-Paul

    2008-08-14

    Dimensionality reduction approaches have been used to exploit the redundancy in a Cartesian coordinate representation of molecular motion by producing low-dimensional representations of molecular motion. This has been used to help visualize complex energy landscapes, to extend the time scales of simulation, and to improve the efficiency of optimization. Until recently, linear approaches for dimensionality reduction have been employed. Here, we investigate the efficacy of several automated algorithms for nonlinear dimensionality reduction for representation of trans, trans-1,2,4-trifluorocyclo-octane conformation--a molecule whose structure can be described on a 2-manifold in a Cartesian coordinate phase space. We describe an efficient approach for a deterministic enumeration of ring conformations. We demonstrate a drastic improvement in dimensionality reduction with the use of nonlinear methods. We discuss the use of dimensionality reduction algorithms for estimating intrinsic dimensionality and the relationship to the Whitney embedding theorem. Additionally, we investigate the influence of the choice of high-dimensional encoding on the reduction. We show for the case studied that, in terms of reconstruction error root mean square deviation, Cartesian coordinate representations and encodings based on interatom distances provide better performance than encodings based on a dihedral angle representation.
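
    The gain from nonlinear methods on curved manifolds can be seen in a minimal, self-contained sketch. This is our own toy example on a curved one-dimensional manifold (an arc of a circle), not the cyclo-octane data, and the Isomap-style pipeline below (k-nearest-neighbour graph, Floyd-Warshall geodesics, classical MDS) is the standard textbook construction, not the authors' implementation:

```python
import numpy as np

# Toy 1-D manifold (three-quarters of a circle) embedded in 2-D.
t = np.linspace(0.0, 1.5 * np.pi, 40)
X = np.column_stack([np.cos(t), np.sin(t)])

def isomap_1d(X, k=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    g = np.full((n, n), np.inf)          # graph distances, inf = no edge
    np.fill_diagonal(g, 0.0)
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:
            g[i, j] = g[j, i] = d[i, j]  # symmetric k-nearest-neighbour edges
    for m in range(n):                   # Floyd-Warshall shortest paths
        g = np.minimum(g, g[:, m:m + 1] + g[m:m + 1, :])
    J = np.eye(n) - np.ones((n, n)) / n  # classical MDS on geodesic distances
    B = -0.5 * J @ (g ** 2) @ J
    w, v = np.linalg.eigh(B)
    return np.sqrt(w[-1]) * v[:, -1]     # top eigenpair -> 1-D coordinates

emb = isomap_1d(X)
print(abs(np.corrcoef(emb, t)[0, 1]))    # close to 1: arclength recovered
```

    A linear projection of the same arc folds distant points on top of one another, while the geodesic-based embedding recovers the arclength ordering almost perfectly.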

  8. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
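
    The first-order expansion behind this cost saving can be sketched in a few lines. The cut-HDMR form below is a generic illustration (our own choice of reference point and test function, not the paper's fuzzy finite-element setup):

```python
import numpy as np

def hdmr_first_order(f, c):
    """First-order cut-HDMR surrogate of f expanded around reference point c."""
    f0 = f(c)
    def surrogate(x):
        total = f0
        for i in range(len(c)):
            xi = c.copy()
            xi[i] = x[i]                 # vary one input, hold the rest at c
            total += f(xi) - f0          # univariate component function
        return total
    return surrogate

# For an additive test function the first-order expansion is exact.
f = lambda x: x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])
s = hdmr_first_order(f, np.zeros(3))
x = np.array([0.7, -1.2, 0.4])
print(f(x) - s(x))                       # 0.0 for an additive f
```

    Each component function needs only one-dimensional sweeps through the reference point, which is why the number of model evaluations grows polynomially rather than exponentially with the number of variables.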

  9. Development of seismic design method for piping system supported by elastoplastic damper. 3. Vibration test of three-dimensional piping model and its response analysis

    International Nuclear Information System (INIS)

    Namita, Yoshio; Kawahata, Jun-ichi; Ichihashi, Ichiro; Fukuda, Toshihiko.

    1995-01-01

    Component and piping systems in current nuclear power plants and chemical plants are designed to employ many supports to maintain safety and reliability against earthquakes. However, these supports are rigid and have a slight energy-dissipating effect. It is well known that applying high-damping supports to the piping system is very effective for reducing the seismic response. In this study, we investigated the design method of the elastoplastic damper [energy absorber (EAB)] and the seismic design method for a piping system supported by the EAB. Our final goal is to develop technology for applying the EAB to the piping system of an actual plant. In this paper, the vibration test results of the three-dimensional piping model are presented. From the test results, it is confirmed that EAB has a large energy-dissipating effect and is effective in reducing the seismic response of the piping system, and that the seismic design method for the piping system, which is the response spectrum mode superposition method using each modal damping and requires iterative calculation of EAB displacement, is applicable for the three-dimensional piping model. (author)

  10. Dimensionality reduction methods in virtual metrology

    Science.gov (United States)

    Zeng, Dekong; Tan, Yajing; Spanos, Costas J.

    2008-03-01

    The objective of this work is the creation of predictive models that can forecast the electrical or physical parameters of wafers using data collected from the relevant processing tools. In this way, direct measurements from the wafer can be minimized or eliminated altogether, hence the term "virtual" metrology. Challenges include the selection of the appropriate process step to monitor, the pre-treatment of the raw data, and the deployment of a Virtual Metrology Model (VMM) that can track a manufacturing process as it ages. A key step in any VM application is dimensionality reduction, i.e. ensuring that the proper subset of predictors is included in the model. In this paper, a software tool developed with MATLAB is demonstrated for interactive data prescreening and selection. This is combined with a variety of automated statistical techniques. These include step-wise regression and genetic selection in conjunction with linear modeling such as Principal Component Regression (PCR) and Partial Least Squares (PLS). Modeling results based on industrial datasets are used to demonstrate the effectiveness of these methods.
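
    As a concrete but purely illustrative stand-in for such a model, the following sketch fits a Principal Component Regression on synthetic low-rank data; the dimensions and variable names are assumptions, not the industrial datasets used in the paper:

```python
import numpy as np

# Synthetic "tool trace" data generated from a few hidden process factors.
rng = np.random.default_rng(0)
n, p, k = 200, 10, 3                        # samples, predictors, components kept
L = rng.normal(size=(n, k))                 # hidden process factors
X = L @ rng.normal(size=(k, p)) + 0.01 * rng.normal(size=(n, p))
y = L @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=n)

Xc, yc = X - X.mean(0), y - y.mean()        # centre predictors and response
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                           # scores on the leading k components
gamma = np.linalg.lstsq(T, yc, rcond=None)[0]
r2 = 1.0 - np.sum((yc - T @ gamma) ** 2) / np.sum(yc ** 2)
print(round(r2, 3))                         # close to 1 on this synthetic set
```

    Regressing on a small number of component scores instead of all raw predictors is the dimensionality-reduction step the paper describes; PLS differs only in how the components are chosen.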

  11. Two-dimensional gel electrophoresis analysis of different parts of ...

    African Journals Online (AJOL)

    Two-dimensional gel electrophoresis analysis of different parts of Panax quinquefolius L. root. ... From these results it was concluded that the proteomic analysis method was an effective way to identify the different parts of P. quinquefolius L. root. These findings may contribute to further understanding of the physiological ...

  12. Two-dimensional signal analysis

    CERN Document Server

    Garello, René

    2010-01-01

    This title sets out to show that 2-D signal analysis has its own role to play alongside signal processing and image processing. Concentrating its coverage on those 2-D signals coming from physical sensors (such as radars and sonars), the discussion explores a 2-D spectral approach but develops the modeling of 2-D signals and proposes several data-oriented analysis techniques for dealing with them. Coverage is also given to potential future developments in this area.

  13. Dimensional analysis beyond the Pi theorem

    CERN Document Server

    Zohuri, Bahman

    2017-01-01

    Dimensional Analysis and Physical Similarity are well understood subjects, and the general concepts of dynamical similarity are explained in this book. Our exposition is essentially different from those available in the literature, although it follows the general ideas known as Pi Theorem. There are many excellent books that one can refer to; however, dimensional analysis goes beyond Pi theorem, which is also known as Buckingham’s Pi Theorem. Many techniques via self-similar solutions can bound solutions to problems that seem intractable. A time-developing phenomenon is called self-similar if the spatial distributions of its properties at different points in time can be obtained from one another by a similarity transformation, and identifying one of the independent variables as time. However, this is where Dimensional Analysis goes beyond Pi Theorem into self-similarity, which has represented progress for researchers. In recent years there has been a surge of interest in self-similar solutions of the First ...

  14. Dimensional analysis examples of the use of symmetry

    CERN Document Server

    Hornung, Hans G

    2006-01-01

    Derived from a course in fluid mechanics, this text for advanced undergraduates and beginning graduate students employs symmetry arguments to demonstrate the principles of dimensional analysis. The examples provided illustrate the effectiveness of symmetry arguments in obtaining the mathematical form of the functions yielded by dimensional analysis. Students will find these methods applicable to a wide field of interests.After discussing several examples of method, the text examines pipe flow, material properties, gasdynamical examples, body in nonuniform flow, and turbulent flow. Additional t

  15. A method for fabricating a three-dimensional carbon structure

    DEFF Research Database (Denmark)

    2017-01-01

    A method for fabricating a three-dimensional carbon structure (4) is disclosed. A mould (1) defining a three-dimensional shape is provided, and natural protein-containing fibres are packed in the mould (1) at a predetermined packing density. The packed natural protein-containing fibre structure (3) undergoes pyrolysis, either while still in the mould (1) or after having been removed from the mould (1). Thereby a three-dimensional porous and electrically conducting carbon structure (4) having a three-dimensional shape defined by the three-dimensional shape of the mould (1) and a porosity defined...

  16. Three-dimensional reconstruction of an in-situ Miocene peat forest from the lower Rhine Embayment, northwestern Germany - new methods in palaeovegetation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mosbrugger, V.; Gee, C.T.; Belz, G.; Ashraf, A.R. (University of Tübingen, Tübingen (Germany). Inst. of Geology and Palaeontology)

    1994-08-01

    New techniques have been developed for the analysis of stump horizons that result in a relatively detailed three-dimensional reconstruction of ancient forests. In addition, a rough estimate of their above-ground standing biomass can be calculated. These techniques are applied to an in-situ Miocene peat forest preserved in the Lower Rhine Embayment, northwestern Germany. In a study area of 2500 m², 476 stumps were mapped and used in the forest reconstruction. Additionally, pollen samples and leaf remains have been analysed. The peat forest consists primarily of conifers (in particular Taxodiaceae and Pinaceae) with Sciadopitys being the most common genus. The only angiosperms in the wood flora were palms, but in the pollen flora, evidence for the Myricaceae, Mastixiaceae, Ericaceae and a few other angiosperms is also present. The forest was relatively dense with 1904 trees/ha and a basal area of 164 m². Mean trunk diameter was 28 cm, while mean tree height is calculated to have been 9.9 m. Estimated above-ground biomass is 750 t/ha, but this value also includes dead or partly dead trees. This peat forest does not closely compare with previous reconstructions of Miocene peat forests. Its three-dimensional structure and biomass differ from those of modern bald cypress swamps.

  17. Introducing fluid dynamics using dimensional analysis

    DEFF Research Database (Denmark)

    Jensen, Jens Højgaard

    2013-01-01

    Many aspects of fluid dynamics can be introduced using dimensional analysis, combined with some basic physical principles. This approach is concise and allows exploration of both the laminar and turbulent limits—including important phenomena that are not normally dealt with when fluid dynamics...

  18. Dimensional Analysis as a Trading Card Game.

    Science.gov (United States)

    Machacek, A. C.

    2001-01-01

    Describes a form of dimensional analysis based on a pack of cards for use with students aged 11-16. Presents new approaches to understanding the manipulation of simple equations, and also aids familiarization with the units involved and the relationships between them. Provides differentiation by altering the layout of the cards and varying the…

  19. A two-dimensional matrix image based feature extraction method for classification of sEMG: A comparative analysis based on SVM, KNN and RBF-NN.

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen

    2017-01-01

    The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and is a reflection of neuromuscular activity. Therefore, limb auxiliary equipment can be controlled by sEMG classification in order to help physically disabled patients operate the mouse. The aim was to develop a new method to extract sEMG generated by finger motion and to apply novel features to classify sEMG. A window-based data acquisition method was presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix image based feature extraction method, which differs from the classical methods based on the time domain or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were separately acquired. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method, which can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method extracts features by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.
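
    The window-based acquisition and matrix-image step can be caricatured as follows. The paper does not specify the exact mapping, so the reshape-plus-energy transform here is purely an assumed illustration of turning 1-D signal windows into 2-D feature maps:

```python
import numpy as np

def windows_to_maps(sig, win=64, rows=8):
    """Cut a 1-D signal into windows and fold each into a 2-D energy map."""
    n = len(sig) // win
    maps = sig[:n * win].reshape(n, rows, win // rows)
    return maps ** 2                     # energy-like transform (assumed here)

sig = np.sin(np.linspace(0.0, 20.0 * np.pi, 640))   # stand-in for an sEMG trace
maps = windows_to_maps(sig)
print(maps.shape)                        # (10, 8, 8): ten 8x8 feature maps
```

    Each 8x8 map could then be flattened or fed directly to an image-style classifier such as the SVM, KNN or RBF-NN mentioned above.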

  20. Three-dimensional display techniques: description and critique of methods

    International Nuclear Information System (INIS)

    Budinger, T.F.

    1982-01-01

    The recent advances in non-invasive medical imaging of the 3-dimensional spatial distribution of radionuclides, X-ray attenuation coefficients, and nuclear magnetic resonance parameters necessitate the development of a general method for displaying these data. The objective of this paper is to give a systematic description and comparison of known methods for displaying three-dimensional data. The discussion of display methods is divided into two major categories: 1) computer-graphics methods which use a two-dimensional display screen; and 2) optical methods (such as holography, stereopsis and vari-focal systems)

  1. Utility of three-dimensional method for diagnosing meniscal lesions

    International Nuclear Information System (INIS)

    Ohshima, Suguru; Nomura, Kazutoshi; Hirano, Mako; Hashimoto, Noburo; Fukumoto, Tetsuya; Katahira, Kazuhiro

    1998-01-01

    MRI of the knee is a useful method for diagnosing meniscal tears. Although the spin echo method is usually used for diagnosing meniscal tears, we examined the utility of thin slice scanning with the three-dimensional method. We reviewed 70 menisci in which arthroscopic findings were confirmed. In this series, sensitivity was 90.9% for medial meniscal injuries and 68.8% for lateral meniscal injuries. There were 3 meniscal tears that we could not detect on preoperative MRI; we could find tears in two of these cases when the same MRI was re-evaluated. In conclusion, the three-dimensional method gives the same diagnostic rate as the spin echo method, while its scan time is 3 minutes compared with 17 minutes for the spin echo method. Thin slice scanning with the three-dimensional method is useful for screening meniscal injuries before arthroscopy. (author)

  2. Dimensional Analysis in Physics and the Buckingham Theorem

    Science.gov (United States)

    Misic, Tatjana; Najdanovic-Lukic, Marina; Nesic, Ljubisa

    2010-01-01

    Dimensional analysis is a simple, clear and intuitive method for determining the functional dependence of physical quantities that are of importance to a certain process. However, in physics textbooks, very little space is usually given to this approach and it is often presented only as a diagnostic tool used to determine the validity of…

  3. Uni-dimensional double development HPTLC-densitometry method for simultaneous analysis of mangiferin and lupeol content in mango (Mangifera indica) pulp and peel during storage.

    Science.gov (United States)

    Jyotshna; Srivastava, Pooja; Killadi, Bharti; Shanker, Karuna

    2015-06-01

    Mango (Mangifera indica) fruit is one of the important commercial fruit crops of India. Like other tropical fruits, it is highly perishable in nature. During storage/ripening, changes in its physico-chemical quality parameters, viz. firmness, titrable acidity, total soluble solid content (TSSC), carotenoid content, and other biochemicals, are inevitable. A uni-dimensional double-development high-performance thin-layer chromatography (UDDD-HPTLC) method was developed for the real-time monitoring of mangiferin and lupeol in mango pulp and peel during storage. The quantitative determination of both compounds of different classes was achieved by a densitometric HPTLC method. Silica gel 60F254 HPTLC plates and two solvent systems, viz. toluene/EtOAC/MeOH and EtOAC/MeOH, respectively, were used for optimum separation and selective evaluation. Densitometric quantitation of mangiferin was performed at 390 nm, and of lupeol at 610 nm after post-chromatographic derivatization. The validated method was used for real-time monitoring of mangiferin and lupeol content during storage in four Indian cultivars, e.g. Bombay green (Bgreen), Dashehari, Langra, and Chausa. Significant correlations (p<0.05) of acidity and TSSC with mangiferin and lupeol in pulp and peel during storage were also observed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Method of dimensionality reduction in contact mechanics and friction

    CERN Document Server

    Popov, Valentin L

    2015-01-01

    This book describes for the first time a simulation method for the fast calculation of contact properties and friction between rough surfaces in a complete form. In contrast to existing simulation methods, the method of dimensionality reduction (MDR) is based on the exact mapping of various types of three-dimensional contact problems onto contacts of one-dimensional foundations. Within the confines of MDR, not only are three-dimensional systems reduced to one-dimensional ones, but the resulting degrees of freedom are also independent of one another. Therefore, MDR results in an enormous reduction of the development time for the numerical implementation of contact problems as well as the direct computation time, and can ultimately assume a similar role in tribology as FEM has in structural mechanics or CFD methods in hydrodynamics. Furthermore, it substantially simplifies analytical calculation and presents a sort of “pocket book edition” of the entirety of contact mechanics. Measurements of the rheology of bodies in...

  5. Method of signal analysis

    International Nuclear Information System (INIS)

    Berthomier, Charles

    1975-01-01

    A method capable of handling the amplitude and frequency-time laws of a certain kind of geophysical signal is described here. This method is based upon the analytical signal idea of Gabor and Ville, which is constructed either in the time domain by adding an imaginary part to the real signal (in-quadrature signal), or in the frequency domain by suppressing negative frequency components. The instantaneous frequency of the initial signal is then defined as the time derivative of the phase of the analytical signal, and its amplitude, or envelope, as the modulus of this complex signal. The method is applied to three types of magnetospheric signals: chorus, whistlers and pearls. The results obtained by analog and numerical calculations are compared to results obtained by classical systems using filters, i.e. based upon a different definition of the concept of frequency. The precision with which the frequency-time laws are determined then leads to an examination of the principle of the method, to a definition of the instantaneous power density spectrum attached to the signal, and to the first consequences of this definition. In this way, a two-dimensional representation of the signal is introduced which is less deformed by the properties of the analysis system than the usual representation, and which moreover has the advantage of being obtainable practically in real time [fr]

  6. Optimal Feature Selection in High-Dimensional Discriminant Analysis.

    Science.gov (United States)

    Kolar, Mladen; Liu, Han

    2015-02-01

    We consider the high-dimensional discriminant analysis problem. For this problem, different methods have been proposed and justified by establishing exact convergence rates for the classification risk, as well as ℓ2 convergence results to the discriminative rule. However, a sharp theoretical analysis of the variable selection performance of these procedures has not been established, even though model interpretation is of fundamental importance in scientific data analysis. This paper bridges the gap by providing sharp sufficient conditions for consistent variable selection using the sparse discriminant analysis (Mai et al., 2012). Through careful analysis, we establish rates of convergence that are significantly faster than the best known results and admit an optimal scaling of the sample size n, dimensionality p, and sparsity level s in the high-dimensional setting. Sufficient conditions are complemented by the necessary information theoretic limits on the variable selection problem in the context of high-dimensional discriminant analysis. Exploiting a numerical equivalence result, our method also establishes the optimal results for the ROAD estimator (Fan et al., 2012) and the sparse optimal scaling estimator (Clemmensen et al., 2011). Furthermore, we analyze an exhaustive search procedure, whose performance serves as a benchmark, and show that it is variable selection consistent under weaker conditions. Extensive simulations demonstrating the sharpness of the bounds are also provided.

  7. Analysis of three-dimensional transonic compressors

    Science.gov (United States)

    Bourgeade, A.

    1984-01-01

    A method for computing the three-dimensional transonic flow around the blades of a compressor or of a propeller is given. The method is based on the use of the velocity potential, on the hypothesis that the flow is inviscid, irrotational and isentropic. The equation of the potential is solved in a transformed space such that the surface of the blade is mapped into a plane where the periodicity is implicit. This equation is in a nonconservative form and is solved with the help of a finite difference method using artificial time. A computer code is provided and some sample results are given in order to demonstrate the influence of three-dimensional effects and the blade's rotation.

  8. Dimensional analysis using toric ideals: primitive invariants.

    Directory of Open Access Journals (Sweden)

    Mark A Atherton

    Classical dimensional analysis in its original form starts by expressing the units for derived quantities, such as force, in terms of power products of basic units [Formula: see text] etc. This suggests the use of toric ideal theory from algebraic geometry. Within this, the Graver basis provides a unique primitive basis in a well-defined sense, which typically has more terms than the standard Buckingham approach. Some textbook examples are revisited and the full set of primitive invariants found. First, a worked example based on convection is introduced to recall the Buckingham method, but using computer algebra to obtain an integer [Formula: see text] matrix from the initial integer [Formula: see text] matrix holding the exponents for the derived quantities. The [Formula: see text] matrix defines the dimensionless variables. But, rather than this integer linear algebra approach, it is shown how, by staying with the power product representation, the full set of invariants (dimensionless groups) is obtained directly from the toric ideal defined by [Formula: see text]. One candidate for the set of invariants is a simple basis of the toric ideal. This, although larger than the rank of [Formula: see text], is typically not unique. However, the alternative Graver basis is unique and defines a maximal set of invariants, which are primitive in a simple sense. In addition to the running example, four examples are taken from: a windmill, convection, electrodynamics and the hydrogen atom. The method reveals some named invariants. A selection of computer algebra packages is used to show the considerable ease with which both a simple basis and a Graver basis can be found.
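
    The integer linear algebra approach mentioned above amounts to finding the nullspace of the matrix of dimension exponents. A sketch on a textbook example of our own choosing (the simple pendulum; the paper's worked examples are convection, a windmill, electrodynamics and the hydrogen atom):

```python
import numpy as np

# Rows: exponents of base units M, L, T; columns: quantities t, l, g, m
# (period, length, gravitational acceleration, mass of a simple pendulum).
A = np.array([[0.0, 0.0, 0.0, 1.0],    # M
              [0.0, 1.0, 1.0, 0.0],    # L
              [1.0, 0.0, -2.0, 0.0]])  # T
_, _, Vh = np.linalg.svd(A)
null = Vh[-1]                          # A has rank 3, so the nullspace is 1-D
null = null / null[np.argmax(np.abs(null))]   # scale largest exponent to 1
print(np.round(null, 6))               # exponents of (t, l, g, m)
```

    The single nullspace vector is proportional to (2, -1, 1, 0), i.e. the classical invariant Pi = t²g/l (equivalently t·sqrt(g/l)). A Graver basis, by contrast, is not produced by plain linear algebra and needs specialized software such as 4ti2.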

  9. Random projection-based dimensionality reduction method for hyperspectral target detection

    Science.gov (United States)

    Feng, Weiyi; Chen, Qian; He, Weiji; Arce, Gonzalo R.; Gu, Guohua; Zhuang, Jiayan

    2015-09-01

    Dimensionality reduction is a frequent preprocessing step in hyperspectral image analysis. High-dimensional data will cause the issue of the "curse of dimensionality" in the applications of hyperspectral imagery. In this paper, a dimensionality reduction method of hyperspectral images based on random projection (RP) for target detection was investigated. In the application areas of hyperspectral imagery, e.g. target detection, the high dimensionality of the hyperspectral data would lead to burdensome computations. Random projection is attractive in this area because it is data independent and computationally more efficient than other widely-used hyperspectral dimensionality-reduction methods, such as Principal Component Analysis (PCA) or the maximum-noise-fraction (MNF) transform. In RP, the original high-dimensional data is projected onto a low-dimensional subspace using a random matrix, which is very simple. Theoretical and experimental results indicated that random projections preserved the structure of the original high-dimensional data quite well without introducing significant distortion. In the experiments, Constrained Energy Minimization (CEM) was adopted as the target detector and a RP-based CEM method for hyperspectral target detection was implemented to reveal that random projections might be a good alternative as a dimensionality reduction tool of hyperspectral images to yield improved target detection with higher detection accuracy and lower computation time than other methods.
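
    The core of RP is a single matrix multiplication. The sketch below (synthetic Gaussian "pixels", not a real hyperspectral cube; the dimensions are chosen arbitrarily) shows the distance-preservation property that makes the method attractive:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 100, 200, 40                    # pixels, spectral bands, reduced dim
X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)  # data-independent random matrix
Y = X @ R                                 # one multiply performs the reduction

def pdist(Z):
    """Upper-triangle pairwise Euclidean distances."""
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    return D[np.triu_indices(len(Z), 1)]

ratio = pdist(Y) / pdist(X)
print(round(float(ratio.mean()), 2))      # near 1: distances roughly preserved
```

    No eigendecomposition of the data is needed, which is the computational advantage over PCA or MNF noted above.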

  10. Epistasis analysis using multifactor dimensionality reduction.

    Science.gov (United States)

    Moore, Jason H; Andrews, Peter C

    2015-01-01

    Here we introduce the multifactor dimensionality reduction (MDR) methodology and software package for detecting and characterizing epistasis in genetic association studies. We provide a general overview of the method and then highlight some of the key functions of the open-source MDR software package that is freely distributed. We end with a few examples of published studies of complex human diseases that have used MDR.
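
    The central constructive step of MDR, pooling a pair of genotype attributes into one binary high/low-risk attribute, can be sketched as follows. This is a bare-bones illustration of the idea only, not the MDR software package itself, and the tiny dataset is invented:

```python
import numpy as np

def mdr_attribute(g1, g2, status, levels=3):
    """Collapse two genotype attributes into one binary high/low-risk attribute."""
    thr = status.mean() / (1.0 - status.mean())   # overall case/control ratio
    high = np.zeros((levels, levels), dtype=bool)
    for a in range(levels):
        for b in range(levels):
            cell = (g1 == a) & (g2 == b)          # one two-locus genotype cell
            cases = int(status[cell].sum())
            ctrls = int(cell.sum()) - cases
            high[a, b] = cases > thr * ctrls      # empty cells default to low
    return high[g1, g2].astype(int)

g1 = np.array([0, 0, 1, 1, 2, 2, 0, 1])          # invented genotype codes 0..2
g2 = np.array([0, 1, 0, 1, 0, 1, 0, 1])
status = np.array([1, 0, 1, 0, 0, 0, 1, 0])      # 1 = case, 0 = control
print(mdr_attribute(g1, g2, status))             # [1 0 1 0 0 0 1 0]
```

    The collapsed one-dimensional attribute can then be evaluated with cross-validation, which is how the full MDR procedure searches for epistatic attribute pairs.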

  11. On stability and reflection-transmission analysis of the bipenalty method in contact-impact problems: A one-dimensional, homogeneous case study

    Czech Academy of Sciences Publication Activity Database

    Kopačka, Ján; Tkachuk, A.; Gabriel, Dušan; Kolman, Radek; Bischoff, M.; Plešek, Jiří

    2018-01-01

    Roč. 113, č. 10 (2018), s. 1607-1629 ISSN 0029-5981 R&D Projects: GA MŠk(CZ) EF15_003/0000493; GA ČR(CZ) GA17-22615S; GA ČR GA17-12925S; GA ČR(CZ) GA16-03823S Grant - others:AV ČR(CZ) DAAD-16-12 Program:Bilaterální spolupráce Institutional support: RVO:61388998 Keywords : bipenalty method * explicit time integration * finite element method * penalty method * reflection-transmission analysis * stability analysis Subject RIV: JC - Computer Hardware ; Software OBOR OECD: Applied mechanics Impact factor: 2.162, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/nme.5712/full

  12. Investigation of the spatial structure of des-Gly9-[Arg8]vasopressin by the methods of two-dimensional NMR spectroscopy and theoretical conformational analysis

    International Nuclear Information System (INIS)

    Shenderovich, M.D.; Sekatsis, I.P.; Liepin'sh, E.E.; Nikiforovich, G.V.; Papsuevich, O.S.

    1986-01-01

    An assignment of the 1H NMR signals of des-Gly9-[Arg8]vasopressin in dimethyl sulfoxide has been made by 2D spectroscopy. The SSCCs and temperature coefficients Δδ/ΔT have been obtained for the amide protons, and the system of NOE cross-peaks in the two-dimensional NOESY spectrum has been analyzed. The most important information on the spatial structure of des-Gly9-[Arg8]vasopressin is given by the low value of the temperature coefficient Δδ/ΔT of the Asn5 amide proton and the NOE between the α-protons of Cys1 and Cys6. It is assumed that the screening of the NH proton of the Asn5 residue from the solvent is connected with a β-bend of the backbone in the 2-5 sequence, and that the distance between the CαH atoms of the Cys1 and Cys6 residues does not exceed 4 Å. Bearing these limitations in mind, a theoretical conformational analysis of the molecule has been made. The group of low-energy backbone conformations obtained has been compared with the complete set of NMR characteristics

  13. Two-Dimensional Fourier Transform Analysis of Helicopter Flyover Noise

    Science.gov (United States)

    SantaMaria, Odilyn L.; Farassat, F.; Morris, Philip J.

    1999-01-01

    A method to separate main rotor and tail rotor noise from a helicopter in flight is explored. Being the sum of two periodic signals of disproportionate, or incommensurate frequencies, helicopter noise is neither periodic nor stationary. The single Fourier transform divides signal energy into frequency bins of equal size. Incommensurate frequencies are therefore not adequately represented by any one chosen data block size. A two-dimensional Fourier analysis method is used to separate main rotor and tail rotor noise. The two-dimensional spectral analysis method is first applied to simulated signals. This initial analysis gives an idea of the characteristics of the two-dimensional autocorrelations and spectra. Data from a helicopter flight test is analyzed in two dimensions. The test aircraft are a Boeing MD902 Explorer (no tail rotor) and a Sikorsky S-76 (4-bladed tail rotor). The results show that the main rotor and tail rotor signals can indeed be separated in the two-dimensional Fourier transform spectrum. The separation occurs along the diagonals associated with the frequencies of interest. These diagonals are individual spectra containing only information related to one particular frequency.

  14. Three-dimensional (3D) analysis of the temporomandibular joint

    DEFF Research Database (Denmark)

    Kitai, N.; Kreiborg, S.; Murakami, S.

    Symposium Orthodontics 2001: Where are We Now? Where are We Going?, three-dimensional analysis, temporomandibular joint.

  15. Three dimensional internal electromagnetic pulse calculated by particle source method

    International Nuclear Information System (INIS)

    Wang Yuzhi; Wang Taichun

    1986-01-01

    The numerical results for the primary electric current and the internal electromagnetic pulse were obtained by a particle method in a rectangular cavity. The results obtained from this method are compared with a three-dimensional Euler method, and the two methods are shown to be in good agreement under the same conditions.

  16. Direct Linear Transformation Method for Three-Dimensional Cinematography

    Science.gov (United States)

    Shapiro, Robert

    1978-01-01

    The ability of Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)

  17. Proteome research : two-dimensional gel electrophoresis and identification methods

    National Research Council Canada - National Science Library

    Rabilloud, Thierry, 1961

    2000-01-01

    "Two-dimensional electrophoresis is the central methodology in proteome research, and the state of the art is described in detail in this text, together with extensive coverage of the detection methods available...

  18. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Renyi dimensionalities calculated from experimental data, whether for hadron distributions over rapidity intervals or for particle distributions in an N-dimensional momentum space, one can judge the degree of correlation of the particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
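    A minimal sketch of the Renyi entropies underlying the dimensionalities mentioned above (a one-dimensional box-counting estimate; the box scheme and sample here are illustrative assumptions, not the record's algorithm):

```python
import math
from collections import Counter

def renyi_entropy(points, eps, q):
    """Order-q Renyi entropy of a 1-D sample, counted on boxes of size eps.

    H_q = log(sum_i p_i^q) / (1 - q), with the q -> 1 limit giving the
    Shannon entropy.  Estimating H_q over a range of eps and taking the
    slope against log(1/eps) yields the generalized (Renyi) dimension D_q,
    which is what signals the degree of particle correlation.
    """
    boxes = Counter(int(x / eps) for x in points)
    n = len(points)
    probs = [c / n for c in boxes.values()]
    if q == 1:
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** q for p in probs)) / (1 - q)

# Renyi entropies are non-increasing in q; equality holds only when the
# occupied boxes carry equal probability.
sample = [i / 7.0 for i in range(50)]
h1 = renyi_entropy(sample, 0.25, 1)
h2 = renyi_entropy(sample, 0.25, 2)
print(h1 >= h2)  # prints: True
```

    For a multifractal distribution D_q varies with q, whereas a uniform (uncorrelated) filling of phase space gives the same value for all orders.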

  19. A roadmap to multifactor dimensionality reduction methods

    OpenAIRE

    Gola, Damian; Mahachie John, Jestinah M.; van Steen, Kristel; König, Inke R.

    2015-01-01

    Complex diseases are defined to be determined by multiple genetic and environmental factors alone as well as in interactions. To analyze interactions in genetic data, many statistical methods have been suggested, with most of them relying on statistical regression models. Given the known limitations of classical methods, approaches from the machine-learning community have also become attractive. From this latter family, a fast-growing collection of methods emerged that are based on the Multif...

  20. Curvilinear component analysis for nonlinear dimensionality reduction of hyperspectral images

    Science.gov (United States)

    Lennon, Marc; Mercier, Gregoire; Mouchot, Marie-Catherine; Hubert-Moy, Laurence

    2002-01-01

    This paper presents a nonlinear projection method for multidimensional data, applied to the dimensionality reduction of hyperspectral images. The method, called Curvilinear Component Analysis (CCA), consists of reproducing as faithfully as possible the topology of the joint distribution of the data in a projection subspace whose dimension is lower than that of the initial space, thus preserving a maximum amount of information. Curvilinear Distance Analysis (CDA) is an improvement of CCA that allows data containing strong nonlinearities to be projected. Its interest for reducing the dimension of hyperspectral images is shown. The results are presented on real hyperspectral images and compared with the usual linear projection methods.

  1. Explorative data analysis of two-dimensional electrophoresis gels

    DEFF Research Database (Denmark)

    Schultz, J.; Gottlieb, D.M.; Petersen, Marianne Kjerstine

    2004-01-01

    Methods for classification of two-dimensional (2-DE) electrophoresis gels based on multivariate data analysis are demonstrated. Two-dimensional gels of ten wheat varieties are analyzed, and it is demonstrated how to classify the wheat varieties into two qualities; a method for initial screening of gels is presented. First, an approach is demonstrated in which no prior knowledge of the separated proteins is used. Alignment of the gels followed by a simple transformation of data makes it possible to analyze the gels in an automated explorative manner by principal component analysis, to determine if the gels should be further analyzed. A more detailed approach analyzes spot volume lists by principal component analysis and partial least squares regression. The use of spot volume data offers a means to investigate the spot pattern and link the classified protein patterns to distinct spots.
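    The principal component analysis step applied to spot volume lists can be sketched with a plain power iteration; this is a generic PCA illustration, not the authors' pipeline, and the toy "gels" below are invented:

```python
import math

def first_principal_component(X, iters=200):
    """First principal component of X (rows = gels, columns = spot volumes),
    computed by power iteration on the column-centered data.
    """
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[row[j] - means[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        # w = C^T C v: one multiplication by the (unscaled) covariance matrix
        s = [sum(C[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(C[i][j] * s[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    scores = [sum(C[i][j] * v[j] for j in range(d)) for i in range(n)]
    return v, scores

# Two invented "qualities" of gels separate along the first PC scores
gels = [[0.0, 0.0, 5.0], [0.1, 0.0, 5.2], [4.0, 4.0, 0.0], [4.2, 3.9, 0.1]]
v, scores = first_principal_component(gels)
print([round(s, 2) for s in scores])
```

    Plotting the scores (one point per gel) is the explorative screening step: gels of the same quality cluster together, and outlying gels are flagged for closer inspection.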

  2. Variational four-dimensional analysis using quasi-geostrophic constraints

    Science.gov (United States)

    Derber, John C.

    1987-01-01

    A variational four-dimensional analysis technique using quasi-geostrophic models as constraints is examined using gridded fields as data. The analysis method uses a standard iterative nonlinear minimization technique to find the solution to the constraining forecast model which best fits the data as measured by a predefined functional. The minimization algorithm uses the derivative of the functional with respect to each of the initial condition values. This derivative vector is found by inserting the weighted differences between the model solution and the inserted data into a backwards integrating adjoint model. The four-dimensional analysis system was examined by applying it to fields created from a primitive equations model forecast and to fields created from satellite retrievals. The results show that the technique has several interesting characteristics not found in more traditional four-dimensional assimilation techniques. These features include a close fit of the model solution to the observations throughout the analysis interval and an insensitivity to the frequency of data insertion or the amount of data. The four-dimensional analysis technique is very versatile and can be extended to more complex problems with little theoretical difficulty.
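    The adjoint idea described above, obtaining the derivative of the functional with respect to the initial condition by inserting weighted model-minus-data differences into a backward integration, can be sketched for a scalar linear model; the model, weights and step size are illustrative assumptions, not the paper's quasi-geostrophic setup:

```python
def cost_and_gradient(x0, a, obs):
    """Cost and gradient w.r.t. the initial condition for x_{t+1} = a * x_t,
    fit to observations obs[t] at t = 0 .. T-1.

    Forward sweep integrates the model; the backward (adjoint) sweep
    lam_t = a * lam_{t+1} + (x_t - obs_t) accumulates exactly
    dJ/dx0 = sum_t a**t * (x_t - obs_t).
    """
    xs = [x0]
    for _ in range(len(obs) - 1):
        xs.append(a * xs[-1])
    J = 0.5 * sum((x - y) ** 2 for x, y in zip(xs, obs))
    lam = 0.0
    for x, y in zip(reversed(xs), reversed(obs)):
        lam = a * lam + (x - y)
    return J, lam

# Recover the true initial condition x0 = 2 from noiseless observations
a = 0.9
truth = [2 * a ** t for t in range(10)]
x0 = 0.0
for _ in range(100):
    J, g = cost_and_gradient(x0, a, truth)
    x0 -= 0.2 * g  # steepest descent; real systems use quasi-Newton methods
print(round(x0, 3))  # prints: 2.0
```

    The resulting trajectory fits the data over the whole analysis interval at once, which is the behavior the abstract contrasts with sequential insertion schemes.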

  3. DFT and two-dimensional correlation analysis methods for evaluating the Pu(3+)-Pu(4+) electronic transition of plutonium-doped zircon.

    Science.gov (United States)

    Bian, Liang; Dong, Fa-qin; Song, Mian-xin; Dong, Hai-liang; Li, Wei-Min; Duan, Tao; Xu, Jin-bao; Zhang, Xiao-yan

    2015-08-30

    Understanding how plutonium (Pu) doping affects the crystalline zircon structure is very important for risk management. However, so far, there have been only a very limited number of reports of the quantitative simulation of the effects of the Pu charge and concentration on the phase transition. In this study, we used density functional theory (DFT), virtual crystal approximation (VCA), and two-dimensional correlation analysis (2D-CA) techniques to calculate the origins of the structural and electronic transitions of Zr1-cPucSiO4 over a wide range of Pu doping concentrations (c=0-10mol%). The calculations indicated that the low-angular-momentum Pu-fxy-shell electron excites an inner-shell O-2s(2) orbital to create an oxygen defect (VO-s) below c=2.8mol%. This oxygen defect then captures a low-angular-momentum Zr-5p(6)5s(2) electron to form an sp hybrid orbital, which exhibits a stable phase structure. When c>2.8mol%, each accumulated VO-p defect captures a high-angular-momentum Zr-4dz electron and two Si-pz electrons to create delocalized Si(4+)→Si(2+) charge disproportionation. Therefore, we suggest that the optimal amount of Pu cannot exceed 7.5mol% because of the formation of a mixture of ZrO8 polyhedral and SiO4 tetrahedral phases with the orientation (10-1). This study offers new perspective on the development of highly stable zircon-based solid solution materials. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Three-dimensional space charge calculation method

    International Nuclear Information System (INIS)

    Lysenko, W.P.; Wadlinger, E.A.

    1981-01-01

    A method is presented for calculating space-charge forces suitable for use in a particle tracing code. Poisson's equation is solved in three dimensions with boundary conditions specified on an arbitrary surface by using a weighted residual method. Using a discrete particle distribution as our source input, examples are shown of off-axis, bunched beams of noncircular crosssection in radio-frequency quadrupole (RFQ) and drift-tube linac geometries

  5. Three-dimensional turbopump flowfield analysis

    Science.gov (United States)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  6. Three-dimensional ultrasound palmprint recognition using curvature methods

    Science.gov (United States)

    Iula, Antonio; Nardiello, Donatella

    2016-05-01

    Palmprint recognition systems that use three-dimensional (3-D) information of the palm surface are the most recently explored techniques to overcome some two-dimensional palmprint difficulties. These techniques are based on light structural imaging. In this work, a 3-D ultrasound palmprint recognition system is proposed and evaluated. Volumetric images of a region of the human hand are obtained by moving an ultrasound linear array along its elevation direction and one by one acquiring a number of B-mode images, which are then grouped in a 3-D matrix. The acquisition time was contained in about 5 s. Much information that can be exploited for 3-D palmprint recognition is extracted from the ultrasound volumetric images, including palm curvature and other under-skin information as the depth of the various traits. The recognition procedure developed in this work is based on the analysis of the principal curvatures of palm surface, i.e., mean curvature image, Gaussian curvature image, and surface type. The proposed method is evaluated by performing verification and identification experiments. Preliminary results have shown that the proposed system exhibits an acceptable recognition rate. Further possible improvements of the proposed technique are finally highlighted and discussed.

  7. Two-dimensional versus three-dimensional laparoscopy in surgical efficacy: a systematic review and meta-analysis.

    Science.gov (United States)

    Cheng, Ji; Gao, Jinbo; Shuai, Xiaoming; Wang, Guobin; Tao, Kaixiong

    2016-10-25

    Laparoscopy is a revolutionary technique in modern surgery. However, the comparative efficacy of two-dimensional versus three-dimensional laparoscopy remains uncertain; we therefore performed this systematic review and meta-analysis. The databases PubMed, Web of Science, EMBASE and Cochrane Library were carefully screened, and clinical trials comparing two-dimensional with three-dimensional laparoscopy were included for pooled analysis. Observational and randomized trials were methodologically appraised with the Newcastle-Ottawa Scale and the Revised Jadad Scale, respectively. Subgroup analyses were additionally conducted to clarify potential confounding elements. Outcome stability was examined by sensitivity analysis, and publication bias was analyzed by Begg's test and Egger's test. 21 trials were selected from the preliminary 3126 records. All included studies were of high methodological quality, except for Bilgen 2013 and Ruan 2015. Three-dimensional laparoscopy was superior to two-dimensional laparoscopy in terms of surgical time and related outcomes in the pooled analysis. Although Begg's test (P = 0.215) and Egger's test (P = 0.003) suggested publication bias across the included studies, the Trim-and-Fill method confirmed that the results remained stable. Three-dimensional laparoscopy is a preferable surgical option to two-dimensional laparoscopy because of its better surgical efficacy.

  8. DFT and two-dimensional correlation analysis methods for evaluating the Pu{sup 3+}–Pu{sup 4+} electronic transition of plutonium-doped zircon

    Energy Technology Data Exchange (ETDEWEB)

    Bian, Liang, E-mail: bianliang@ms.xjb.ac.cn [Key Laboratory of Functional Materials and Devices for Special Environments, Chinese Academy of Sciences, Urumqi 830011, Xinjiang (China); Laboratory for Extreme Conditions Matter Properties, South West University of Science and Technology, Mianyang 621010, Sichuan (China); Dong, Fa-qin; Song, Mian-xin [Laboratory for Extreme Conditions Matter Properties, South West University of Science and Technology, Mianyang 621010, Sichuan (China); Dong, Hai-liang [Department of Geology and Environmental Earth Science, Miami University, Oxford, OH 45056 (United States); Li, Wei-Min [Key Laboratory of Functional Materials and Devices for Special Environments, Chinese Academy of Sciences, Urumqi 830011, Xinjiang (China); Duan, Tao; Xu, Jin-bao [Laboratory for Extreme Conditions Matter Properties, South West University of Science and Technology, Mianyang 621010, Sichuan (China); Zhang, Xiao-yan [Key Laboratory of Functional Materials and Devices for Special Environments, Chinese Academy of Sciences, Urumqi 830011, Xinjiang (China); Laboratory for Extreme Conditions Matter Properties, South West University of Science and Technology, Mianyang 621010, Sichuan (China)

    2015-08-30

    Highlights: • Effect of Pu f-shell electron on the electronic property of zircon is calculated via DFT and 2D-CA techniques. • Reasons of Pu f-shell electron influencing on electronic properties are systematically discussed. • Phase transitions are found at two point 2.8 mol% and 7.5 mol%. - Abstract: Understanding how plutonium (Pu) doping affects the crystalline zircon structure is very important for risk management. However, so far, there have been only a very limited number of reports of the quantitative simulation of the effects of the Pu charge and concentration on the phase transition. In this study, we used density functional theory (DFT), virtual crystal approximation (VCA), and two-dimensional correlation analysis (2D-CA) techniques to calculate the origins of the structural and electronic transitions of Zr{sub 1−c}Pu{sub c}SiO{sub 4} over a wide range of Pu doping concentrations (c = 0–10 mol%). The calculations indicated that the low-angular-momentum Pu-f{sub xy}-shell electron excites an inner-shell O-2s{sup 2} orbital to create an oxygen defect (V{sub O-s}) below c = 2.8 mol%. This oxygen defect then captures a low-angular-momentum Zr-5p{sup 6}5s{sup 2} electron to form an sp hybrid orbital, which exhibits a stable phase structure. When c > 2.8 mol%, each accumulated V{sub O-p} defect captures a high-angular-momentum Zr-4d{sub z} electron and two Si-p{sub z} electrons to create delocalized Si{sup 4+} → Si{sup 2+} charge disproportionation. Therefore, we suggest that the optimal amount of Pu cannot exceed 7.5 mol% because of the formation of a mixture of ZrO{sub 8} polyhedral and SiO{sub 4} tetrahedral phases with the orientation (10-1). This study offers new perspective on the development of highly stable zircon-based solid solution materials.

  9. A simulation analysis of an extension of one-dimensional speckle correlation method for detection of general in-plane translation

    Czech Academy of Sciences Publication Activity Database

    Hamarová, Ivana; Šmíd, Petr; Horváth, P.; Hrabovský, M.

    2014-01-01

    Roč. 2014, č. 1 (2014), "704368-1"-"704368-12" ISSN 1537-744X R&D Projects: GA ČR GA13-12301S Institutional support: RVO:68378271 Keywords : one-dimensional speckle correlation * speckle * general In-plane translation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.219, year: 2013

  10. System for generating two-dimensional masks from a three-dimensional model using topological analysis

    Science.gov (United States)

    Schiek, Richard [Albuquerque, NM

    2006-06-20

    A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.

  11. High-dimensional data in economics and their (robust) analysis

    Directory of Open Access Journals (Sweden)

    Jan Kalina

    2017-05-01

    Full Text Available This work is devoted to statistical methods for the analysis of economic data with a large number of variables. The authors present a review of references documenting that such data are more and more commonly available in various theoretical and applied economic problems, and that their analysis can hardly be performed with standard econometric methods. The paper focuses on high-dimensional data with a small number of observations and gives an overview of recently proposed methods for their analysis in the context of econometrics, particularly in the areas of dimensionality reduction, linear regression and classification analysis. Further, the performance of various methods is illustrated on a publicly available benchmark data set on credit scoring. In comparison with other authors, robust methods designed to be insensitive to the presence of outlying measurements are also used; their strength is revealed after adding artificial contamination by noise to the original data. In addition, the performance of various methods for prior dimensionality reduction of the data is compared.

  12. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)

  13. Development of a rapid method for the quantitative analysis of four methoxypyrazines in white and red wine using multi-dimensional Gas Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Botezatu, Andreea; Pickering, Gary J; Kotseridis, Yorgos

    2014-10-01

    Alkyl-methoxypyrazines (MPs) are important odour-active constituents of many grape cultivars and their wines. Recently, a new MP, 2,5-dimethyl-3-methoxypyrazine (DMMP), has been reported as a possible constituent of wine. This study sought to develop a rapid and reliable method for quantifying DMMP, isopropyl methoxypyrazine (IPMP), sec-butyl methoxypyrazine (SBMP) and isobutyl methoxypyrazine (IBMP) in wine. The proposed method is able to rapidly and accurately resolve all 4 MPs in a range of wine styles, with limits of detection between 1 and 2 ng/L for IPMP, SBMP and IBMP and 5 ng/L for DMMP. Analysis of a set of 11 commercial wines agrees with previously published values for IPMP, SBMP and IBMP, and shows for the first time that DMMP may be an important and somewhat common odorant in red wines. To our knowledge, this is the first analytical method developed for the quantification of DMMP in wine. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  15. Signal processing using methods of dimensionality reduction of representation space

    OpenAIRE

    Popovskiy, V. V.

    2003-01-01

    A universal method of signal processing based on reducing the dimensionality of the signal representation space is considered. This work is methodical and generalizing in nature, and aims to draw the attention of specialists and scientists to the unity of the definitions and solutions of problems connected with N-dimensional and confluent representations.

  16. Echocardiography without electrocardiogram using nonlinear dimensionality reduction methods.

    Science.gov (United States)

    Shalbaf, Ahmad; AlizadehSani, Zahra; Behnam, Hamid

    2015-04-01

    The aim of this study is to evaluate the efficiency of a new automatic image processing technique, based on nonlinear dimensionality reduction (NLDR) to separate a cardiac cycle and also detect end-diastole (ED) (cardiac cycle start) and end-systole (ES) frames on an echocardiography system without using ECG. Isometric feature mapping (Isomap) and locally linear embeddings (LLE) are the most popular NLDR algorithms. First, Isomap algorithm is applied on recorded echocardiography images. By this approach, the nonlinear embedded information in sequential images is represented in a two-dimensional manifold and each image is characterized by a symbol on the constructed manifold. Cyclicity analysis of the resultant manifold, which is derived from the cyclic nature of the heart motion, is used to perform cardiac cycle length estimation. Then, LLE algorithm is applied on extracted left ventricle (LV) echocardiography images of one cardiac cycle. Finally, the relationship between consecutive symbols of the resultant manifold by the LLE algorithm, which is based on LV volume changes, is used to estimate ED (cycle start) and ES frames. The proposed algorithms are quantitatively compared to those obtained by a highly experienced echocardiographer from ECG as a reference in 20 healthy volunteers and 12 subjects with pathology. Mean difference in cardiac cycle length, ED, and ES frame estimation between our method and ECG detection by the experienced echocardiographer is approximately 7, 17, and 17 ms (0.4, 1, and 1 frame), respectively. The proposed image-based method, based on NLDR, can be used as a useful tool for estimation of cardiac cycle length, ED and ES frames in echocardiography systems, with good agreement to ECG assessment by an experienced echocardiographer in routine clinical evaluation.

  17. Research on a Rotating Machinery Fault Prognosis Method Using Three-Dimensional Spatial Representations

    Directory of Open Access Journals (Sweden)

    Xiaoni Dong

    2016-01-01

    Full Text Available Process models and parameters are two critical elements of fault prognosis in the operation of rotating machinery. Because a short and rapid response is required, it is important to study robust sensor data representation schemes. However, the conventional holospectrum defined by one-dimensional or two-dimensional methods does not sufficiently present this information in both the frequency and time domains. To supply a complete holospectrum model, a new three-dimensional spatial representation method is proposed. This method integrates improved three-dimensional (3D) holospectra and 3D filtered orbits, leading to the integration of radial and axial vibration features in one bearing section. The results from simulation and experimental analysis on a complex compressor show that the proposed method can present the real operational status and clearly reveal early faults, thus demonstrating great potential for condition-based maintenance prediction in industrial machinery.

  18. Calculation of two-dimensional thermal transients by the method of finite elements

    International Nuclear Information System (INIS)

    Fontoura Rodrigues, J.L.A. da.

    1980-08-01

    The unsteady linear heat conduction analysis through anisotropic and/or heterogeneous media, in either two-dimensional fields with any kind of geometry or three-dimensional fields with axial symmetry, is presented. The boundary conditions and the internal heat generation are assumed time-independent. The solution is obtained by modal analysis employing the finite element method under the Galerkin formulation. Optionally, a reduced-resolution method called the Stoker Economizing Method can be used, which lowers the processing costs of the program. (Author) [pt

  19. Recent Advances in Fragment-Based QSAR and Multi-Dimensional QSAR Methods

    OpenAIRE

    Myint, Kyaw Zeyar; Xie, Xiang-Qun

    2010-01-01

    This paper provides an overview of recently developed two dimensional (2D) fragment-based QSAR methods as well as other multi-dimensional approaches. In particular, we present recent fragment-based QSAR methods such as fragment-similarity-based QSAR (FS-QSAR), fragment-based QSAR (FB-QSAR), Hologram QSAR (HQSAR), and top priority fragment QSAR in addition to 3D- and nD-QSAR methods such as comparative molecular field analysis (CoMFA), comparative molecular similarity analysis (CoMSIA), Topome...

  20. A four-dimensional analysis exactly satisfying equations of motion. [for atmospheric circulation

    Science.gov (United States)

    Hoffman, R. N.

    1986-01-01

    A four-dimensional analysis is applied to spectral nonlinear models of the atmosphere. The experiment reveals that the four-dimensional analysis errors are smaller than measurement errors, the method is stable in an assimilation cycle, and an accurate estimate of the velocity field is maintained using only temperature observations. It is concluded that the four-dimensional analyses display rapid initial error growth and therefore are better than ordinary forecasts from observations for only the first 24 hours.

  1. A New Three-Dimensional Cephalometric Analysis for Orthognathic Surgery

    Science.gov (United States)

    Gateno, Jaime; Xia, James J.; Teichgraeber, John F.

    2010-01-01

    Two basic problems are associated with traditional two-dimensional (2D) cephalometry. First, many important parameters cannot be measured on plain cephalograms; second, most 2D cephalometric measurements are distorted in the presence of facial asymmetry. Three-dimensional (3D) cephalometry, which has been facilitated by the introduction of cone-beam computed tomography scans, can solve these problems. However, before this can be realized, fundamental problems must be solved: the unreliability of internal reference systems and of some 3D measurements, and the lack of tools to assess and measure symmetry. In this manuscript, the authors present a new 3D cephalometric analysis that uses different geometric approaches to solve the fundamental problems previously mentioned. This analysis allows the accurate measurement of the size, shape, position and orientation of the different facial units and incorporates a novel method to measure asymmetry. PMID:21257250

  2. ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis.

    Science.gov (United States)

    Pierson, Emma; Yau, Christopher

    2015-11-02

    Single-cell RNA-seq data allows insight into normal cellular function and various disease states through molecular characterization of gene expression on the single cell level. Dimensionality reduction of such high-dimensional data sets is essential for visualization and analysis, but single-cell RNA-seq data are challenging for classical dimensionality-reduction methods because of the prevalence of dropout events, which lead to zero-inflated data. Here, we develop a dimensionality-reduction method, (Z)ero (I)nflated (F)actor (A)nalysis (ZIFA), which explicitly models the dropout characteristics, and show that it improves modeling accuracy on simulated and biological data sets.
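    The dropout mechanism that ZIFA models explicitly can be illustrated with a small simulation. The double-exponential decay form, a zero observed with probability exp(-lambda * mu^2), is the one used in the ZIFA publication; the value of lambda and the toy data here are arbitrary assumptions of this sketch:

```python
import math
import random

def simulate_dropout(expression, lam=0.1, seed=0):
    """Zero-inflate an expression matrix with ZIFA's dropout model:
    a measurement of magnitude mu is replaced by zero with probability
    exp(-lam * mu**2), so weakly expressed genes drop out most often.
    """
    rng = random.Random(seed)
    return [[0.0 if rng.random() < math.exp(-lam * x * x) else x
             for x in row]
            for row in expression]

# Low-expression entries (value 1) drop out far more often than high (value 8)
data = [[1.0] * 500 + [8.0] * 500]
noisy = simulate_dropout(data)[0]
low_zero = sum(1 for x in noisy[:500] if x == 0.0)
high_zero = sum(1 for x in noisy[500:] if x == 0.0)
print(low_zero > high_zero)  # prints: True
```

    Because the zeros are concentrated among low-expression genes rather than spread at random, a factor model that ignores them is biased; building exp(-lam * mu^2) into the likelihood is what lets ZIFA correct for it.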

  3. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  4. Dimensionality reduction method based on a tensor model

    Science.gov (United States)

    Yan, Ronghua; Peng, Jinye; Ma, Dongmei; Wen, Desheng

    2017-04-01

Dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis reduces the spectral dimension but does not utilize the spatial information of an HSI. Both spatial and spectral information are used when an HSI is modeled as a tensor: the noise in the spatial dimensions is decreased and the spectral dimension is reduced simultaneously. However, this model does not consider factors affecting the spectral signatures of ground objects, which makes further improvement of classification very difficult. The authors propose that the spectral signatures of ground objects are the composite result of multiple factors, such as illumination, mixture, atmospheric scattering, and radiation, which are very difficult to distinguish. Therefore, these factors are synthesized as within-class factors. Within-class factors, class factors, and pixels are selected to model a third-order tensor. Experimental results indicate that the classification accuracy of the new method is higher than that of the previous methods.

  5. A Dimensionality Reduction Technique for Efficient Time Series Similarity Analysis

    Science.gov (United States)

    Wang, Qiang; Megalooikonomou, Vasileios

    2008-01-01

    We propose a dimensionality reduction technique for time series analysis that significantly improves the efficiency and accuracy of similarity searches. In contrast to piecewise constant approximation (PCA) techniques that approximate each time series with constant value segments, the proposed method--Piecewise Vector Quantized Approximation--uses the closest (based on a distance measure) codeword from a codebook of key-sequences to represent each segment. The new representation is symbolic and it allows for the application of text-based retrieval techniques into time series similarity analysis. Experiments on real and simulated datasets show that the proposed technique generally outperforms PCA techniques in clustering and similarity searches. PMID:18496587
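The codebook idea behind Piecewise Vector Quantized Approximation can be sketched minimally (the codebook, segment length, and distance choice below are hypothetical, not the authors' implementation):

```python
import numpy as np

def pvqa_encode(series, codebook, seg_len):
    """Encode a time series as a sequence of codeword indices.

    Sketch of the vector-quantization step: split the series into
    fixed-length segments and replace each segment by the index of the
    nearest codeword (Euclidean distance used for illustration).
    """
    n_seg = len(series) // seg_len
    symbols = []
    for i in range(n_seg):
        seg = series[i * seg_len:(i + 1) * seg_len]
        dists = np.linalg.norm(codebook - seg, axis=1)
        symbols.append(int(np.argmin(dists)))
    return symbols

# toy codebook of three key-sequences of length 4
codebook = np.array([[0, 0, 0, 0],    # flat
                     [0, 1, 2, 3],    # rising
                     [3, 2, 1, 0]])   # falling
series = np.array([0, 1, 2, 3, 3, 2, 1, 0, 0, 0, 0, 0], dtype=float)
print(pvqa_encode(series, codebook, 4))  # → [1, 2, 0]
```

The symbolic output is what makes text-based retrieval techniques applicable to the encoded series downstream.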

  6. A Dimensionality Reduction Technique for Efficient Time Series Similarity Analysis.

    Science.gov (United States)

    Wang, Qiang; Megalooikonomou, Vasileios

    2008-03-01

    We propose a dimensionality reduction technique for time series analysis that significantly improves the efficiency and accuracy of similarity searches. In contrast to piecewise constant approximation (PCA) techniques that approximate each time series with constant value segments, the proposed method--Piecewise Vector Quantized Approximation--uses the closest (based on a distance measure) codeword from a codebook of key-sequences to represent each segment. The new representation is symbolic and it allows for the application of text-based retrieval techniques into time series similarity analysis. Experiments on real and simulated datasets show that the proposed technique generally outperforms PCA techniques in clustering and similarity searches.

  7. hp Spectral element methods for three dimensional elliptic problems ...

    Indian Academy of Sciences (India)

Vol. 125, No. 3, August 2015, pp. 413–447. © Indian Academy of Sciences. h-p Spectral element methods for three dimensional elliptic problems on non-smooth domains, Part II: Proof of stability theorem. P. Dutt, Akhlaq Husain, A. S. Vasudeva Murthy and C. S. Upadhyay. Department of Mathematics & Statistics ...

  8. TreePM Method for Two-Dimensional Cosmological Simulations ...

    Indian Academy of Sciences (India)

... The 2d TreePM code is an accurate and efficient technique to carry out large two-dimensional N-body simulations in cosmology. This hybrid code combines the 2d Barnes and Hut tree method and the 2d Particle– ... ment, we need less than 75 MB of RAM for a simulation with 1024² particles on a 1024² grid.

  9. Eye-movements in a two-dimensional plane: A method for calibration and analysis using the vertical and horizontal EOG

    NARCIS (Netherlands)

    Slangen, J.L.; Woestenburg, J.C.; Verbaten, M.N.

    1984-01-01

    A method for calibration, orthogonalization and standardization of eye movements is described. The method is based on linear transformation of the horizontal and vertical EOG. With this method it is possible to measure the locus of eye fixation on a TV screen and its associated fixation time.
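The linear-transformation step can be illustrated with a least-squares affine calibration (the gains, crosstalk terms, offsets, and fixation points below are all invented for the sketch; the paper's exact procedure may differ):

```python
import numpy as np

# During calibration the subject fixates known screen points; least
# squares then fits an affine map from the horizontal and vertical EOG
# channels back to screen coordinates.
screen = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
true_M = np.array([[40.0, 5.0],       # hypothetical channel gains and
                   [-3.0, 55.0]])     # horizontal/vertical crosstalk
eog = screen @ true_M.T + np.array([2.0, -1.0])  # simulated readings + offset

# fit screen = [eog, 1] @ W  (affine calibration by least squares)
X = np.hstack([eog, np.ones((len(eog), 1))])
W, *_ = np.linalg.lstsq(X, screen, rcond=None)

gaze = X @ W   # estimated locus of fixation on the screen
print(np.allclose(gaze, screen))  # → True (noise-free calibration is exact)
```

With noisy recordings the same fit gives the best linear unbiased estimate of the fixation locus, which is the basis for the orthogonalization and standardization steps.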

  10. 3-Dimensional Stochastic Seepage Analysis of a Yangtze River Embankment

    Directory of Open Access Journals (Sweden)

    Yajun Wang

    2015-01-01

Three-dimensional stochastic simulation was performed to investigate the complexity of the seepage field of an embankment. A three-dimensional anisotropic, heterogeneous, steady-state random seepage finite element model was developed. The material input data were derived from a statistical analysis of strata soil characteristics and geological columns. The Kolmogorov-Smirnov test was used to validate the hypothesis that the Gaussian probability distribution is applicable to the random permeability tensors. A stochastic boundary condition, the random variation of upstream and downstream water levels, was taken into account in the three-dimensional finite element modelling. Furthermore, the functions of sheet-pile breakwater and catchwater were also incorporated as turbulent sources. This case, together with the variability of soil permeability, was analyzed to investigate their influence on the distribution of hydraulic potential and the random evolution of the stochastic seepage field. Results from the stochastic analyses were also compared against those of deterministic analyses. The insights gained in this study suggest it is necessary, feasible, and practical to employ stochastic studies in seepage field problems. The method provides a more comprehensive stochastic algorithm than conventional ones to characterize and analyze three-dimensional random seepage field problems.
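The Kolmogorov-Smirnov check used here can be sketched against a standard normal (the study tested a fitted Gaussian for the permeability tensors; this toy version assumes the data are already standardized):

```python
import math
import numpy as np

def ks_statistic_normal(x):
    """One-sample Kolmogorov-Smirnov statistic against a standard
    normal: the largest gap between the empirical CDF and the
    theoretical CDF."""
    x = np.sort(x)
    n = len(x)
    cdf = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in x])
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each point
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each point
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

rng = np.random.default_rng(42)
sample = rng.standard_normal(500)
# a small statistic (below ~1.36/sqrt(n) at the 5% level) means the
# Gaussian hypothesis is not rejected
print(ks_statistic_normal(sample))
```

In the embankment study the same idea validates whether a Gaussian model is acceptable for each random permeability component before it enters the stochastic finite element model.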

  11. Algorithmic dimensionality reduction for molecular structure analysis

    OpenAIRE

    Brown, W. Michael; Martin, Shawn; Pollock, Sara N.; Coutsias, Evangelos A.; Watson, Jean-Paul

    2008-01-01

    Dimensionality reduction approaches have been used to exploit the redundancy in a Cartesian coordinate representation of molecular motion by producing low-dimensional representations of molecular motion. This has been used to help visualize complex energy landscapes, to extend the time scales of simulation, and to improve the efficiency of optimization. Until recently, linear approaches for dimensionality reduction have been employed. Here, we investigate the efficacy of several automated alg...

  12. A duality view of spectral methods for dimensionality reduction

    OpenAIRE

    Xiao, Lin; Sun, Jun; Boyd, Stephen

    2006-01-01

    We present a unified duality view of several recently emerged spectral methods for nonlinear dimensionality reduction, including Isomap, locally linear embedding, Laplacian eigenmaps, and maximum variance unfolding. We discuss the duality theory for the maximum variance unfolding problem, and show that other methods are directly related to either its primal formulation or its dual formulation, or can be interpreted from the optimality conditions. This duality framework reveals close connectio...

  13. Two-dimensional gel electrophoretic method for mapping DNA replicons.

    OpenAIRE

    Nawotka, K A; Huberman, J A

    1988-01-01

    We describe in detail a method which allows determination of the directions of replication fork movement through segments of DNA for which cloned probes are available. The method uses two-dimensional neutral-alkaline agarose gel electrophoresis followed by hybridization with short probe sequences. The nascent strands of replicating molecules form an arc separated from parental and nonreplicating strands. The closer a probe is to its replication origin or to the origin-proximal end of its rest...

  14. Multicriteria classification method for dimensionality reduction adapted to hyperspectral images

    Science.gov (United States)

    Khoder, Mahdi; Kashana, Serge; Khoder, Jihan; Younes, Rafic

    2017-04-01

Given the rapid growth of high-dimensional datasets, unsupervised methods must cope with variations such as noise degradation while still preserving rare information, and researchers are forced to develop techniques that meet these requirements. In this work, we introduce a dimensionality reduction method that addresses the multiple objectives of the many images, taken from multiple frequency bands, that form a hyperspectral image. The multicriteria classification algorithm compares and classifies these images based on multiple similarity criteria, which allows the selection of particular images from the whole set. The selected images are chosen to represent the original data while respecting certain quality thresholds. Since the number of images in a hyperspectral image defines its dimension, choosing a smaller number of images to represent the data amounts to dimensionality reduction. Results of tests of the developed algorithm on multiple hyperspectral image samples are shown. A subsequent comparative study will show the advantages of this technique over other common methods used in the field of dimensionality reduction.

  15. A computational method for the solution of one-dimensional ...

    Indian Academy of Sciences (India)

... tanh method, sinh–cosh method, homotopy analysis method (HAM), variational iteration method (VIM) and homotopy perturbation method (HPM). The homotopy perturbation method (HPM) was established by Ji-Huan He in 1999. The method has been used by many researchers to analyse a wide variety of scientific ...

  16. Analysis of Two-Dimensional Electrophoresis Gel Images

    DEFF Research Database (Denmark)

    Pedersen, Lars

    2002-01-01

This thesis describes and proposes solutions to some of the currently most important problems in pattern recognition and image analysis of two-dimensional gel electrophoresis (2DGE) images. 2DGE is the leading technique to separate individual proteins in biological samples, with many biological ... and pharmaceutical applications, e.g., drug development. The technique results in an image, where the proteins appear as dark spots on a bright background. However, the analysis of these images is very time consuming and requires a large amount of manual work, so there is a great need for fast, objective, and robust ... methods based on image analysis techniques in order to significantly accelerate this key technology. The methods described and developed fall into three categories: image segmentation, point pattern matching, and a unified approach simultaneously segmenting the image and matching the spots. The main...

  17. Characterizing Awake and Anesthetized States Using a Dimensionality Reduction Method.

    Science.gov (United States)

    Mirsadeghi, M; Behnam, H; Shalbaf, R; Jelveh Moghadam, H

    2016-01-01

    Distinguishing between awake and anesthetized states is one of the important problems in surgery. Vital signals contain valuable information that can be used in prediction of different levels of anesthesia. Some monitors based on electroencephalogram (EEG) such as the Bispectral (BIS) index have been proposed in recent years. This study proposes a new method for characterizing between awake and anesthetized states. We validated our method by obtaining data from 25 patients during the cardiac surgery that requires cardiopulmonary bypass. At first, some linear and non-linear features are extracted from EEG signals. Then a method called "LLE"(Locally Linear Embedding) is used to map high-dimensional features in a three-dimensional output space. Finally, low dimensional data are used as an input to a quadratic discriminant analyzer (QDA). The experimental results indicate that an overall accuracy of 88.4 % can be obtained using this method for classifying the EEG signal into conscious and unconscious states for all patients. Considering the reliability of this method, we can develop a new EEG monitoring system that could assist the anesthesiologists to estimate the depth of anesthesia accurately.

  18. Comparison of dimensionality reduction methods for wood surface inspection

    Science.gov (United States)

    Niskanen, Matti; Silven, Olli

    2003-04-01

Dimensionality reduction methods for visualization map the original high-dimensional data typically into two dimensions. A useful mapping preserves the important information in the data while meeting the needs of a human observer. We have proposed a self-organizing map (SOM)-based approach for visual surface inspection. The method provides the advantages of unsupervised learning and an intuitive user interface that allows one to very easily set and tune the class boundaries based on observations made on the visualization, for example, to adapt to changing conditions or material. There are, however, some problems with a SOM. It does not address the true distances between data, and it has a tendency to ignore rare samples in the training set at the expense of more accurate representation of common samples. In this paper, some alternative methods to a SOM are evaluated. These methods, PCA, MDS, LLE, ISOMAP, and GTM, are used to reduce dimensionality in order to visualize the data. Their principal differences are discussed and their performances quantitatively evaluated in a few special classification cases, such as in wood inspection using centile features. For the test material experimented with, SOM and GTM outperform the others when classification performance is considered. For data mining kinds of applications, ISOMAP and LLE appear to be more promising methods.
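Of the methods compared, the linear baseline is the easiest to sketch; a minimal PCA projection to two dimensions is shown below (the nonlinear methods, LLE, ISOMAP, and GTM, require neighborhood graphs or latent-variable fitting and are not reproduced here):

```python
import numpy as np

def pca_2d(X):
    """Project data onto its top two principal components.

    Minimal sketch: center the data, eigendecompose the covariance
    matrix, and project onto the two eigenvectors with the largest
    eigenvalues.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top2 = vecs[:, ::-1][:, :2]        # two largest
    return Xc @ top2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # toy stand-in for centile features
Y = pca_2d(X)
print(Y.shape)  # → (100, 2)
```

The first output axis carries at least as much variance as the second, which is the property the nonlinear alternatives trade away in exchange for preserving neighborhood structure.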

  19. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    Science.gov (United States)

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied on biobricks datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated to discriminate biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks could be discriminated and inappropriately labeled biobricks could be determined, which could help to assess crowdsourcing-based synthetic biology databases' quality, and make biobricks selection.
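The pipeline of normalized edit distance feeding a spectral embedding can be sketched in plain NumPy (the toy sequences, the Gaussian affinity, and the bandwidth `sigma` are illustrative choices, not the paper's settings):

```python
import numpy as np

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + cost)
    return d[m, n]

def laplacian_eigenmaps(seqs, sigma=0.5, dim=2):
    """Embed sequences from normalized edit distances: build a Gaussian
    affinity, form the graph Laplacian, and take the eigenvectors with
    the smallest nontrivial eigenvalues."""
    n = len(seqs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            ned = edit_distance(seqs[i], seqs[j]) / max(len(seqs[i]), len(seqs[j]))
            D[i, j] = D[j, i] = ned
    W = np.exp(-D**2 / (2 * sigma**2))
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]   # skip the constant eigenvector

seqs = ["atgcatgc", "atgcatga", "ggggcccc", "ggggccct"]
Y = laplacian_eigenmaps(seqs)
print(Y.shape)  # → (4, 2)
```

Normalizing by the longer sequence length keeps distances comparable across biobricks of very different sizes, which is what lets a vector-space method like Laplacian Eigenmaps operate on sequence data at all.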

  20. Dimensional analysis of membrane distillation flux through fibrous membranes

    Science.gov (United States)

    Mauter, Meagan

We developed a dimensional-analysis-based empirical modeling method for membrane distillation (MD) flux that is adaptable for novel membrane structures. The method makes fewer simplifying assumptions about membrane pore geometry than existing theoretical (i.e. mechanistic) models, and allows selection of simple, easily measurable membrane characteristics as structural parameters. Furthermore, the model does not require estimation of membrane surface temperatures; it accounts for convective heat transfer to the membrane surface without iterative fitting of mass and heat transfer equations. The Buckingham-Pi dimensional analysis method is tested for direct contact membrane distillation (DCMD) using non-woven/fibrous structures as the model membrane material. Twelve easily measured variables describing DCMD operating conditions, fluid properties, membrane structures, and flux were identified and combined into eight dimensionless parameters. These parameters were regressed using experimentally collected data for multiple electrospun membrane types and DCMD system conditions, achieving R² values > 95%. We found that vapor flux through isotropic fibrous membranes can be estimated using only membrane thickness, solid fraction, and fiber diameter as structural parameters. Buckingham-Pi model DCMD flux predictions compare favorably with previously developed empirical and theoretical models, and suggest this simple yet theoretically grounded empirical modeling method can be used practically for predicting MD vapor flux from membrane structural parameters.
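The Buckingham-Pi machinery itself is mechanical: dimensionless groups form a basis of the null space of the dimension matrix. The five-variable subset below is purely illustrative (the paper's full model uses twelve variables combined into eight groups):

```python
import numpy as np

# Columns: variables; rows: exponents of the base dimensions M, L, T.
# Hypothetical subset of DCMD variables: flux J [M L^-2 T^-1], membrane
# thickness d [L], vapor pressure difference dP [M L^-1 T^-2],
# viscosity mu [M L^-1 T^-1], density rho [M L^-3].
names = ["J", "d", "dP", "mu", "rho"]
D = np.array([
    [1,  0,  1,  1,  1],    # mass exponents
    [-2, 1, -1, -1, -3],    # length exponents
    [-1, 0, -2, -1,  0],    # time exponents
])

# Buckingham Pi: each dimensionless group is an exponent vector in the
# null space of D; the number of groups is (#variables - rank).
u, s, vt = np.linalg.svd(D.astype(float))
rank = int(np.sum(s > 1e-10))
pi_basis = vt[rank:].T          # columns are exponent vectors
print(pi_basis.shape[1])        # → 2 groups (5 variables - rank 3)
print(np.allclose(D @ pi_basis, 0))  # → True: each group is dimensionless
```

Regressing one Pi group against the others, as the paper does with its eight groups, then yields an empirical flux model that is automatically dimensionally consistent.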

  1. Sparse dimensionality reduction of hyperspectral image based on semi-supervised local Fisher discriminant analysis

    Science.gov (United States)

    Shao, Zhenfeng; Zhang, Lei

    2014-09-01

    This paper presents a novel sparse dimensionality reduction method of hyperspectral image based on semi-supervised local Fisher discriminant analysis (SELF). The proposed method is designed to be especially effective for dealing with the out-of-sample extrapolation to realize advantageous complementarities between SELF and sparsity preserving projections (SPP). Compared to SELF and SPP, the method proposed herein offers highly discriminative ability and produces an explicit nonlinear feature mapping for the out-of-sample extrapolation. This is due to the fact that the proposed method can get an explicit feature mapping for dimensionality reduction and improve the classification performance of classifiers by performing dimensionality reduction. Experimental analysis on the sparsity and efficacy of low dimensional outputs shows that, sparse dimensionality reduction based on SELF can yield good classification results and interpretability in the field of hyperspectral remote sensing.

  2. analysis methods of uranium

    International Nuclear Information System (INIS)

    Bekdemir, N.; Acarkan, S.

    1997-01-01

There are various methods for the determination of uranium. The most often used are spectrophotometric methods (PAR, DBM and Arsenazo III) and potentiometric titration. For uranium contents between 1-300 g/L U, the potentiometric titration method based on oxidation-reduction reactions gives reliable results. PAR (4-(2-pyridylazo)resorcinol) is a sensitive reagent for uranium, forming complexes in aqueous solutions; it is suitable for determination of uranium at levels between 2-400 µg U. In this study, the spectrophotometric and potentiometric analysis methods used in the Nuclear Fuel Department will be discussed in detail, and other methods and their principles will be briefly mentioned

  3. Four dimensional reconstruction and analysis of plume images

    Science.gov (United States)

    Dhawan, Atam P.; Peck, Charles, III; Disimile, Peter

    1991-01-01

    A number of methods have been investigated and are under current investigation for monitoring the health of the Space Shuttle Main Engine (SSME). Plume emission analysis has recently emerged as a potential technique for correlating the emission characteristics with the health of an engine. In order to correlate the visual and spectral signatures of the plume emission with the characteristic health monitoring features of the engine, the plume emission data must be acquired, stored, and analyzed in a manner similar to flame emission spectroscopy. The characteristic visual and spectral signatures of the elements vaporized in exhaust plume along with the features related to their temperature, pressure, and velocity can be analyzed once the images of plume emission are effectively acquired, digitized, and stored on a computer. Since the emission image varies with respect to time at a specified planar location, four dimensional visual and spectral analysis need to be performed on the plume emission data. In order to achieve this objective, feasibility research was conducted to digitize, store, analyze, and visualize the images of a subsonic jet in a cross flow. The jet structure was made visible using a direct injection flow visualization technique. The results of time-history based three dimensional reconstruction of the cross sectional images corresponding to a specific planar location of the jet structure are presented. The experimental set-up to acquire such data is described and three dimensional displays of time-history based reconstructions of the jet structure are discussed.

  4. Teaching the Falling Ball Problem with Dimensional Analysis

    Science.gov (United States)

    Sznitman, Josué; Stone, Howard A.; Smits, Alexander J.; Grotberg, James B.

    2013-01-01

    Dimensional analysis is often a subject reserved for students of fluid mechanics. However, the principles of scaling and dimensional analysis are applicable to various physical problems, many of which can be introduced early on in a university physics curriculum. Here, we revisit one of the best-known examples from a first course in classic…
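For the falling-ball example, the scaling t ∝ √(h/g) follows from matching dimension exponents, which reduces to a 2×2 linear system:

```python
import numpy as np

# Dimensional analysis of the fall time of a ball dropped from height h
# under gravity g.  Seek exponents a, b with [h^a g^b] = [T]:
#   h has dimensions L^1 T^0, g has dimensions L^1 T^-2.
# Matching exponents of L and T gives the linear system
#   a + b = 0     (length)
#     -2b = 1     (time)
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
target = np.array([0.0, 1.0])   # want pure time: L^0 T^1
a, b = np.linalg.solve(A, target)
print(a, b)  # → 0.5 -0.5, i.e. t ∝ sqrt(h/g)
```

The undetermined dimensionless prefactor (√2 for free fall from rest) is exactly what dimensional analysis cannot supply, which is a useful teaching point in itself.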

  5. Learning Dimensional Analysis through Collaboratively Working with Manipulatives

    Science.gov (United States)

    Saitta, Erin K. H.; Gittings, Michael J.; Geiger, Cherie

    2011-01-01

    Dimensional analysis is traditionally one of the first topics covered in a general chemistry course. Chemists use dimensional analysis as a tool to keep track of units and guide them through calculations. Although unit conversions are taught in a variety of subjects over several grade levels, many students have not mastered this topic by the time…

  6. Wave field restoration using three-dimensional Fourier filtering method.

    Science.gov (United States)

    Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R

    2001-11-01

    A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer.

  7. Evaluation of Dimensionality-reduction Methods from Peptide Folding-unfolding Simulations

    Science.gov (United States)

    Duan, Mojie; Fan, Jue; Li, Minghai; Han, Li; Huo, Shuanghong

    2013-01-01

Dimensionality reduction methods have been widely used to study the free energy landscapes and low-free-energy pathways of molecular systems. It was shown that non-linear dimensionality-reduction methods gave better embedding results than linear methods, such as principal component analysis, in some simple systems. In this study, we have evaluated several non-linear methods, locally linear embedding, Isomap, and diffusion maps, as well as principal component analysis, on the equilibrium folding/unfolding trajectory of the second β-hairpin of the B1 domain of streptococcal protein G. The CHARMM parm19 polar hydrogen potential function was used. A set of criteria that reflect different aspects of embedding quality were employed in the evaluation. Our results show that principal component analysis is not worse than the non-linear ones on this complex system. There is no clear winner in all aspects of the evaluation. Each dimensionality-reduction method has its limitations in a certain aspect. We emphasize that a fair, informative assessment of an embedding result requires a combination of multiple evaluation criteria rather than any single one. Caution should be used when dimensionality-reduction methods are employed, especially when only a few of the top embedding dimensions are used to describe the free energy landscape. PMID:23772182

  8. Evaluation of Dimensionality-reduction Methods from Peptide Folding-unfolding Simulations.

    Science.gov (United States)

    Duan, Mojie; Fan, Jue; Li, Minghai; Han, Li; Huo, Shuanghong

    2013-05-14

Dimensionality reduction methods have been widely used to study the free energy landscapes and low-free-energy pathways of molecular systems. It was shown that non-linear dimensionality-reduction methods gave better embedding results than linear methods, such as principal component analysis, in some simple systems. In this study, we have evaluated several non-linear methods, locally linear embedding, Isomap, and diffusion maps, as well as principal component analysis, on the equilibrium folding/unfolding trajectory of the second β-hairpin of the B1 domain of streptococcal protein G. The CHARMM parm19 polar hydrogen potential function was used. A set of criteria that reflect different aspects of embedding quality were employed in the evaluation. Our results show that principal component analysis is not worse than the non-linear ones on this complex system. There is no clear winner in all aspects of the evaluation. Each dimensionality-reduction method has its limitations in a certain aspect. We emphasize that a fair, informative assessment of an embedding result requires a combination of multiple evaluation criteria rather than any single one. Caution should be used when dimensionality-reduction methods are employed, especially when only a few of the top embedding dimensions are used to describe the free energy landscape.
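One of the simpler criteria of this kind, preservation of pairwise distances, can be sketched as follows (this particular correlation measure is an illustration of an embedding-quality criterion, not necessarily one used in the paper):

```python
import numpy as np

def distance_correlation_criterion(X_high, X_low):
    """Pearson correlation between all pairwise distances in the
    original space and in the embedding.  Values near 1 mean the
    embedding preserves distances well."""
    def pdist(X):
        diff = X[:, None, :] - X[None, :, :]
        d = np.sqrt((diff**2).sum(-1))
        iu = np.triu_indices(len(X), 1)   # upper triangle, no diagonal
        return d[iu]
    a, b = pdist(X_high), pdist(X_low)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
# a rotation is a perfect isometric "embedding": distances are preserved
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))
print(round(distance_correlation_criterion(X, X @ Q), 6))  # → 1.0
```

A single number like this hides local failures (e.g. torn neighborhoods), which is precisely why the abstract argues for combining multiple evaluation criteria.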

  9. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining, and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection is relatively sensitive to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis products, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere.
With the uncertainties from large-scale forcing data and observations specified, we compare SCM results with observations and find that models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing
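The core of a constrained variational analysis, minimally adjusting a background state so that budget constraints hold exactly, can be sketched as an equality-constrained least-squares problem (the three-term budget below is a toy; the real 1DCVA/3DCVA constraints are column-integrated conservation laws for mass, heat, moisture, and momentum):

```python
import numpy as np

def constrained_analysis(b, R, A, c):
    """Minimize (x - b)^T R^{-1} (x - b) subject to A x = c.

    Toy analogue of constrained variational analysis: nudge a
    background state b (with error covariance R) by the minimum
    amount needed to satisfy linear budget constraints A x = c,
    solved in closed form via the KKT system.
    """
    Rinv = np.linalg.inv(R)
    n, m = len(b), len(c)
    K = np.block([[Rinv, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([Rinv @ b, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

# three-term toy budget: the terms must sum to zero
b = np.array([1.0, 2.0, -2.5])   # background estimate (sums to 0.5)
R = np.diag([1.0, 1.0, 4.0])     # third term is least trusted
A = np.ones((1, 3))
x = constrained_analysis(b, R, A, np.array([0.0]))
print(np.isclose(x.sum(), 0.0))  # → True; the least-trusted term absorbs most of the adjustment
```

Weighting the adjustment by the error covariance is what lets the least-trusted inputs absorb most of the correction, which is the behavior the ensemble framework probes by swapping backgrounds and covariances.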

  11. Development of two dimensional electrophoresis method using single chain DNA

    International Nuclear Information System (INIS)

    Ikeda, Junichi; Hidaka, So

    1998-01-01

By combining a separation method based on molecular weight with a method that distinguishes single-base differences, the aim was to develop a two-dimensional electrophoresis of single-chain DNA labeled with a radioisotope (RI). From differences in the electrophoretic patterns of parent and variant strands, the isolation of the root module implantation control gene was investigated. First, a Single Strand Conformation Polymorphism (SSCP) method using a concentration gradient gel was investigated. It was found that the intervals between double-chain and single-chain DNAs expanded, but the intervals between the single-chain DNAs themselves did not. Next, a combination of non-modified acrylamide electrophoresis and Denaturing Gradient Gel Electrophoresis (DGGE) was examined. Hybrid DNA developed by two-dimensional electrophoresis arranged along two lines, but among them a band of DNA modified by a high concentration of urea could not be found. Therefore, no satisfactory result could be obtained in this fiscal year's experiments, and with the method used it was thought to be impossible to detect the differences. (G.K.)

  12. Methods of Multivariate Analysis

    CERN Document Server

    Rencher, Alvin C

    2012-01-01

    Praise for the Second Edition "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere."-IIE Transactions Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life situations.

  13. Numerical method for two-dimensional unsteady reacting flows

    International Nuclear Information System (INIS)

    Butler, T.D.; O'Rourke, P.J.

    1976-01-01

    A method that numerically solves the full two-dimensional, time-dependent Navier-Stokes equations with species transport, mixing, and chemical reaction between species is presented. The generality of the formulation permits the solution of flows in which deflagrations, detonations, or transitions from deflagration to detonation are found. The solution procedure is embodied in the RICE computer program. RICE is an Eulerian finite difference computer code that uses the Implicit Continuous-fluid Eulerian (ICE) technique to solve the governing equations. We first present the differential equations of motion and the solution procedure of the RICE program. Next, a method is described for artificially thickening the combustion zone to dimensions resolvable by the computational mesh. This is done in such a way that the physical flame speed and jump conditions across the flame front are preserved. Finally, the results of two example calculations are presented. In the first, the artificial thickening technique is used to solve a one-dimensional laminar flame problem. In the second, the results of a full two-dimensional calculation of unsteady combustion in two connected chambers are detailed

  14. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

    The collision probability method widely used in solving the problems of neutron transport in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones, and also taking into account the anisotropy of scattering, greatly increases the scope of the calculations. In order to reduce the calculation time, the transmission probability method is suggested for flux calculation in one-dimensional cylindrical geometry, taking into account the scattering anisotropy. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method allows the angular and spatial dependences of the neutron distributions to be represented completely without an increase in the scope of the calculations. The method is especially effective in solving multi-group problems

  15. The dimension split element-free Galerkin method for three-dimensional potential problems

    Science.gov (United States)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-02-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  16. Histopathological image analysis for centroblasts classification through dimensionality reduction approaches.

    Science.gov (United States)

    Kornaropoulos, Evgenios N; Niazi, M Khalid Khan; Lozanski, Gerard; Gurcan, Metin N

    2014-03-01

    We present two novel automated image analysis methods to differentiate centroblast (CB) cells from non-centroblast (non-CB) cells in digital images of H&E-stained tissues of follicular lymphoma. CB cells are often confused with similar-looking cells within the tissue; therefore, a system to help classify them is necessary. Our methods extract the discriminatory features of cells by approximating the intrinsic dimensionality of the subspace spanned by CB and non-CB cells. In the first method, discriminatory features are approximated with the help of singular value decomposition (SVD), whereas in the second method they are extracted using Laplacian Eigenmaps. Five hundred high-power-field images were extracted from 17 slides, which were then used to compose a database of 213 CB and 234 non-CB region-of-interest images. The recall, precision, and overall accuracy rates of the developed methods were measured and compared with existing classification methods. Moreover, the reproducibility of both classification methods was also examined. The average values of the overall accuracy were 99.22% ± 0.75% and 99.07% ± 1.53% for COB and CLEM, respectively. The experimental results demonstrate that both proposed methods provide better classification accuracy of CB/non-CB in comparison with state-of-the-art methods. © 2013 International Society for Advancement of Cytometry.

  17. Comparative proteomic analysis of the ribosomes in 5-fluorouracil resistance of a human colon cancer cell line using the radical-free and highly reducing method of two-dimensional polyacrylamide gel electrophoresis.

    Science.gov (United States)

    Kimura, Kosei; Wada, Akira; Ueta, Masami; Ogata, Akihiko; Tanaka, Satoru; Sakai, Akiko; Yoshida, Hideji; Fushitani, Hideo; Miyamoto, Akiko; Fukushima, Masakazu; Uchiumi, Toshio; Tanigawa, Nobuhiko

    2010-11-01

    Many auxiliary functions of ribosomal proteins (r-proteins) have received considerable attention in recent years. However, human r-proteins have hardly been examined by proteomic analysis. In this study, we isolated ribosomal particles and subsequently compared the proteome of r-proteins between the DLD-1 human colon cancer cell line and its 5-fluorouracil (5-FU)-resistant sub-line, DLD-1/5-FU, using the radical-free and highly reducing method of two-dimensional polyacrylamide gel electrophoresis, which has a superior ability to separate basic proteins, and we discuss the role of r-proteins in 5-FU resistance. Densitometric analysis was performed to quantify modulated proteins, and protein spots showing significant changes were identified by employing matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry. Three basic proteins (L15, L37 and prohibitin) which were significantly modulated between DLD-1 and DLD-1/5-FU were identified. Two proteins, L15 and L37, showed down-regulated expression in DLD-1/5-FU in comparison to DLD-1. Prohibitin, which is not an r-protein and is known to be localized in the mitochondria, showed up-regulated expression in DLD-1/5-FU. These 3 proteins may be related to 5-FU resistance.

  18. Discretization model for nonlinear dynamic analysis of three dimensional structures

    International Nuclear Information System (INIS)

    Hayashi, Y.

    1982-12-01

    A discretization model for nonlinear dynamic analysis of three-dimensional structures is presented. The discretization is achieved through a three-dimensional spring-mass system, and the dynamic response is obtained by direct integration of the equations of motion using central differences. First, the viability of the model is verified through the analysis of homogeneous linear structures, and then its performance in the analysis of structures subjected to impulsive or impact loads, taking into account both geometrical and physical nonlinearities, is evaluated. (Author) [pt
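The central-difference time integration named in this abstract can be sketched for a single spring-mass pair; the oscillator parameters below are arbitrary illustrative values (the paper's model is a full 3-D spring-mass system):

```python
import numpy as np

# Central-difference integration of one spring-mass pair: a minimal 1-DOF
# sketch of the scheme, seeded on the exact orbit x(t) = cos(w t).
m, k = 1.0, 4.0                  # mass and spring stiffness (arbitrary)
w = np.sqrt(k / m)               # natural frequency of the exact solution
dt = 0.001                       # time step, well below the stability limit 2/w
x_prev, x = 1.0, np.cos(w * dt)  # the two starting values the scheme needs
t = dt
while t < 1.0:
    a = -k * x / m                               # acceleration from spring force
    x_prev, x = x, 2.0 * x - x_prev + dt**2 * a  # central-difference update
    t += dt
print(abs(x - np.cos(w * t)))    # discretization error stays small
```

The update `x_{n+1} = 2 x_n - x_{n-1} + dt^2 a_n` is explicit, so each step costs only a force evaluation, which is why the scheme suits impact problems.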

  19. Bibliography of papers, reports, and presentations related to point-sample dimensional measurement methods for machined part evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems

    1996-04-01

    The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.

  20. Three-dimensional analysis of pharyngeal high-resolution manometry data.

    Science.gov (United States)

    Geng, Zhixian; Hoffman, Matthew R; Jones, Corinne A; McCulloch, Timothy M; Jiang, Jack J

    2013-07-01

    High-resolution manometry (HRM) represents a critical advance in the quantification of swallow-related pressure events in the pharynx. Previous analyses of the pressures measured by HRM, though, have been largely two-dimensional, focusing on a single sensor in a given region. We present a three-dimensional approach that combines information from adjacent sensors in a region. Two- and three-dimensional methods were compared for their ability to classify data correctly as normal or disordered. Case series evaluating new method of data analysis. A total of 1,324 swallows from 16 normal subjects and 61 subjects with dysphagia were included. Two-dimensional single sensor integrals of the area under the curves created by rises in pressure in the velopharynx, tongue base, and upper esophageal sphincter (UES) were calculated. Three-dimensional multi-sensor integrals of the volume under all curves corresponding to the same regions were also computed. The two sets of measurements were compared for their ability to classify data correctly as normal or disordered using an artificial neural network (ANN). Three-dimensional parameters yielded a maximal classification accuracy of 86.71% ± 1.47%, while two-dimensional parameters achieved a maximum accuracy of 83.36% ± 1.42%. Combining two- and three-dimensional parameters with all other variables, including three-dimensional parameters, yielded a classification accuracy of 96.99% ± 0.51%. Including two-dimensional parameters yielded a classification accuracy of 96.32% ± 1.05%. Three-dimensional analysis led to improved classification of swallows based on pharyngeal HRM. Artificial neural network performance with both two-dimensional and three-dimensional analyses was effective, classifying a large percentage of swallows correctly, thus demonstrating its potential clinical utility. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
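The single-sensor area integrals and multi-sensor volume integrals described above can be sketched numerically; the pressure traces and sensor spacing below are synthetic illustrative values, not HRM data:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integral along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

# Hypothetical pressure rises: rows = adjacent sensors, cols = time samples
t = np.linspace(0.0, 1.0, 101)                 # seconds
p = np.maximum(0.0, np.sin(np.pi * t))         # one sensor's pressure curve
region = np.vstack([0.8 * p, p, 0.9 * p])      # three adjacent sensors

# Two-dimensional measure: area under a single sensor's curve
area_2d = trap(p, t)
# Three-dimensional measure: volume under all curves of the region
# (integrate each sensor over time, then across sensor positions)
sensor_pos = np.array([0.0, 1.0, 2.0])         # assumed sensor spacing (cm)
vol_3d = trap(trap(region, t), sensor_pos)
print(area_2d, vol_3d)
```

Either quantity can then feed a classifier, as in the study's neural-network comparison.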

  1. Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis

    Science.gov (United States)

    Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca

    2017-11-01

    Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
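The rank counting behind the Buckingham Pi theorem can be sketched for the turbulent pipe-flow example; the dimension matrix below encodes the standard M-L-T exponents of six flow variables (an illustrative setup, not the authors' hypothesis test for hidden parameters):

```python
import numpy as np

# Dimension matrix (rows: mass, length, time exponents; columns:
# pressure drop dp, density rho, velocity V, diameter D, viscosity mu,
# pipe length l)
dim = np.array([
    [ 1.0,  1.0,  0.0, 0.0,  1.0, 0.0],  # mass
    [-1.0, -3.0,  1.0, 1.0, -1.0, 1.0],  # length
    [-2.0,  0.0, -1.0, 0.0, -1.0, 0.0],  # time
])
rank = np.linalg.matrix_rank(dim)
n_groups = dim.shape[1] - rank
print(n_groups)  # Buckingham Pi: 6 variables - 3 dimensions = 3 groups

# Exponent vectors of the familiar groups lie in the null space of dim:
eu = np.array([1.0, -1.0, -2.0, 0.0, 0.0, 0.0])  # Euler number dp/(rho V^2)
re = np.array([0.0,  1.0,  1.0, 1.0, -1.0, 0.0]) # Reynolds number rho V D / mu
print(np.allclose(dim @ eu, 0), np.allclose(dim @ re, 0))
```

A missing variable would change the column count and hence the number of independent groups, which is the failure mode the paper's test is designed to detect.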

  2. Extrudate Expansion Modelling through Dimensional Analysis Method

    DEFF Research Database (Denmark)

    to describe the extrudates expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations...... of the correlation are respectively 5.9% and 9% for the whole wheat flour and the aquatic feed extrusion. An alternative 4-coefficient equation is also suggested from the 3 dimensionless groups. The average deviations of the alternative equation are respectively 5.8% and 2.5% in correlation with the same set...

  3. The development of an evaluation method for capture columns used in two-dimensional liquid chromatography.

    Science.gov (United States)

    Cao, Liwei; Yu, Danhua; Wang, Xinliang; Ke, Yanxiong; Jin, Yu; Liang, Xinmiao

    2011-11-07

    Capture columns are important interface tools for on-line two-dimensional liquid chromatography (2D-LC). In this study, a systematic method was developed to evaluate and optimize the capture ability of capture columns by an off-line method. First, the parameter Δt(R) (Δt(R)=t(2)-t(1)-t(0)-W) was introduced to quantitatively represent the capture ability of the capture column by connecting a capture column behind the first-dimension column. Based on the value of Δt(R), an appropriate capture column was selected after the first-dimension column was fixed. Then, the capture ability of the selected column was promoted by adjusting the mobile phase of the first-dimension column. Capture ability was also optimized using the complex sample analysis software system (CSASS). Second, the elution mode of the trapped compounds on the capture column was investigated by connecting the capture column before the second-dimension column. More specifically, in mode I the capture column was connected to the second dimension without changing the flow-rate direction, so the trapped compounds must pass through the capture column to be eluted into the second-dimension column; the contrary connection mode was mode II. It was found that mode I is the more suitable method for 2D-LC. Finally, an off-line reversed-phase/hydrophilic interaction liquid chromatography two-dimensional liquid chromatography (RP/HILIC 2D-LC) system with a C18 capture column was developed to demonstrate the practical application of this method. Copyright © 2011 Elsevier B.V. All rights reserved.
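The capture-ability parameter defined above is simple arithmetic; the retention times below are hypothetical values chosen only to illustrate the formula:

```python
# Hypothetical chromatographic times (minutes) for one capture column:
t1 = 12.0   # retention time on the first-dimension column alone
t2 = 18.5   # retention time with the capture column connected behind it
t0 = 1.5    # dead time contributed by the capture column
W  = 0.8    # peak width
dt_R = t2 - t1 - t0 - W   # capture ability; larger means stronger trapping
print(dt_R)
```

A Δt(R) near zero would mean the trial compound is barely retained, flagging that column as unsuitable for the interface.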

  4. Drifting plasmons in open two-dimensional channels: modal analysis

    International Nuclear Information System (INIS)

    Sydoruk, O

    2013-01-01

    Understanding the properties of plasmons in two-dimensional channels is important for developing methods of terahertz generation. This paper presents a modal analysis of plasmonic reflection in open channels supporting dc currents. As it shows, the plasmons can be amplified upon reflection if a dc current flows away from a conducting boundary; de-amplification occurs for the opposite current direction. The problem is solved analytically, based on a perturbation calculation, and numerically, and agreement between the methods is demonstrated. The power radiated by a channel is found to be negligible, and plasmon reflection in open channels is shown to be similar to that in closed channels. Based on this similarity, the oscillator designs developed earlier for closed channels could be applicable also for open ones. The results develop the modal-decomposition technique further as an instrument for the design of terahertz plasmonic sources. (paper)

  5. Standard Test Method for Dimensional Stability of Sandwich Core Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers the determination of the sandwich core dimensional stability in the two plan dimensions. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  6. Development of three-dimensional individual bubble-velocity measurement method by bubble tracking

    International Nuclear Information System (INIS)

    Kanai, Taizo; Furuya, Masahiro; Arai, Takahiro; Shirakawa, Kenetsu; Nishi, Yoshihisa

    2012-01-01

    A gas-liquid two-phase flow in a large-diameter pipe exhibits a three-dimensional flow structure. A Wire-Mesh Sensor (WMS) consists of a pair of parallel wire layers located at the cross section of a pipe. The parallel wires cross at 90° with a small gap, and each intersection acts as an electrode. The WMS allows measurement of the instantaneous two-dimensional void-fraction distribution over the cross-section of a pipe, based on the local instantaneous conductivity of the two-phase flow. Furthermore, the WMS can acquire a phasic velocity on the basis of the time lag of void signals between two sets of WMS. Previously, the acquired phasic velocity was one-dimensional with time-averaged distributions. The authors propose a method to estimate three-dimensional bubble velocities individually from WMS data. The bubble velocity is determined by a tracking method: each bubble is separated from the WMS signal, and the volume and center coordinates of the bubble are acquired. Two bubbles with similar volumes at the two WMS are considered to be the same bubble, and the bubble velocity is estimated from the displacement of the center coordinates of the two bubbles. The validity of this method is verified with a swirl flow. The proposed method can successfully visualize the swirl flow structure, and its results agree with the results of cross-correlation analysis. (author)
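The volume-matching step of the tracking method can be sketched as follows; the bubble-list format, sensor gap and time lag are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def match_and_velocity(b1, b2, sensor_gap, dt):
    """Pair each bubble seen at WMS-1 with the bubble of nearest volume at
    WMS-2 and estimate a 3-D velocity from the center displacement.
    b1, b2: lists of (volume, center_xyz), each sensor plane taken as z = 0."""
    vels = []
    for vol1, c1 in b1:
        j = int(np.argmin([abs(vol1 - v) for v, _ in b2]))  # nearest volume
        _, c2 = b2[j]
        disp = np.asarray(c2) + np.array([0.0, 0.0, sensor_gap]) - np.asarray(c1)
        vels.append(disp / dt)
    return vels

# Toy case: one bubble rising straight between sensors 30 mm apart in 10 ms
b1 = [(5.0, (1.0, 2.0, 0.0))]
b2 = [(5.1, (1.0, 2.0, 0.0))]
v = match_and_velocity(b1, b2, sensor_gap=30.0, dt=10.0)  # mm/ms == m/s
print(v[0])  # axial velocity of 3 m/s, no lateral drift
```

A real implementation would also guard against ambiguous matches when several bubbles have similar volumes.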

  7. New method of 2-dimensional metrology using mask contouring

    Science.gov (United States)

    Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka

    2008-10-01

    We have developed a new method for accurately profiling and measuring a mask shape using a Mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with conventional image-processing methods for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable with CD-SEM for semiconductor device CD measurement. By utilizing high-precision contour profiles, the method realizes two-dimensional metrology for refined patterns that had been difficult to measure conventionally. In this report, we introduce the algorithm in general, the experimental results and the application in practice. As shrinkage of the design rule for semiconductor devices has advanced further, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase of data-processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge of mask-making cost have become a big concern to device manufacturers. That is to say, demands on quality are becoming strenuous because of the enormous growth of data with the increase of refined patterns in photomask manufacture. As a result, a massive number of simulated errors occur in mask inspection, which causes lengthening of the mask production and inspection period, increased cost, and long delivery times. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with the problem, we propose the best method of a DFM solution using two-dimensional metrology for refined patterns.

  8. Dimensional Analysis with space discrimination applied to Fickian diffusion phenomena

    International Nuclear Information System (INIS)

    Diaz Sanchidrian, C.; Castans, M.

    1989-01-01

    Dimensional Analysis with space discrimination is applied to Fickian diffusion phenomena in order to transform its partial differential equations into ordinary ones, and also to obtain Fick's second law in a dimensionless form. (Author)
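For concreteness, the nondimensionalization of Fick's second law proceeds as follows (a standard derivation, with $L$ and $C_0$ as assumed reference length and concentration):

```latex
% Fick's second law and its dimensionless form
\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2},
\qquad
\xi = \frac{x}{L},\quad \tau = \frac{D t}{L^2},\quad \theta = \frac{C}{C_0}
\;\Longrightarrow\;
\frac{\partial \theta}{\partial \tau} = \frac{\partial^2 \theta}{\partial \xi^2}.
```

The diffusivity $D$ is absorbed into the time scale $L^2/D$, so the dimensionless equation is parameter-free.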

  9. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  10. A Comparison of Different Dimensionality Reduction and Feature Selection Methods for Single Trial ERP Detection

    Science.gov (United States)

    Lan, Tian; Erdogmus, Deniz; Black, Lois; Van Santen, Jan

    2014-01-01

    Dimensionality reduction and feature selection is an important aspect of electroencephalography based event related potential detection systems such as brain computer interfaces. In our study, a predefined sequence of letters was presented to subjects in a Rapid Serial Visual Presentation (RSVP) paradigm. EEG data were collected and analyzed offline. A linear discriminant analysis (LDA) classifier was designed as the ERP (Event Related Potential) detector for its simplicity. Different dimensionality reduction and feature selection methods were applied and compared in a greedy wrapper framework. Experimental results showed that PCA with the first 10 principal components for each channel performed best and could be used in both online and offline systems. PMID:21097171

  11. A comparison of different dimensionality reduction and feature selection methods for single trial ERP detection.

    Science.gov (United States)

    Lan, Tian; Erdogmus, Deniz; Black, Lois; Van Santen, Jan

    2010-01-01

    Dimensionality reduction and feature selection is an important aspect of electroencephalography based event related potential detection systems such as brain computer interfaces. In our study, a predefined sequence of letters was presented to subjects in a Rapid Serial Visual Presentation (RSVP) paradigm. EEG data were collected and analyzed offline. A linear discriminant analysis (LDA) classifier was designed as the ERP (Event Related Potential) detector for its simplicity. Different dimensionality reduction and feature selection methods were applied and compared in a greedy wrapper framework. Experimental results showed that PCA with the first 10 principal components for each channel performed best and could be used in both online and offline systems.
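The PCA-then-LDA pipeline described in the abstract can be sketched with plain NumPy on synthetic data (the data and dimensions below are invented; the study's EEG features and channel-wise PCA details are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_reduce(X, k=10):
    """Project rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are PCs
    return Xc @ Vt[:k].T

def lda_fit(X, y):
    """Two-class LDA: w = Sw^-1 (mu1 - mu0), threshold at projected midpoint."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)
    b = 0.5 * (mu0 + mu1) @ w
    return w, b

# Synthetic "trial features": 200 trials x 64 dims, class 1 shifted upward
X = rng.normal(size=(200, 64))
y = np.arange(200) % 2
X[y == 1] += 0.5
Z = pca_reduce(X, k=10)      # dimensionality reduction to 10 components
w, b = lda_fit(Z, y)         # simple linear ERP detector
acc = np.mean((Z @ w > b) == y)
print(f"training accuracy: {acc:.2f}")
```

A faithful reproduction would apply PCA per EEG channel and evaluate inside the greedy wrapper with held-out data rather than on the training set.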

  12. Methods for RNA Analysis

    DEFF Research Database (Denmark)

    Olivarius, Signe

    As RNAs rely on interactions with proteins, the establishment of protein-binding profiles is essential for the characterization of RNAs. Aiming to facilitate RNA analysis, this thesis introduces proteomics- as well as transcriptomics-based methods for the functional characterization of RNA. First, RNA......-protein pulldown combined with mass spectrometry analysis is applied for in vivo as well as in vitro identification of RNA-binding proteins, the latter succeeding in verifying known RNA-protein interactions. Secondly, acknowledging the significance of flexible promoter usage for the diversification...... of the transcriptome, 5’ end capture of RNA is combined with next-generation sequencing for high-throughput quantitative assessment of transcription start sites by two different methods. The methods presented here allow for functional investigation of coding as well as noncoding RNA and contribute to future......

  13. ABOUT SOLUTION OF MULTIPOINT BOUNDARY PROBLEMS OF TWO-DIMENSIONAL STRUCTURAL ANALYSIS WITH THE USE OF COMBINED APPLICATION OF FINITE ELEMENT METHOD AND DISCRETE-CONTINUAL FINITE ELEMENT METHOD PART 2: SPECIAL ASPECTS OF FINITE ELEMENT APPROXIMATION

    Directory of Open Access Journals (Sweden)

    Pavel A. Akimov

    2017-12-01

    Full Text Available As is well known, the formulation of a multipoint boundary problem involves three main components: a description of the domain occupied by the structure and the corresponding subdomains; a description of the conditions inside the domain and inside the corresponding subdomains; and a description of the conditions on the boundary of the domain and on the boundaries between subdomains. This paper is a continuation of another work published earlier, in which the formulation and general principles of the approximation of the multipoint boundary problem of a static analysis of a deep beam on the basis of the joint application of the finite element method and the discrete-continual finite element method were considered. It should be noted that the approximation within the fragments of a domain that have regular physical-geometric parameters along one of the directions is expedient to carry out on the basis of the discrete-continual finite element method (DCFEM), while for the approximation of all other fragments the standard finite element method (FEM) should be used. In the present publication, formulas for computing displacements, partial derivatives of displacements, strains and stresses within the finite element model (both within the finite element and at the corresponding nodal values, with the use of averaging) are presented. Boundary conditions between subdomains (respectively, discrete models and discrete-continual models) and typical conditions such as “hinged support”, “free edge” and “perfect contact” (twelve basic variants are available) are under consideration as well. Governing formulas for computing the elements of the corresponding matrices of coefficients and vectors of the right-hand sides are given for each variant. All formulas are fully adapted for algorithmic implementation.

  14. Digital shearing method for three-dimensional data extraction in holographic particle image velocimetry

    Science.gov (United States)

    Yang, Hui; Halliwell, Neil; Coupland, Jeremy

    2003-11-01

    We report a new digital shearing method for extracting the three-dimensional displacement vector data from double-exposure holograms. With this method we can manipulate both the phase and the amplitude of the recorded signal, which, like optical correlation analysis, is inherently immune to imaging aberration. However, digital shearing is not a direct digital implementation of optical correlation, and a considerable saving in computation time results. We demonstrate the power of the method by MATLAB simulation and discuss its performance with reference to optical analysis.
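Correlation-based displacement extraction, which the digital shearing method is compared against, can be sketched in a few lines; the speckle field and shift below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Recover an in-plane displacement between two exposures via FFT
# cross-correlation (a generic sketch, not the digital shearing method itself)
f = rng.normal(size=(64, 64))               # first-exposure speckle field
g = np.roll(f, shift=(3, 5), axis=(0, 1))   # second exposure, displaced
# Cross-correlation theorem: corr = IFFT( FFT(g) * conj(FFT(f)) )
corr = np.fft.ifft2(np.fft.fft2(g) * np.conj(np.fft.fft2(f))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)  # peak location gives the displacement: 3 5
```

Working in the Fourier domain is also what lets such methods manipulate the phase of the recorded signal directly, as the abstract notes for digital shearing.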

  15. Three-dimensional analysis of craniofacial bones using three-dimensional computer tomography

    International Nuclear Information System (INIS)

    Ono, Ichiro; Ohura, Takehiko; Kimura, Chu

    1989-01-01

    Three-dimensional computer tomography (3DCT) was performed in patients with various diseases to visualize stereoscopically the deformity of the craniofacial bones. The data obtained were analyzed by the 3DCT analyzing system. A new coordinate system was established using the median sagittal plane of the face (a plane passing through sella, nasion and basion) on the three-dimensional image. Three-dimensional profilograms were prepared for detailed analysis of the deformation of craniofacial bones for cleft lip and palate, mandibular prognathia and hemifacial microsomia. For patients, asymmetry in the frontal view and twist-formed complicated deformities were observed, as well as deformity of profiles in the anteroposterior and up-and-down directions. A newly developed technique allows three-dimensional visualization of changes in craniofacial deformity. It would aid in determining surgical strategy, including craniofacial surgery and maxillofacial surgery, and in evaluating surgical outcome. (N.K.)

  16. Three-dimensional analysis of craniofacial bones using three-dimensional computer tomography

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Ichiro; Ohura, Takehiko; Kimura, Chu (Hokkaido Univ., Sapporo (Japan). School of Medicine) (and others)

    1989-08-01

    Three-dimensional computer tomography (3DCT) was performed in patients with various diseases to visualize stereoscopically the deformity of the craniofacial bones. The data obtained were analyzed by the 3DCT analyzing system. A new coordinate system was established using the median sagittal plane of the face (a plane passing through sella, nasion and basion) on the three-dimensional image. Three-dimensional profilograms were prepared for detailed analysis of the deformation of craniofacial bones for cleft lip and palate, mandibular prognathia and hemifacial microsomia. For patients, asymmetry in the frontal view and twist-formed complicated deformities were observed, as well as deformity of profiles in the anteroposterior and up-and-down directions. A newly developed technique allows three-dimensional visualization of changes in craniofacial deformity. It would aid in determining surgical strategy, including craniofacial surgery and maxillofacial surgery, and in evaluating surgical outcome. (N.K.).

  17. Three-dimensional window analysis for detecting positive selection at structural regions of proteins.

    Science.gov (United States)

    Suzuki, Yoshiyuki

    2004-12-01

    Detection of natural selection operating at the amino acid sequence level is important in the study of molecular evolution. Single-site analysis and one-dimensional window analysis can be used to detect selection when the biological functions of amino acid sites are unknown. Single-site analysis is useful when selection operates more or less constantly over evolutionary time, but less so when selection operates temporarily. One-dimensional window analysis is more sensitive than single-site analysis when the functions of amino acid sites in close proximity in the linear sequence are similar, although this is not always the case. Here I present a three-dimensional window analysis method for detecting selection given the three-dimensional structure of the protein of interest. In the three-dimensional structure, the window is defined as the sphere centered on the alpha-carbon of an amino acid site. The window size is the radius of the sphere. The sites whose alpha-carbons are included in the window are grouped for the neutrality test. The window is moved within the three-dimensional structure by sequentially moving the central site along the primary amino acid sequence. To detect positive selection, it may also be useful to group the surface-exposed sites in the window separately. Three-dimensional window analysis appears not only to be more sensitive than single-site analysis and one-dimensional window analysis but also to provide similar specificity for inferring positive selection in the analyses of the hemagglutinin and neuraminidase genes of human influenza A viruses. This method, however, may fail to detect selection when it operates only on a particular site, in which case single-site analysis may be preferred, although a large number of sequences is required.
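The spherical-window construction described above can be sketched directly; the coordinates below are toy values standing in for C-alpha positions:

```python
import numpy as np

def spherical_windows(coords, radius=10.0):
    """For each site's C-alpha coordinate, return the indices of all sites
    whose C-alpha lies within `radius` of it (the site itself included)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return [np.flatnonzero(row <= radius) for row in d]

# Toy structure: 5 residues along a line, 4 units apart (hypothetical)
coords = np.array([[4.0 * i, 0.0, 0.0] for i in range(5)])
wins = spherical_windows(coords, radius=10.0)
print(wins[0])  # sites 0, 1, 2 lie within radius 10 of site 0
```

Each returned index set is the group of sites pooled for the neutrality test as the window's center moves along the primary sequence.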

  18. Dimensional analysis system for fuel elements in plate format: experience in hot cells

    International Nuclear Information System (INIS)

    Carneiro, Orozimbo J.; Dutra Neto, Aimore; Campos, Tarcisio P.R.; Dias, Ailton F.

    2002-01-01

    This paper describes a system for visual and dimensional analysis of compact-core reactor fuel elements in plate format. The system, composed of a co-ordinated x, y, z computerized table, has to be operative inside a hot cell for the visual inspection and dimensional measurement of post-irradiated fuel elements. The control method of the x, y, z table axes, the data acquisition and the process control technique using a computer are described. Experimental data handling and the expected future of the project are presented and commented on. This work expands previous investigations on a dimensional analysis system carried out by the Brazilian Navy Technological Center in Sao Paulo (CTMSP). (author)

  19. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
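The trilinear decomposition that PARAFAC estimates can be sketched with a plain alternating-least-squares fit. The following numpy sketch implements generic three-way PARAFAC, not the functional basis-expansion variant proposed in the paper; all function names are hypothetical:

```python
import numpy as np

def khatri_rao(a, b):
    # Column-wise Kronecker product of two factor matrices
    return np.einsum('ir,jr->ijr', a, b).reshape(-1, a.shape[1])

def parafac(X, rank, n_iter=200, seed=0):
    """Rank-R PARAFAC of a 3-way array X via alternating least squares.
    Returns factor matrices A (I x R), B (J x R), C (K x R) such that
    X[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, J * K)                     # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

The functional extension in the paper replaces the raw factor matrices with smooth functions expanded in basis functions (finite elements for the two-dimensional mode), but the alternating estimation idea is the same.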

  20. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis, in practical large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
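The classifier under study is a Gaussian discriminant rule with a regularized covariance estimate. A minimal numpy sketch of one common regularization (shrinking each class covariance toward the identity) is given below; this illustrates the classifier family, not the paper's RMT analysis, and the class name and the gamma parameterization are assumptions:

```python
import numpy as np

class RDAClassifier:
    """Gaussian discriminant classifier with ridge-type covariance
    regularization: Sigma_k(gamma) = (1 - gamma) * Sigma_k + gamma * I."""
    def __init__(self, gamma=0.1):
        self.gamma = gamma

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.precisions_, self.logdets_, self.priors_ = [], [], [], []
        n, p = X.shape
        for c in self.classes_:
            Xc = X[y == c]
            S = np.cov(Xc, rowvar=False)
            S = (1 - self.gamma) * S + self.gamma * np.eye(p)  # shrinkage
            self.means_.append(Xc.mean(axis=0))
            self.precisions_.append(np.linalg.inv(S))
            self.logdets_.append(np.linalg.slogdet(S)[1])
            self.priors_.append(len(Xc) / n)
        return self

    def predict(self, X):
        scores = []
        for mu, P, ld, pi in zip(self.means_, self.precisions_,
                                 self.logdets_, self.priors_):
            d = X - mu
            # quadratic discriminant score (log posterior up to a constant)
            scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, P, d)
                          - 0.5 * ld + np.log(pi))
        return self.classes_[np.argmax(scores, axis=0)]
```

The regularization parameter gamma plays the role of the quantity whose optimal value the paper's asymptotic error expression allows one to pre-estimate.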

  1. Dimensional analysis, falling bodies, and the fine art of not solving differential equations

    Science.gov (United States)

    Bohren, Craig F.

    2004-04-01

    Dimensional analysis is a simple, physically transparent and intuitive method for obtaining approximate solutions to physics problems, especially in mechanics. It may, and sometimes should, precede or even supplant mathematical analysis. And yet dimensional analysis usually is given short shrift in physics textbooks, presented mostly as a diagnostic tool for finding errors in solutions rather than as a means of finding solutions in the first place. Dimensional analysis is especially well suited to estimating the magnitude of errors associated with the inevitable simplifying assumptions in physics problems. For example, dimensional arguments quickly yield estimates for the errors in the simple expression √(2h/g) for the descent time of a body dropped from a height h on a spherical, rotating planet with an atmosphere as a consequence of ignoring the variation of the acceleration due to gravity g with height, rotation, relativity, and atmospheric drag.
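The dimensional argument behind the descent-time expression can be written out explicitly. The following is the standard textbook derivation, not quoted from the paper:

```latex
% Seek t = C\,h^{a} g^{b} for a dimensionless constant C.
[t] = \mathrm{T}, \qquad [h] = \mathrm{L}, \qquad [g] = \mathrm{L}\,\mathrm{T}^{-2}
\;\Longrightarrow\;
\mathrm{T} = \mathrm{L}^{\,a+b}\,\mathrm{T}^{-2b}
\;\Longrightarrow\; a + b = 0, \quad -2b = 1,
```

so a = 1/2 and b = -1/2, giving t = C √(h/g). Dimensional analysis alone cannot fix the constant; solving the equation of motion for free fall from rest gives C = √2, recovering t = √(2h/g).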

  2. New method of three-dimensional reconstruction from two-dimensional MR data sets

    International Nuclear Information System (INIS)

    Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.

    1989-01-01

    In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The authors' method reduces the problem of constructing a shape over the cross sections to one of constructing a sequence of partial shapes, each of them connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in that specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved, section by section, by pruning the Delaunay triangulations.

  3. Three-dimensional analysis of mandibular growth and tooth eruption

    DEFF Research Database (Denmark)

    Krarup, S.; Darvann, Tron Andre; Larsen, Per

    2005-01-01

    Normal and abnormal jaw growth and tooth eruption are topics of great importance for several dental and medical disciplines. Thus far, clinical studies on these topics have used two-dimensional (2D) radiographic techniques. The purpose of the present study was to analyse normal mandibular growth...... and tooth eruption in three dimensions based on computer tomography (CT) scans, extending the principles of mandibular growth analysis proposed by Bjork in 1969 from two to three dimensions. As longitudinal CT data from normal children are not available (for ethical reasons), CT data from children......, relocated laterally during growth. Furthermore, the position of tooth buds remained relatively stable inside the jaw until root formation started. Eruption paths of canines and premolars were vertical, whereas molars erupted in a lingual direction. The 3D method would seem to offer new insight into jaw...

  4. Modeling of three-dimensional diffusible resistors with the one-dimensional tube multiplexing method

    International Nuclear Information System (INIS)

    Gillet, Jean-Numa; Degorce, Jean-Yves; Meunier, Michel

    2009-01-01

    Electronic-behavior modeling of three-dimensional (3D) p+-π-p+ and n+-ν-n+ semiconducting diffusible devices with highly accurate resistances for the design of analog resistors, which are compatible with the CMOS (complementary-metal-oxide-semiconductor) technologies, is performed in three dimensions with the fast tube multiplexing method (TMM). The current–voltage (I–V) curve of a silicon device is usually computed with traditional device simulators of technology computer-aided design (TCAD) based on the finite-element method (FEM). However, for the design of 3D p+-π-p+ and n+-ν-n+ diffusible resistors, they show a high computational cost and convergence that may fail with fully non-separable 3D dopant concentration profiles as observed in many diffusible resistors resulting from laser trimming. These problems are avoided with the proposed TMM, which divides the 3D resistor into one-dimensional (1D) thin tubes with longitudinal axes following the main orientation of the average electrical field in the tubes. The I–V curve is rapidly obtained for a device with a realistic 3D dopant profile, since a system of three first-order ordinary differential equations has to be solved for each 1D multiplexed tube with the TMM instead of three second-order partial differential equations in the traditional TCADs. Simulations with the TMM are successfully compared to experimental results from silicon-based 3D resistors fabricated by laser-induced dopant diffusion in the gaps of MOSFETs (metal-oxide-semiconductor field-effect transistors) without initial gate. Using thin tubes with other shapes than parallelepipeds, such as ring segments with toroidal lateral surfaces, the TMM can be generalized to electronic devices with other types of 3D diffusible microstructures

  5. Exact rebinning methods for three-dimensional PET.

    Science.gov (United States)

    Liu, X; Defrise, M; Michel, C; Sibomana, M; Comtat, C; Kinahan, P; Townsend, D

    1999-08-01

    The high computational cost of data processing in volume PET imaging is still hindering the routine application of this successful technique, especially in the case of dynamic studies. This paper describes two new algorithms based on an exact rebinning equation, which can be applied to accelerate the processing of three-dimensional (3-D) PET data. The first algorithm, FOREPROJ, is a fast-forward projection algorithm that allows calculation of the 3-D attenuation correction factors (ACF's) directly from a two-dimensional (2-D) transmission scan, without first reconstructing the attenuation map and then performing a 3-D forward projection. The use of FOREPROJ speeds up the estimation of the 3-D ACF's by more than a factor of five. The second algorithm, FOREX, is a rebinning algorithm that is also more than five times faster than the standard reprojection algorithm (3DRP) and does not suffer from the image distortions generated by the even faster approximate Fourier rebinning (FORE) method at large axial apertures. However, FOREX is probably not required by most existing scanners, as their axial apertures are not large enough to show improvements over FORE with clinical data. Both algorithms have been implemented and applied to data simulated for a scanner with a large axial aperture (30 degrees), and also to data acquired with the ECAT HR and the ECAT HR+ scanners. Results demonstrate the excellent accuracy achieved by these algorithms and the important speedup when the sinogram sizes are powers of two.

  6. LDSScanner: Exploratory Analysis of Low-Dimensional Structures in High-Dimensional Datasets.

    Science.gov (United States)

    Xia, Jiazhi; Ye, Fenjin; Chen, Wei; Wang, Yusi; Chen, Weifeng; Ma, Yuxin; Tung, Anthony K H

    2018-01-01

    Many approaches for analyzing a high-dimensional dataset assume that the dataset contains specific structures, e.g., clusters in linear subspaces or non-linear manifolds. This yields a trial-and-error process to verify the appropriate model and parameters. This paper contributes an exploratory interface that supports visual identification of low-dimensional structures in a high-dimensional dataset, and facilitates the optimized selection of data models and configurations. Our key idea is to abstract a set of global and local feature descriptors from the neighborhood graph-based representation of the latent low-dimensional structure, such as pairwise geodesic distance (GD) among points and pairwise local tangent space divergence (LTSD) among pointwise local tangent spaces (LTS). We propose a new LTSD-GD view, which is constructed by mapping LTSD and GD to the y axis and x axis using 1D multidimensional scaling, respectively. Unlike traditional dimensionality reduction methods that preserve various kinds of distances among points, the LTSD-GD view presents the distribution of pointwise LTS (y axis) and the variation of LTS in structures (the combination of x axis and y axis). We design and implement a suite of visual tools for navigating and reasoning about intrinsic structures of a high-dimensional dataset. Three case studies verify the effectiveness of our approach.
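One of the feature descriptors above, pairwise geodesic distance, is conventionally approximated by shortest paths on a k-nearest-neighbor graph. The sketch below illustrates that standard construction with a pure-numpy Floyd-Warshall pass; it is not the LDSScanner implementation, and the function name and parameters are hypothetical:

```python
import numpy as np

def pairwise_geodesic(X, k=4):
    """Approximate pairwise geodesic distances among points X (n x d):
    build a symmetric k-nearest-neighbor graph weighted by Euclidean
    distance, then run Floyd-Warshall to obtain graph shortest paths."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:     # k nearest neighbors of i
            G[i, j] = G[j, i] = D[i, j]
    for m in range(n):                          # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G
```

On curved structures the geodesic distance exceeds the straight-line distance, which is exactly the signal such descriptors exploit; for example, for points sampled along a semicircle the endpoint-to-endpoint geodesic is longer than the chord between them.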

  7. Deriving Shape-Based Features for C. elegans Locomotion Using Dimensionality Reduction Methods.

    Science.gov (United States)

    Gyenes, Bertalan; Brown, André E X

    2016-01-01

    High-throughput analysis of animal behavior is increasingly common following the advances of recording technology, leading to large high-dimensional data sets. This dimensionality can sometimes be reduced while still retaining relevant information. In the case of the nematode worm Caenorhabditis elegans, more than 90% of the shape variance can be captured using just four principal components. However, it remains unclear if other methods can achieve a more compact representation or contribute further biological insight to worm locomotion. Here we take a data-driven approach to worm shape analysis using independent component analysis (ICA), non-negative matrix factorization (NMF), a cosine series, and jPCA (a dynamic variant of principal component analysis [PCA]) and confirm that the dimensionality of worm shape space is close to four. Projecting worm shapes onto the bases derived using each method gives interpretable features ranging from head movements to tail oscillation. We use these as a comparison method to find differences between the wild type N2 worms and various mutants. For example, we find that the neuropeptide mutant nlp-1(ok1469) has an exaggerated head movement suggesting a mode of action for the previously described increased turning rate. The different bases provide complementary views of worm behavior and we expect that closer examination of the time series of projected amplitudes will lead to new results in the future.
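The dimensionality claim above, that a handful of principal components captures most of the shape variance, is easy to check numerically. A minimal sketch on synthetic data (the function name and the toy data are assumptions, not the authors' worm data):

```python
import numpy as np

def pca_explained_variance(shapes, n_components=4):
    """Fraction of total variance captured by the leading principal
    components of a (samples x features) matrix of shape descriptors."""
    X = shapes - shapes.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)   # singular values
    var = s ** 2
    return var[:n_components].sum() / var.sum()

# Toy data: 200 "shapes" generated from 4 latent modes plus small noise
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 48))
noisy = latent + 0.01 * rng.standard_normal((200, 48))
frac = pca_explained_variance(noisy, 4)
```

For data genuinely generated by four modes, as hypothesized for worm shapes, the returned fraction is close to one; ICA, NMF, or jPCA change the basis within that low-dimensional space rather than its dimension.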

  8. Deriving shape-based features for C. elegans locomotion using dimensionality reduction methods

    Directory of Open Access Journals (Sweden)

    Bertalan Gyenes

    2016-08-01

    Full Text Available High-throughput analysis of animal behavior is increasingly common following advances of recording technology, leading to large high-dimensional data sets. This dimensionality can sometimes be reduced while still retaining relevant information. In the case of the nematode worm Caenorhabditis elegans, more than 90% of the shape variance can be captured using just four principal components. However, it remains unclear if other methods can achieve a more compact representation or contribute further biological insight to worm locomotion. Here we take a data-driven approach to worm shape analysis using independent component analysis (ICA), non-negative matrix factorization (NMF), a cosine series, and jPCA (a dynamic variant of principal component analysis) and confirm that the dimensionality of worm shape space is close to four. Projecting worm shapes onto the bases derived using each method gives interpretable features ranging from head movements to tail oscillation. We use these as a comparison method to find differences between the wild type N2 worms and various mutants. For example, we find that the neuropeptide mutant nlp-1(ok1469) has an exaggerated head movement suggesting a mode of action for the previously described increased turning rate. The different bases provide complementary views of worm behavior and we expect that closer examination of the time series of projected amplitudes will lead to new results in the future.

  9. CFD three dimensional wake analysis in complex terrain

    Science.gov (United States)

    Castellani, F.; Astolfi, D.; Terzi, L.

    2017-11-01

    Even if wind energy technology is nowadays fully developed, the use of wind energy in very complex terrain is still challenging. In particular, it is challenging to characterize the combined effects of wind flow over complex terrain and of wake interactions between nearby turbines; this also has practical relevance for mitigating anomalous vibrations and loads as well as for improving farm efficiency. In this work, a very complex terrain site has been analyzed through a Reynolds-averaged CFD (Computational Fluid Dynamics) numerical wind field model; in the simulation the influence of wakes has been included through the Actuator Disk (AD) approach. In particular, the upstream turbine of a cluster of 4 wind turbines having 2.3 MW of rated power is studied. The objective of this study is investigating the full three-dimensional wind field and the impact of three-dimensionality on the evolution of the waked area between nearby turbines. A post-processing method for the output of the CFD simulation is developed, which allows estimation of the wake lateral deviation and the wake width. The reliability of the numerical approach is inspired by and crosschecked through the analysis of the operational SCADA (Supervisory Control and Data Acquisition) data of the cluster of interest.

  10. The value of preoperative 3-dimensional over 2-dimensional valve analysis in predicting recurrent ischemic mitral regurgitation after mitral annuloplasty

    NARCIS (Netherlands)

    Wijdh-den Hamer, Inez J.; Bouma, Wobbe; Lai, Eric K.; Levack, Melissa M.; Shang, Eric K.; Pouch, Alison M.; Eperjesi, Thomas J.; Plappert, Theodore J.; Yushkevich, Paul A.; Hung, Judy; Mariani, Massimo A.; Khabbaz, Kamal R.; Gleason, Thomas G.; Mahmood, Feroze; Acker, Michael A.; Woo, Y. Joseph; Cheung, Albert T.; Gillespie, Matthew J.; Jackson, Benjamin M.; Gorman, Joseph H.; Gorman, Robert C.

    Objectives: Repair for ischemic mitral regurgitation with undersized annuloplasty is characterized by high recurrence rates. We sought to determine the value of pre-repair 3-dimensional echocardiography over 2-dimensional echocardiography in predicting recurrence at 6 months. Methods: Intraoperative

  11. The Use of Iterative Methods to Solve Two-Dimensional Nonlinear Volterra-Fredholm Integro-Differential Equations

    Directory of Open Access Journals (Sweden)

    shadan sadigh behzadi

    2012-03-01

    Full Text Available In this present paper, we solve a two-dimensional nonlinear Volterra-Fredholm integro-differential equation by using the following powerful, efficient but simple methods: (i) Modified Adomian decomposition method (MADM), (ii) Variational iteration method (VIM), (iii) Homotopy analysis method (HAM) and (iv) Modified homotopy perturbation method (MHPM). The uniqueness of the solution and the convergence of the proposed methods are proved in detail. Numerical examples are studied to demonstrate the accuracy of the presented methods.

  12. Robust Adaptive Lasso method for parameter's estimation and variable selection in high-dimensional sparse models

    OpenAIRE

    Wahid, Abdul; Khan, Dost Muhammad; Hussain, Ijaz

    2017-01-01

    High dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue, different penalized regression procedures have been introduced in the literature, but these methods cannot cope with the problem of outliers and leverage points in heavy-tailed high dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed which is based on a Pearson-residuals weighting scheme. The weight f...
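The adaptive-lasso core that such procedures build on can be sketched compactly: penalty weights are derived from an initial estimate so that large coefficients are penalized less, and the weighted L1 problem is solved by proximal gradient descent. This is a generic sketch under those assumptions, not the paper's RAL with Pearson-residual weighting; names and defaults are hypothetical:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam=0.1, n_iter=2000):
    """Adaptive lasso by proximal gradient descent (ISTA):
    penalty weights w_j = 1/|beta0_j| come from an initial ridge
    estimate, so large coefficients are shrunk less than small ones."""
    n, p = X.shape
    beta0 = np.linalg.solve(X.T @ X + 1e-3 * np.eye(p), X.T @ y)  # initial estimate
    w = 1.0 / (np.abs(beta0) + 1e-8)
    L = np.linalg.norm(X, 2) ** 2 / n       # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam * w / L)
    return beta
```

On sparse ground truth the data-dependent weights drive truly zero coefficients to exact zeros while leaving the active coefficients nearly unbiased; robustifying the weights against outliers is the contribution the abstract describes.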

  13. Testing the numerical method for one-dimensional shock treatment

    International Nuclear Information System (INIS)

    Horvat, A.

    1998-01-01

    In the early 80's the SMUP computer code was developed at the Jozef Stefan Institute for simulation of two-phase flow in steam generators. It was suitable only for steady-state problems and was unable to simulate transient behavior. In this paper, efforts are presented to find a suitable numerical method to renew the old SMUP computer code. The obsolete numerical code has to be replaced with a more efficient one that would be able to treat time-dependent problems. It also has to ensure an accurate solution during shock propagation. One-dimensional shock propagation in a tube was studied at zero viscosity. To simplify the equation of state, an ideal gas was chosen as the working fluid. Stability margins in the form of transport matrix eigenvalues were calculated. Results were found to be close to those already published. (author)
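A one-dimensional inviscid ideal-gas shock test of the kind described can be sketched with a simple conservative scheme. The abstract does not name the numerical method, so the first-order Lax-Friedrichs scheme below, applied to the classic Sod shock-tube initial data, is an illustrative assumption rather than the SMUP replacement code:

```python
import numpy as np

def sod_shock_tube(nx=200, t_end=0.15, gamma=1.4, cfl=0.9):
    """First-order Lax-Friedrichs solution of the 1D Euler equations
    (inviscid ideal gas) for the Sod shock-tube problem on [0, 1].
    Returns cell centers and the density profile at t_end."""
    x = (np.arange(nx) + 0.5) / nx
    rho = np.where(x < 0.5, 1.0, 0.125)      # diaphragm at x = 0.5
    p = np.where(x < 0.5, 1.0, 0.1)
    u = np.zeros(nx)
    E = p / (gamma - 1) + 0.5 * rho * u ** 2
    U = np.stack([rho, rho * u, E])          # conserved variables
    dx, t = 1.0 / nx, 0.0
    while t < t_end:
        rho, m, E = U
        u = m / rho
        p = (gamma - 1) * (E - 0.5 * rho * u ** 2)
        c = np.sqrt(gamma * p / rho)                 # sound speed
        dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)
        F = np.stack([m, m * u + p, (E + p) * u])    # physical flux
        U = (0.5 * (np.roll(U, -1, axis=1) + np.roll(U, 1, axis=1))
             - dt / (2 * dx) * (np.roll(F, -1, axis=1) - np.roll(F, 1, axis=1)))
        # restore outflow boundaries overwritten by the periodic roll
        U[:, 0], U[:, -1] = U[:, 1], U[:, -2]
        t += dt
    return x, U[0]
```

The CFL time-step restriction plays the role of the stability margin mentioned in the abstract: the maximum characteristic speed |u| + c bounds the eigenvalues of the transport matrix.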

  14. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  15. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  16. Rotational Linear Discriminant Analysis Using Bayes Rule for Dimensionality Reduction

    OpenAIRE

    Alok Sharma; Kuldip K. Paliwal

    2006-01-01

    Linear discriminant analysis (LDA) finds an orientation that projects high dimensional feature vectors to reduced dimensional feature space in such a way that the overlapping between the classes in this feature space is minimum. This overlapping is usually finite and produces finite classification error which is further minimized by rotational LDA technique. This rotational LDA technique rotates the classes individually in the original feature space in a manner that enables further reduction ...
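The baseline that rotational LDA refines is the classical Fisher discriminant direction. For the two-class case it has a closed form, sketched below in numpy; this illustrates plain LDA only, not the per-class rotation step, and the function name is hypothetical:

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA: the unit projection direction maximizing
    between-class over within-class scatter, w proportional to
    Sw^{-1} (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (pooled, unnormalized)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)
```

Projecting onto this direction minimizes class overlap for the fixed original feature space; the rotational variant then rotates each class individually to reduce the residual overlap further.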

  17. Comparison of two three-dimensional cephalometric analysis computer software

    OpenAIRE

    Sawchuk, Dena; Alhadlaq, Adel; Alkhadra, Thamer; Carlyle, Terry D; Kusnoto, Budi; El-Bialy, Tarek

    2014-01-01

    Background: Three-dimensional cephalometric analyses are attracting growing attention in orthodontics. The aim of this study was to compare two software packages for evaluating three-dimensional cephalometric analyses of orthodontic treatment outcomes. Materials and Methods: Twenty cone beam computed tomography images were obtained using the i-CAT® imaging system from patients' records as part of their regular orthodontic records. The images were analyzed using InVivoDental5.0 (Anatomage Inc.) and 3DCeph™ (Unive...

  18. Methods for RNA Analysis

    DEFF Research Database (Denmark)

    Olivarius, Signe

    While increasing evidence appoints diverse types of RNA as key players in the regulatory networks underlying cellular differentiation and metabolism, the potential functions of thousands of conserved RNA structures encoded in mammalian genomes remain to be determined. Since the functions of most...... RNAs rely on interactions with proteins, the establishment of protein-binding profiles is essential for the characterization of RNAs. Aiming to facilitate RNA analysis, this thesis introduces proteomics- as well as transcriptomics-based methods for the functional characterization of RNA. First, RNA......-protein pulldown combined with mass spectrometry analysis is applied for in vivo as well as in vitro identification of RNA-binding proteins, the latter succeeding in verifying known RNA-protein interactions. Secondly, acknowledging the significance of flexible promoter usage for the diversification...

  19. [Snoring analysis methods].

    Science.gov (United States)

    Fiz Fernández, José Antonio; Solà Soler, Jordi; Jané Campos, Raimon

    2011-06-11

    Snoring is a breathing sound that originates during sleep, either nocturnal or diurnal. Snoring may be inspiratory, expiratory or it may occupy the whole breathing cycle. It is caused by the vibration of the different tissues of the upper airway. Many procedures have been used to analyze it, from simple interrogation to standardized questionnaires to more sophisticated acoustic methods developed thanks to the advances in biomedical techniques in recent years. The present work describes the current state of the art of snoring analysis procedures. Copyright © 2010 Elsevier España, S.L. All rights reserved.

  20. A method for transient, three-dimensional neutron transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Waddell, M.W. Jr. (Oak Ridge Y-12 Plant, TN (United States)); Dodds, H.L. (Tennessee Univ., Knoxville, TN (United States))

    1992-12-28

    This paper describes the development and evaluation of a method for solving the time-dependent, three-dimensional Boltzmann transport model with explicit representation of delayed neutrons. A hybrid stochastic/deterministic technique is utilized with a Monte Carlo code embedded inside of a quasi-static kinetics framework. The time-dependent flux amplitude, which is usually fast varying, is computed deterministically by a conventional point kinetics algorithm. The point kinetics parameters, reactivity and generation time as well as the flux shape, which is usually slowly varying in time, are computed stochastically during the random walk of the Monte Carlo calculation. To verify the accuracy of this new method, several computational benchmark problems from the Argonne National Laboratory benchmark book, ANL-7416, were calculated. The results are shown to be in reasonably good agreement with other independently obtained solutions. The results obtained in this work indicate that the method/code is working properly and that it is economically feasible for many practical applications provided a dedicated high performance workstation is available.
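In the quasi-static framework above, the fast-varying amplitude is advanced by conventional point kinetics while Monte Carlo supplies the shape and the kinetics parameters. A minimal sketch of one-delayed-group point kinetics with an explicit Euler integrator is given below; it illustrates only the deterministic amplitude step, not the authors' hybrid Monte Carlo code, and the parameter values are illustrative assumptions:

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, Lambda=1e-4,
                   t_end=1.0, dt=1e-5):
    """One-delayed-group point kinetics, explicit Euler integration:
        dn/dt = ((rho - beta) / Lambda) * n + lam * C
        dC/dt = (beta / Lambda) * n - lam * C
    Starts from the critical steady state n = 1, C = beta*n/(lam*Lambda),
    and returns the amplitude n at t_end for a constant reactivity rho."""
    n = 1.0
    C = beta * n / (lam * Lambda)            # delayed-precursor equilibrium
    for _ in range(int(round(t_end / dt))):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n
```

With zero reactivity the amplitude stays at its steady-state value, while a positive reactivity below prompt critical produces the familiar prompt jump followed by slow exponential growth.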

  1. A method for transient, three-dimensional neutron transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Waddell, M.W. Jr. (Martin Marietta Energy Systems, Inc. (United States)); Dodds, H.L. (Univ. of Tennessee (United States))

    1993-04-01

    This paper describes the development and evaluation of a method for solving the time-dependent, three-dimensional Boltzmann transport model with explicit representation of delayed neutrons. A hybrid stochastic/deterministic technique is utilized with a Monte Carlo code embedded inside of a quasi-static kinetics framework. The time-dependent flux amplitude, which is usually fast varying, is computed deterministically by a conventional point kinetics algorithm. The point kinetics parameters, reactivity and generation time as well as the flux shape, which is usually slowly varying in time, are computed stochastically during the random walk of the Monte Carlo calculation. To verify the accuracy of this new method, several computational benchmark problems from the Argonne National Laboratory benchmark book, ANL-7416, were calculated. The results are shown to be in reasonably good agreement with other independently obtained solutions. The results obtained in this work indicate that the method/code is working properly and that it is economically feasible for many practical applications provided a dedicated high performance workstation is available. (orig.)

  2. Assessment of altered three-dimensional blood characteristics in aortic disease by velocity distribution analysis

    NARCIS (Netherlands)

    Garcia, Julio; Barker, Alex J.; van Ooij, Pim; Schnell, Susanne; Puthumana, Jyothy; Bonow, Robert O.; Collins, Jeremy D.; Carr, James C.; Markl, Michael

    2015-01-01

    Purpose: To test the feasibility of velocity distribution analysis for identifying altered three-dimensional (3D) flow characteristics in patients with aortic disease based on 4D flow MRI volumetric analysis. Methods: Forty patients with aortic (Ao) dilation (mid ascending aortic diameter MAA=407 mm,

  3. Visual Analysis of Multi-Dimensional Categorical Data Sets

    NARCIS (Netherlands)

    Broeksema, Bertjan; Telea, Alexandru C.; Baudel, Thomas

    2013-01-01

    We present a set of interactive techniques for the visual analysis of multi-dimensional categorical data. Our approach is based on multiple correspondence analysis (MCA), which allows one to analyse relationships, patterns, trends and outliers among dependent categorical variables. We use MCA as a

  4. Three-dimensional free vibration analysis of thick laminated circular ...

    African Journals Online (AJOL)

    Three-dimensional free vibration analysis of thick laminated circular plates. Sumit Khare, N.D. Mittal. Abstract. In this communication, a numerical analysis regarding free vibration of thick laminated circular plates, having free, clamped as well as simply-supported boundary conditions at outer edges of plates is presented.

  5. Establishing two-dimensional gels for the analysis of Leishmania proteomes.

    Science.gov (United States)

    Acestor, Nathalie; Masina, Slavica; Walker, John; Saravia, Nancy G; Fasel, Nicolas; Quadroni, Manfredo

    2002-07-01

    Several different sample preparation methods for two-dimensional electrophoresis (2-DE) analysis of Leishmania parasites were compared. From this work, we were able to identify a solubilization method using Nonidet P-40 as detergent, which was simple to follow, and which produced 2-DE gels of high resolution and reproducibility.

  6. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments

    NARCIS (Netherlands)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode,

  7. Accuracy of three-dimensional seismic ground response analysis in time domain using nonlinear numerical simulations

    Science.gov (United States)

    Liang, Fayun; Chen, Haibing; Huang, Maosong

    2017-07-01

    To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of the three-dimensional soil column was chosen so as to satisfy the specified maximum frequency. As the soil properties were significantly different, the layered soil column was divided into multiple sub-soils, each with a different viscous damping matrix according to its shear velocity. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.
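The mesh requirement mentioned above, that the element size must resolve the specified maximum frequency, is commonly expressed as a minimum number of nodes per shortest propagating shear wavelength. The abstract does not state the rule it used, so the conventional form below (with an assumed 8 points per wavelength) is an illustration:

```python
def max_element_size(vs, f_max, nodes_per_wavelength=8):
    """Rule-of-thumb vertical element size for a soil-column mesh:
    resolve the shortest shear wavelength Vs/f_max with at least
    `nodes_per_wavelength` points, i.e. h <= Vs / (n * f_max)."""
    return vs / (nodes_per_wavelength * f_max)

# Example: soft soil with Vs = 200 m/s analyzed up to 25 Hz
h = max_element_size(vs=200.0, f_max=25.0)
```

Softer layers (lower Vs) and higher target frequencies both demand smaller elements, which is why layered columns are meshed layer by layer.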

  8. The Use of Iterative Methods to Solve Two-Dimensional Nonlinear Volterra-Fredholm Integro-Differential Equations

    OpenAIRE

    shadan sadigh behzadi

    2012-01-01

    In this present paper, we solve a two-dimensional nonlinear Volterra-Fredholm integro-differential equation by using the following powerful, efficient but simple methods: (i) Modified Adomian decomposition method (MADM), (ii) Variational iteration method (VIM), (iii) Homotopy analysis method (HAM) and (iv) Modified homotopy perturbation method (MHPM). The uniqueness of the solution and the convergence of the proposed methods are proved in detail. Numerical examples are studied to demonstrate ...

  9. Three-dimensional model analysis and processing

    CERN Document Server

    Yu, Faxin; Luo, Hao; Wang, Pinghui

    2011-01-01

    This book focuses on five hot research directions in 3D model analysis and processing in computer science:  compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.

  10. 2-dimensional steam explosion analysis code development

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Song, J. H.; Kim, J. H.; Hong, S. W.; Min, B. T.; Kim, H. Y.; Kim, H. D.

    2003-12-01

    One technique that is considered for the purpose of analyzing and simulating the process of fuel-coolant interaction is the implementation of Lagrangian-Eulerian fields for the discrete molten fuel particles and the vapor-liquid coolant mixture. One example of a computer code that employs this technique is TEXAS-V. Unfortunately, while TEXAS-V has been used with varying degrees of success for many simulations, it has one distinct disadvantage in that it can only be applied to one-dimensional systems. Therefore, it lacks the ability to describe transients that occur in the direction normal to its main direction. Since systems of actual interest are more likely to be two- or three-dimensional, the applicability of TEXAS-V is quite limited in this regard. The first stage in the development of LeSiM, the modules for simulating the movements of the Lagrangian particles and their interactions with the surrounding fluid, has been completed. The development has now moved on to coupling the developed modules with a fluid dynamics code. For this purpose, the fluid dynamics code K-FIX was chosen. The objectives at this stage are to verify the feasibility of extending the fluid dynamics code with LeSiM and to test the capability of the extended code in simulating the interactions between the Lagrangian particles and the dynamic fluid. This report documents the modification of K-FIX to accommodate the extension with LeSiM. It also provides sample input files for the calculation, examples of the simulations, and their results. The manual is intended as a supplement to the manuals for K-FIX and LeSiM; therefore, it should be consulted together with these two volumes.

  11. Methods for geochemical analysis

    Science.gov (United States)

    Baedecker, Philip A.

    1987-01-01

    The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.

  12. Analysis and validation of carbohydrate three-dimensional structures

    International Nuclear Information System (INIS)

    Lütteke, Thomas

    2009-01-01

    The article summarizes the information that is gained from and the errors that are found in carbohydrate structures in the Protein Data Bank. Validation tools that can locate these errors are described. Knowledge of the three-dimensional structures of the carbohydrate molecules is indispensable for a full understanding of the molecular processes in which carbohydrates are involved, such as protein glycosylation or protein–carbohydrate interactions. The Protein Data Bank (PDB) is a valuable resource for three-dimensional structural information on glycoproteins and protein–carbohydrate complexes. Unfortunately, many carbohydrate moieties in the PDB contain inconsistencies or errors. This article gives an overview of the information that can be obtained from individual PDB entries and from statistical analyses of sets of three-dimensional structures, of typical problems that arise during the analysis of carbohydrate three-dimensional structures and of the validation tools that are currently available to scientists to evaluate the quality of these structures

  13. Methodology for dimensional variation analysis of ITER integrated systems

    Energy Technology Data Exchange (ETDEWEB)

    Fuentes, F. Javier, E-mail: FranciscoJavier.Fuentes@iter.org [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France); Trouvé, Vincent [Assystem Engineering & Operation Services, rue J-M Jacquard CS 60117, 84120 Pertuis (France); Cordier, Jean-Jacques; Reich, Jens [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France)

    2016-11-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of the ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of highly integrated complex systems, with critical functional requirements and reduced design clearances to minimize the impact on cost and performance. Tolerances and assembly accuracies in critical areas could have a serious impact on the final performance, compromising machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity in the project life cycle. A 3D tolerance simulation analysis of the ITER Tokamak machine has been developed based on the dedicated 3DCS software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation in critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by their application to the Vacuum Vessel (VV), considering the status of maturity of the VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  14. Methodology for dimensional variation analysis of ITER integrated systems

    International Nuclear Information System (INIS)

    Fuentes, F. Javier; Trouvé, Vincent; Cordier, Jean-Jacques; Reich, Jens

    2016-01-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of the ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of highly integrated complex systems, with critical functional requirements and reduced design clearances to minimize the impact on cost and performance. Tolerances and assembly accuracies in critical areas could have a serious impact on the final performance, compromising machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity in the project life cycle. A 3D tolerance simulation analysis of the ITER Tokamak machine has been developed based on the dedicated 3DCS software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation in critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by their application to the Vacuum Vessel (VV), considering the status of maturity of the VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  15. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of chaos in a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagrams. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
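
    The Lyapunov-exponent criterion used in the abstract can be illustrated on a far simpler system than the 11-dimensional wind power model. The sketch below (an illustrative assumption, not the authors' code) estimates the Lyapunov exponent of the one-dimensional logistic map by averaging log|f'(x)| along a trajectory; a positive exponent signals chaos:

```python
import math

def lyapunov_logistic(r, x0=0.1, n_transient=1000, n_iter=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along a trajectory."""
    x = x0
    for _ in range(n_transient):        # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n_iter

# r = 4.0 is fully chaotic; r = 2.8 has a stable fixed point
lam_chaotic = lyapunov_logistic(4.0)
lam_stable = lyapunov_logistic(2.8)
```

    For r = 4 the exponent converges to ln 2 ≈ 0.693, confirming chaos; for r = 2.8 the orbit settles on a fixed point and the exponent is negative, which is the same positivity test the abstract applies to each state variable of the wind power system.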

  16. Analysis of chaos in high-dimensional wind power system

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of chaos in a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagrams. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.

  17. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a few thousand (or a few hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data. The proposed method performed superbly in these limited computational studies.
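
    The computational advantage claimed above, working with an n × n kernel matrix rather than the m-dimensional feature space, can be sketched with plain kernel ridge regression (a hypothetical illustration with invented data, not the authors' AFT-specific estimator):

```python
import numpy as np

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-3):
    """Dual-form kernel ridge regression: alpha = (K + lam*I)^-1 y.
    Only an n x n system is solved, however many features X has."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix, n x n
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha, X, gamma

def kernel_ridge_predict(model, Xnew):
    alpha, Xtr, gamma = model
    sq = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))          # n = 40 samples, m = 500 "genes"
y = np.sin(X[:, 0])                     # outcome driven by a single feature
model = kernel_ridge_fit(X, y)
fit = kernel_ridge_predict(model, X)
```

    The linear algebra never touches a 500 × 500 object: all cost sits in the 40 × 40 Gram matrix, which is the point of the dual formulation the abstract describes.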

  18. Dimensionality

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1983-01-01

    The role played by the dimensions of space and space-time in determining the form of various physical laws and constants of Nature is examined. Low dimensional manifolds are also seen to possess special mathematical properties. The concept of fractal dimension is introduced and the recent renaissance of Kaluza-Klein theories obtained by dimensional reduction from higher dimensional gravity or supergravity theories is discussed. A formulation of the anthropic principle is suggested. (author)

  19. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available The article describes the basic statistical methods used in the genetic analysis of human traits: segregation analysis, linkage analysis, and analysis of allelic associations. Software supporting the implementation of these methods has been developed.

  20. Two-dimensional isostatic meshes in the finite element method

    OpenAIRE

    Martínez Marín, Rubén; Samartín, Avelino

    2002-01-01

    In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, the type and shape of the elements, the number of nodes per element, node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error) and the number of nodes and dof's...

  1. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patient's health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally perform slightly better than both in terms of mean squared error, when a bias-based analysis is used.
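
    The LASSO-type selection benchmarked above reduces to a few lines of coordinate descent. The sketch below (illustrative only; the dataset and penalty level are invented, and this is not the high-dimensional propensity score algorithm itself) shows how the L1 penalty zeroes out covariates that carry no signal:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5/n * ||y - Xb||^2 + lam * ||b||_1 by cyclic
    coordinate descent with soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]       # partial residual without j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))
beta_true = np.zeros(30)
beta_true[:3] = [2.0, -1.5, 1.0]                 # only 3 active covariates
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta_hat = lasso_cd(X, y, lam=0.1)
```

    The soft-threshold step sets every coefficient whose partial correlation falls below the penalty to exactly zero, which is the confounder-selection behaviour being compared against random forest and elastic net in the study.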

  2. Spectral dimensionality reduction based on integrated bispectrum phase for hyperspectral image analysis

    Science.gov (United States)

    Saipullah, Khairul Muzzammil; Kim, Deok-Hwan

    2011-11-01

    In this paper, we propose a method to reduce spectral dimension based on the phase of the integrated bispectrum. Because of the excellent and robust information extracted from the bispectrum, the proposed method can achieve high spectral classification accuracy even with a low-dimensional feature. The classification accuracy of the bispectrum with a one-dimensional feature is 98.8%, whereas those of principal component analysis (PCA) and independent component analysis (ICA) are 41.2% and 63.9%, respectively. The unsupervised segmentation accuracy of the bispectrum is also 20% and 40% greater than that of PCA and ICA, respectively.
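
    The PCA baseline used for comparison can be written in a few lines of NumPy (a generic sketch with synthetic data, not the authors' hyperspectral pipeline):

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                  # centre each spectral band
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()    # variance ratio per component
    return Xc @ Vt[:k].T, explained[:k]

rng = np.random.default_rng(0)
# 200 "pixels" with 50 spectral bands; variance concentrated in one direction
latent = rng.normal(size=(200, 1)) * 5.0
X = latent @ rng.normal(size=(1, 50)) + 0.1 * rng.normal(size=(200, 50))
Z, ratio = pca(X, 2)
```

    PCA keeps the directions of maximum variance regardless of class structure, which is one reason a discriminative or higher-order feature such as the bispectrum phase can outperform it on classification tasks.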

  3. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error, given that the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
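
    The regularization idea at the core of RLDA, shrinking the pooled sample covariance toward the identity before inverting it, can be sketched as follows (hypothetical data and shrinkage value; the thesis analyzes the asymptotics of such classifiers, not this toy code):

```python
import numpy as np

def rlda_fit(X0, X1, gamma=0.5):
    """Regularised LDA: Sigma_g = (1-gamma)*Sigma + gamma*I keeps the
    covariance invertible even when dimension is comparable to sample size."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    Sigma = Xc.T @ Xc / len(Xc)              # pooled sample covariance
    Sigma_g = (1 - gamma) * Sigma + gamma * np.eye(Sigma.shape[0])
    w = np.linalg.solve(Sigma_g, mu1 - mu0)  # discriminant direction
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

def rlda_predict(w, b, X):
    return (X @ w + b > 0).astype(int)       # 1 if the score favours class 1

rng = np.random.default_rng(2)
d = 50                                       # dimension comparable to n
X0 = rng.normal(size=(60, d))
X1 = rng.normal(size=(60, d)) + 1.0          # mean shifted by 1 in every axis
w, b = rlda_fit(X0, X1)
acc = (rlda_predict(w, b, np.vstack([X0, X1]))
       == np.array([0] * 60 + [1] * 60)).mean()
```

    Without the gamma term the pooled covariance would be ill-conditioned in this n ≈ d regime; the thesis's contribution is characterizing the error of such classifiers exactly as both quantities grow.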

  4. An extended method for obtaining S-boxes based on three-dimensional chaotic Baker maps

    International Nuclear Information System (INIS)

    Chen Guo; Chen Yong; Liao Xiaofeng

    2007-01-01

    Tang et al. proposed a novel method for obtaining S-boxes based on the well-known two-dimensional chaotic Baker map. Unfortunately, some mistakes exist in their paper. The faults are corrected first in this paper and then an extended method is put forward for acquiring cryptographically strong S-boxes. The new scheme employs a three-dimensional chaotic Baker map, which has more intensive chaotic characters than the two-dimensional one. In addition, the cryptographic properties such as the bijective property, the nonlinearity, the strict avalanche criterion, the output bits independence criterion and the equiprobable input/output XOR distribution are analyzed in detail for our S-box and revised Tang et al.'s one, respectively. The results of numerical analysis show that both of the two boxes can resist several attacks effectively and the three-dimensional chaotic map, a stronger sense in chaotic characters, can perform more smartly and more efficiently in designing S-boxes
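
    The general recipe, iterating a chaotic map and turning its trajectory into a permutation, can be sketched with the one-dimensional logistic map (an illustrative stand-in for the paper's three-dimensional Baker map; the parameters are arbitrary):

```python
def chaotic_sbox(r=3.99, x0=0.3):
    """Build a bijective 8-bit S-box: iterate the logistic map, then rank
    256 trajectory values; the ranking is a permutation of 0..255."""
    x = x0
    for _ in range(500):                     # discard the transient
        x = r * x * (1.0 - x)
    traj = []
    for _ in range(256):
        x = r * x * (1.0 - x)
        traj.append(x)
    order = sorted(range(256), key=lambda i: traj[i])
    sbox = [0] * 256
    for rank, idx in enumerate(order):
        sbox[idx] = rank                     # position -> rank mapping
    return sbox

sbox = chaotic_sbox()
```

    Bijectivity holds by construction, since ranking 256 trajectory values is itself a permutation of 0-255; the cryptographic criteria listed in the abstract (nonlinearity, strict avalanche, XOR distribution) would still have to be verified separately.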

  5. DMLLE: a large-scale dimensionality reduction method for detection of polyps in CT colonography

    Science.gov (United States)

    Wang, Shijun; Yao, Jianhua; Summers, Ronald M.

    2008-03-01

    Computer-aided diagnosis systems have been shown to be feasible for polyp detection on computed tomography (CT) scans. After 3-D image segmentation and feature extraction, the dataset of colonic polyp candidates has large-scale and high dimension characteristics. In this paper, we propose a large-scale dimensionality reduction method based on Diffusion Map and Locally Linear Embedding for detection of polyps in CT colonography. By selecting partial data as landmarks, we first map the landmarks into a low dimensional embedding space using Diffusion Map. Then by using Locally Linear Embedding algorithm, non-landmark samples are embedded into the same low dimensional space according to their nearest landmark samples. The local geometry of samples is preserved in both the original space and the embedding space. We applied the proposed method called DMLLE to a colonic polyp dataset which contains 58336 candidates (including 85 6-9mm true polyps) with 155 features. Visual inspection shows that true polyps with similar shapes are mapped to close vicinity in the low dimensional space. FROC analysis shows that SVM with DMLLE achieves higher sensitivity with lower false positives per patient than that of SVM using all features. At the false positives of 8 per patient, SVM with DMLLE improves the average sensitivity from 64% to 75% for polyps whose sizes are in the range from 6 mm to 9 mm (p < 0.05). This higher sensitivity is comparable to unaided readings by trained radiologists.

  6. Three Dimensional Analysis of Elastic Rocket and Launcher at Launching

    Science.gov (United States)

    Takeuchi, Shinsuke

    In this paper, a three-dimensional analysis of the launching dynamics of a sounding rocket is investigated. In the analysis, the elastic vibration of the vehicle and launcher is considered. To estimate the trajectory dispersion, including the effect of the elasticity of the vehicle and launcher, a three-dimensional numerical simulation of a launch is performed. The accuracy of the numerical simulation is discussed, and it is concluded that the simulation can estimate the maximum value of the trajectory dispersion properly. After that, the maximum value is estimated for the actual sounding rocket and the value is shown to be within the safety margin for this particular case.

  7. Mathematics Competency for Beginning Chemistry Students Through Dimensional Analysis.

    Science.gov (United States)

    Pursell, David P; Forlemu, Neville Y; Anagho, Leonard E

    2017-01-01

    Mathematics competency in nursing education and practice may be addressed by an instructional variation of the traditional dimensional analysis technique typically presented in beginning chemistry courses. The authors studied 73 beginning chemistry students using the typical dimensional analysis technique and the variation technique. Student quantitative problem-solving performance was evaluated. Students using the variation technique scored significantly better (18.3 of 20 points) than those using the typical technique. The variation technique may enhance mathematics competency and problem-solving ability in both education and practice. [J Nurs Educ. 2017;56(1):22-26.] Copyright 2017, SLACK Incorporated.
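
    The factor-label (dimensional analysis) method itself is mechanical enough to express in code. A sketch of one worked conversion follows; the NaOH example is our own illustration, not a problem from the study:

```python
# Dimensional analysis (factor-label) as a chain of conversion factors:
# each factor equals 1, and units cancel like algebraic symbols.
# Example: how many grams of NaOH are in 0.250 L of 0.100 mol/L solution?
volume_L = 0.250
molarity = 0.100          # mol NaOH per L of solution
molar_mass = 40.00        # g NaOH per mol NaOH (22.99 + 16.00 + 1.01)

#   0.250 L  x  (0.100 mol / 1 L)  x  (40.00 g / 1 mol)  =  1.00 g
grams = volume_L * molarity * molar_mass
```

    Each multiplication cancels one unit (L, then mol), leaving grams; writing the chain explicitly is exactly the discipline the technique teaches.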

  8. Robust Adaptive Lasso method for parameter's estimation and variable selection in high-dimensional sparse models.

    Science.gov (United States)

    Wahid, Abdul; Khan, Dost Muhammad; Hussain, Ijaz

    2017-01-01

    High dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue, different penalized regression procedures have been introduced in the literature, but these methods cannot cope with the problem of outliers and leverage points in heavy-tailed high dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed, which is based on a Pearson-residual weighting scheme. The weight function determines the compatibility of each observation and downweights those that are inconsistent with the assumed model. It is observed that the RAL estimator can correctly select the covariates with non-zero coefficients and can estimate parameters simultaneously, not only in the presence of influential observations, but also in the presence of high multicollinearity. We also discuss the model selection oracle property and the asymptotic normality of the RAL. Simulation findings and real data examples also demonstrate the better performance of the proposed penalized regression approach.

  9. Two-dimensional liquid chromatography and its application in traditional Chinese medicine analysis and metabonomic investigation.

    Science.gov (United States)

    Li, Zheng; Chen, Kai; Guo, Meng-zhe; Tang, Dao-quan

    2016-01-01

    Two-dimensional liquid chromatography has become an attractive analytical tool for the separation of complex samples due to its enhanced selectivity, peak capacity, and resolution compared with one-dimensional liquid chromatography. Recently, more attention has been drawn on the application of this separation technique in studies concerning traditional Chinese medicines, metabonomes, proteomes, and other complex mixtures. In this review, we aim to examine the application of two-dimensional liquid chromatography in traditional Chinese medicine analysis and metabonomic investigation. The classification and evaluation indexes were first introduced. Then, various switching methods were summarized when used in an on-line two-dimensional liquid chromatography system. Finally, the applications of this separation technique in traditional Chinese medicine analysis and metabonomic investigation were discussed on the basis of specific studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Structural analysis of high-dimensional basins of attraction

    OpenAIRE

    Martiniani, Stefano; Schrenk, K. Julian; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan

    2016-01-01

    We propose an efficient Monte Carlo method for the computation of the volumes of high-dimensional bodies with arbitrary shape. We start with a region of known volume within the interior of the manifold and then use the multistate Bennett acceptance-ratio method to compute the dimensionless free-energy difference between a series of equilibrium simulations performed within this object. The method produces results that are in excellent agreement with thermodynamic integration, as well as a dire...
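
    For contrast with the free-energy approach described above, the naive hit-or-miss Monte Carlo volume estimator (the method such acceptance-ratio schemes are designed to beat in high dimension) fits in a few lines; the choice of the unit ball and sample count is our own illustration:

```python
import math
import numpy as np

def ball_volume_mc(d, n_samples=100_000, seed=0):
    """Hit-or-miss Monte Carlo: fraction of points from the cube [-1,1]^d
    that land inside the unit d-ball, times the cube volume 2**d."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, d))
    inside = (pts ** 2).sum(axis=1) <= 1.0
    return inside.mean() * 2.0 ** d

v_est = ball_volume_mc(5)
v_exact = math.pi ** 2.5 / math.gamma(3.5)   # exact volume of the 5-ball
```

    The hit rate collapses exponentially with dimension (for d = 20 fewer than one sample in forty million lands inside the ball), which is precisely why equilibrium-simulation methods such as the multistate Bennett acceptance-ratio scheme are needed for high-dimensional basins.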

  11. A Fibonacci collocation method for solving a class of Fredholm–Volterra integral equations in two-dimensional spaces

    Directory of Open Access Journals (Sweden)

    Farshid Mirzaee

    2014-06-01

    Full Text Available In this paper, we present a numerical method for solving two-dimensional Fredholm–Volterra integral equations (F-VIE). The method reduces the solution of these integral equations to the solution of a linear system of algebraic equations. The existence and uniqueness of the solution and the error analysis of the proposed method are discussed. The method is computationally very simple and attractive. Finally, numerical examples illustrate the efficiency and accuracy of the method.
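
    The reduction of an integral equation to a linear algebraic system can be demonstrated with a simple collocation (Nystrom) discretization; the one-dimensional Fredholm kernel, right-hand side, and trapezoid quadrature below are our own illustrative choices, not the paper's Fibonacci basis:

```python
import numpy as np

# Solve u(x) = f(x) + integral over [0,1] of K(x,t) u(t) dt by collocating
# at quadrature nodes: the integral equation becomes (I - K*W) u = f.
n = 41
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)             # composite trapezoid weights
K = 0.5 * np.outer(t, t)                 # kernel K(x,t) = x*t/2
f = (5.0 / 6.0) * t                      # chosen so the exact solution is u(x)=x
A = np.eye(n) - K * w                    # broadcasting applies w_j column-wise
u = np.linalg.solve(A, f)
err = np.max(np.abs(u - t))              # compare with the exact solution
```

    The residual is limited only by the quadrature error, illustrating the abstract's point that the whole problem collapses to one linear solve once a collocation basis is fixed.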

  12. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution in dependence of data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...

  13. Thermal analysis of two-dimensional structures in fire

    Directory of Open Access Journals (Sweden)

    I. Pierin

    Full Text Available Structural materials such as reinforced concrete, steel, wood and aluminum have their mechanical properties degraded when heated. In fire, structures are subject to elevated temperatures and consequently the load capacity of the structural elements is reduced. The Brazilian and European standards prescribe the minimum dimensions for structural elements to have adequate bearing capacity in fire. However, several structural checks are not covered by the methods provided in the standards. In these situations, knowledge of the temperature distribution inside structural elements as a function of exposure time is required. The aim of this paper is to present software developed by the authors called ATERM. The software performs transient thermal analysis of two-dimensional structures. The structure may be formed of any material, and heating is prescribed by means of a curve of temperature versus time. Data input and visualization of the results are performed through the GiD software. Several examples are compared with the Super TempCalc and ANSYS software. Some conclusions and recommendations about thermal analysis are presented.
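
    The kind of transient analysis such a program performs can be sketched as an explicit finite-difference scheme for the 2D heat equation (a generic illustration with invented material data and a constant fire temperature, not the ATERM implementation):

```python
import numpy as np

# Explicit (FTCS) finite differences for 2D transient conduction,
# dT/dt = alpha * (d2T/dx2 + d2T/dy2), on a square section whose boundary
# follows a prescribed fire temperature (held constant here for brevity).
n, alpha, h = 21, 1e-6, 0.01             # grid, diffusivity (m^2/s), spacing (m)
dt = 0.2 * h * h / alpha                 # stable: alpha*dt/h^2 <= 0.25
T = np.zeros((n, n))                     # initial temperature field, deg C
T_fire = 800.0                           # boundary (fire) temperature, deg C
for step in range(20000):
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_fire
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / (h * h)
    T[1:-1, 1:-1] += alpha * dt * lap    # march one time step
centre = T[n // 2, n // 2]
```

    With the stability bound respected, the scheme obeys a discrete maximum principle (no interior node can overshoot the fire temperature) and the centre of the section heats toward the boundary value, which is the qualitative behaviour a fire-engineering check needs from the temperature field.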

  14. Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis.

    Science.gov (United States)

    Sacha, Dominik; Zhang, Leishi; Sedlmair, Michael; Lee, John A; Peltonen, Jaakko; Weiskopf, Daniel; North, Stephen C; Keim, Daniel A

    2017-01-01

    Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a "human in the loop" process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.

  15. Dimensional analysis of acoustically propagated signals

    Science.gov (United States)

    Hansen, Scott D.; Thomson, Dennis W.

    1993-01-01

    Traditionally, long term measurements of atmospherically propagated sound signals have consisted of time series of multiminute averages. Only recently have continuous measurements with temporal resolution corresponding to turbulent time scales been available. With modern digital data acquisition systems we now have the capability to simultaneously record both acoustical and meteorological parameters with sufficient temporal resolution to examine in detail the relationships between fluctuating sound and the meteorological variables, particularly wind and temperature, which locally determine the acoustic refractive index. The atmospheric acoustic propagation medium can be treated as a nonlinear dynamical system, a kind of signal processor whose innards depend on thermodynamic and turbulent processes in the atmosphere. The atmosphere is an inherently nonlinear dynamical system; in fact one simple model of atmospheric convection, the Lorenz system, may well be the most widely studied of all dynamical systems. In this paper we report some results from applying methods used to characterize nonlinear dynamical systems to the study of acoustical signals propagated through the atmosphere. For example, we investigate whether or not it is possible to parameterize signal fluctuations in terms of fractal dimensions. For time series one such parameter is the limit capacity dimension. Nicolis and Nicolis were among the first to use this kind of method to study the properties of low-dimension global attractors.
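
    The limit capacity dimension mentioned above is the box-counting dimension: count the grid boxes of side eps occupied by the data and fit the slope of log N(eps) against log(1/eps). A minimal estimator, applied here to a synthetic curve rather than real acoustic data, looks like:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the capacity dimension as the slope of log N(eps)
    versus log(1/eps), where N(eps) counts occupied boxes of side eps."""
    counts = []
    for eps in scales:
        boxes = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# A smooth curve in the plane should have dimension close to 1
t = np.linspace(0.0, 1.0, 20000)
curve = np.column_stack([t, np.sin(2 * np.pi * t)])
dim = box_counting_dimension(curve, [0.1, 0.05, 0.02, 0.01, 0.005])
```

    A fluctuating signal whose estimated dimension saturates at a low, non-integer value would suggest low-dimensional deterministic structure of the kind the paper looks for in propagated sound.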

  16. Analysis of multi-dimensional confocal images

    International Nuclear Information System (INIS)

    Samarabandu, J.K.; Acharya, R.; Edirisinghe, C.D.; Cheng, P.C.; Lin, T.H.

    1991-01-01

    In this paper, a confocal image understanding system is developed which uses the blackboard model of problem solving to achieve computerized identification and characterization of confocal fluorescent images (serial optical sections). The system is capable of identifying a large percentage of structures (e.g. cell nucleus) in the presence of background noise and non-specific staining of cellular structures. The blackboard architecture provides a convenient framework within which a combination of image processing techniques can be applied to successively refine the input image. The system is organized to find the surfaces of highly visible structures first, using simple image processing techniques, and then to adjust and fill in the missing areas of these object surfaces using external knowledge and a number of more complex image processing techniques when necessary. As a result, the image analysis system is capable of automatically obtaining morphometrical parameters such as surface area, volume and position of structures of interest.

  17. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
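
As a rough illustration of the TSDCA idea described above (orthogonal compression followed by class-conditional Gaussian-mixture posteriors), one can substitute plain PCA for the learned orthogonal transformations and fit one Gaussian mixture per class in the reduced space. This is a hedged stand-in, not the TSDCN, which learns the reduction and classifier jointly by backpropagation; the synthetic data and all parameters here are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two classes of 100-dimensional patterns standing in for time-series frames.
X0 = rng.normal(0.0, 1.0, size=(200, 100))
X1 = rng.normal(0.5, 1.0, size=(200, 100))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Orthogonal compression (PCA stands in for the learned transforms).
Z = PCA(n_components=5, random_state=0).fit_transform(X)

# One Gaussian mixture per class in the reduced space; classify by
# comparing per-class log-likelihoods (equal priors assumed).
gmms = [GaussianMixture(n_components=2, random_state=0).fit(Z[y == c]) for c in (0, 1)]
ll = np.column_stack([g.score_samples(Z) for g in gmms])
pred = ll.argmax(axis=1)
acc = (pred == y).mean()
```

The point of the sketch is the pipeline shape, reduce then score posteriors, rather than the particular reduction used.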

  18. The multi-dimensional analysis of social exclusion

    OpenAIRE

    Levitas, Ruth; Pantazis, Christina; Fahmy, Eldin; Gordon, David; Lloyd, Eva; Patsios, Demy

    2007-01-01

    The purpose of this project was to review existing sources on multi-dimensional disadvantage or severe forms of social exclusion characterised as ‘deep exclusion’; to recommend possibilities for secondary analysis of existing data sets to explore the dynamics of ‘deep exclusion’; to identify any relevant gaps in the knowledge base; and to recommend research strategies for filling such gaps.

  19. Two-dimensional hazard estimation for longevity analysis

    DEFF Research Database (Denmark)

    Fledelius, Peter; Guillen, M.; Nielsen, J.P.

    2004-01-01

    We investigate developments in Danish mortality based on data from 1974-1998 working in a two-dimensional model with chronological time and age as the two dimensions. The analyses are done with non-parametric kernel hazard estimation techniques. The only assumption is that the mortality surface i...... for analysis of economic implications arising from mortality changes....

  20. Two- and three-dimensional models for analysis of optical absorption

    Indian Academy of Sciences (India)

    Unknown

    Goldberg et al 1975; Kam and Parkinson 1982; Baglio et al 1982, 1983; Oritz 1995; Li et al 1996) has been carried out on WS2, there is no detailed analysis of the absorption spectra obtained from the single crystals of WS2 on the basis of two- and three-dimensional models. We have therefore carried out this study and the.

  1. Dimensional Analysis Keeping Track of Length, Mass, Time

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 1, Issue 11, November 1996, pp 29-41. General Article by Nagesha N Rao.

  2. Dimensional analysis of flame angles versus wind speed

    Science.gov (United States)

    Robert E. Martin; Mark A. Finney; Domingo M. Molina; David B. Sapsis; Scott L. Stephens; Joe H. Scott; David R. Weise

    1991-01-01

    Dimensional analysis has potential to help explain and predict physical phenomena, but has been used very little in studies of wildland fire behavior. By combining variables into dimensionless groups, the number of variables to be handled and the number of experiments to be run are greatly reduced. A low velocity wind tunnel was constructed, and methyl, ethyl, and isopropyl...
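
The dimensionless groups that such an analysis produces can be read off from the null space of the dimension matrix (the Buckingham Pi theorem). A minimal sketch follows, with an assumed variable set of wind speed U, gravitational acceleration g, and flame length l; these are illustrative and not necessarily the variables used in the study.

```python
from sympy import Matrix

# Dimension matrix: rows are exponents of the base dimensions (length,
# time); columns are the variables U [L T^-1], g [L T^-2], l [L].
dim_matrix = Matrix([
    [1, 1, 1],    # length exponents of U, g, l
    [-1, -2, 0],  # time exponents of U, g, l
])

# Each null-space vector gives the exponents of one dimensionless group.
pi_groups = dim_matrix.nullspace()
v = pi_groups[0]   # exponents (a, b, c) such that U^a g^b l^c is dimensionless
```

Here the single group recovered is proportional to g*l/U**2, an inverse Froude number, which is the classic dimensionless group relating buoyancy to wind in flame-tilt problems.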

  3. Weakly nonlinear analysis of two dimensional sheared granular flow

    NARCIS (Netherlands)

    Saitoh, K.; Hayakawa, Hisao

    2011-01-01

    Weakly nonlinear analysis of a two-dimensional sheared granular flow is carried out under the Lees-Edwards boundary condition. We derive the time-dependent Ginzburg–Landau equation for a disturbance amplitude starting from a set of granular hydrodynamic equations and discuss the bifurcation of the

  4. The Topology Optimization of Three-dimensional Cooling Fins by the Internal Element Connectivity Parameterization Method

    International Nuclear Information System (INIS)

    Yoo, Sung Min; Kim, Yoon Young

    2007-01-01

    This work is concerned with the topology optimization of three-dimensional cooling fins or heat sinks. Motivated by the earlier success of the Internal Element Connectivity Parameterization (I-ECP) method in two-dimensional problems, the extension of I-ECP to three-dimensional problems is carried out. The main effort was to maintain the numerically trouble-free characteristics of I-ECP in full three-dimensional problems; a serious numerical problem appearing in thermal topology optimization is erroneous temperature undershooting. The effectiveness of the present implementation was checked through the design optimization of three-dimensional fins.

  5. Finite element analysis of one-dimensional hydrodynamic ...

    African Journals Online (AJOL)

    In this research work, we consider the one dimensional hydrodynamic dispersion of a reactive solute in electroosmotic flow. We present results demonstrating the utility of finite element methods to simulate and visualize hydrodynamic dispersion in the electroosmotic flow. From examination of concentration profile, effective ...

  6. Simulation of Thermal Stratification in BWR Suppression Pools with One Dimensional Modeling Method

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Ling Zou; Hongbin Zhang

    2014-01-01

    The suppression pool in a boiling water reactor (BWR) plant not only is the major heat sink within the containment system, but also provides the major emergency cooling water for the reactor core. In several accident scenarios, such as a loss-of-coolant accident and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the Emergency Core Cooling System and Reactor Core Isolation Cooling System pumps that draw cooling water back to the core. Current safety analysis codes use zero dimensional (0-D) lumped parameter models to calculate the energy and mass balance in the pool; therefore, they have large uncertainties in the prediction of scenarios in which stratification and mixing are important. While three-dimensional (3-D) computational fluid dynamics (CFD) methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, resulting in a long simulation time. For mixing in stably stratified large enclosures, the BMIX++ code (Berkeley mechanistic MIXing code in C++) has been developed to implement a highly efficient analysis method for stratification where the ambient fluid volume is represented by one-dimensional (1-D) transient partial differential equations and substructures (such as free or wall jets) are modeled with 1-D integral models. This allows very large reductions in computational effort compared to multi-dimensional CFD modeling. One heat-up experiment performed at the Finland POOLEX facility, which was designed to study phenomena relevant to Nordic design BWR suppression pool including thermal stratification and mixing, is used for
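
The 1-D ambient-fluid treatment described above can be illustrated, in a very reduced form, by an explicit finite-difference solution of a one-dimensional transient diffusion equation for the vertical pool temperature profile. This toy model is not BMIX++ and omits the jet substructure models entirely; every number below is illustrative.

```python
import numpy as np

# 1-D transient diffusion in a vertical water column, explicit scheme.
nz, dz = 50, 0.1            # 5 m deep pool discretized into 10 cm cells
alpha = 1.0e-5              # assumed effective (mixing-enhanced) diffusivity, m^2/s
dt = 0.4 * dz**2 / alpha    # time step satisfying the explicit stability limit
T = np.full(nz, 300.0)      # initially uniform 300 K
T[-5:] = 330.0              # hot layer near the surface: thermal stratification

for _ in range(500):
    Tn = T.copy()
    # Central-difference update of the interior cells.
    Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T = Tn                  # the two boundary cells are held fixed for simplicity
```

Because the stability ratio alpha*dt/dz**2 is kept at 0.4 (below 0.5), the discrete maximum principle holds and the profile relaxes monotonically between the initial extremes, which is the qualitative behavior a stratified-pool model must reproduce.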

  7. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    The present thesis considers data analysis of problems with many features in relation to the number of observations (large p, small n problems). The theoretical considerations for such problems are outlined including the curses and blessings of dimensionality, and the importance of dimension...... reduction. In this context the trade off between a rich solution which answers the questions at hand and a simple solution which generalizes to unseen data is described. For all of the given data examples labelled output exists and the analyses are therefore limited to supervised settings. Three novel...... classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...

  8. Methods of nonlinear analysis

    CERN Document Server

    Bellman, Richard Ernest

    1970-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

  9. Two-dimensional analysis of motion artifacts, including flow effects

    International Nuclear Information System (INIS)

    Litt, A.M.; Brody, A.S.; Spangler, R.A.; Scott, P.D.

    1990-01-01

    The effects of motion on magnetic resonance images have been theoretically analyzed for the case of a point-like object in simple harmonic motion and for other one-dimensional trajectories. The authors of this paper extend this analysis to a generalized two-dimensional magnetization with an arbitrary motion trajectory. The authors provide specific solutions for the clinically relevant cases of the cross-sections of cylindrical objects in the body, such as the aorta, which has a roughly one-dimensional, simple harmonic motion during respiration. By extending the solution to include inhomogeneous magnetizations, the authors present a model which allows the effects of motion artifacts and flow artifacts to be analyzed simultaneously.

  10. The analysis of one-dimensional reactor kinetics benchmark computations

    International Nuclear Information System (INIS)

    Sidell, J.

    1975-11-01

    During March 1973 the European American Committee on Reactor Physics proposed a series of simple one-dimensional reactor kinetics problems, with the intention of comparing the relative efficiencies of the numerical methods employed in various codes, which are currently in use in many national laboratories. This report reviews the contributions submitted to this benchmark exercise and attempts to assess the relative merits and drawbacks of the various theoretical and computer methods. (author)

  11. Dimensionality Reduction of Hyperspectral Image with Graph-Based Discriminant Analysis Considering Spectral Similarity

    Directory of Open Access Journals (Sweden)

    Fubiao Feng

    2017-03-01

    Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes the typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with the intrinsic spectral variation of a material, which may result in an inappropriate graph representation. In this work, a graph-based discriminant analysis with spectral similarity measurement (denoted GDA-SS) is proposed, which fully accounts for how the spectral curves change across bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).
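
A minimal sketch of the LPP baseline this record compares against, a k-nearest-neighbour affinity matrix with heat-kernel (Euclidean) weights followed by the generalized eigenproblem for the projection directions, might look as follows. This illustrates the method GDA-SS improves upon, not GDA-SS itself; the data and all parameter values are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=None):
    """Minimal Locality Preserving Projection (LPP) sketch.
    X: (n_samples, n_features). Returns a (n_features, n_components)
    projection matrix."""
    n, p = X.shape
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T    # squared distances
    np.fill_diagonal(d2, np.inf)
    if t is None:
        t = np.median(d2[np.isfinite(d2)])            # heat-kernel width
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]                  # k nearest neighbours
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)         # heat-kernel weights
    W = np.maximum(W, W.T)                            # symmetric affinity
    D = np.diag(W.sum(axis=1))
    L = D - W                                         # graph Laplacian
    # Directions a minimizing a^T X^T L X a subject to a^T X^T D X a = 1.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(p)                # regularized for stability
    _, vecs = eigh(A, B)                              # ascending eigenvalues
    return vecs[:, :n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))    # stand-in for per-pixel spectra
P = lpp(X)
Y = X @ P                         # 2-D embedding of the 100 samples
```

GDA-SS, as described in the abstract, replaces the Euclidean distance in the affinity construction with a spectral-similarity measure; the rest of the pipeline is of this general shape.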

  12. Three-dimensional dynamic response modeling of floating nuclear plants using finite element methods

    International Nuclear Information System (INIS)

    Johnson, H.W.; Vaish, A.K.; Porter, F.L.; McGeorge, R.

    1976-01-01

    A modelling technique which can be used to obtain the dynamic response of a floating nuclear plant (FNP) moored in an artificial basin is presented. Hydrodynamic effects of the seawater in the basin have a significant impact on the response of the FNP and must be included. A three-dimensional model of the platform and mooring system (using beam elements) is used, with the hydrodynamic effects represented by added mass and damping. For an essentially square plant in close proximity to the site structures, the three-dimensional nature of the basin must be considered in evaluating the added mass and damping. However, direct solutions for hydrodynamic effects with complex basin geometry are not, as yet, available. A method for estimating these effects from planar finite element analysis is developed. (Auth.)

  13. Applying Clustering to Statistical Analysis of Student Reasoning about Two-Dimensional Kinematics

    Science.gov (United States)

    Springuel, R. Padraic; Wittman, Michael C.; Thompson, John R.

    2007-01-01

    We use clustering, an analysis method not presently common to the physics education research community, to group and characterize student responses to written questions about two-dimensional kinematics. Previously, clustering has been used to analyze multiple-choice data; we analyze free-response data that includes both sketches of vectors and…

  14. SOLUTION TO THE PROBLEMS OF APPLIED MECHANICS USING THE SIMILARITY THEORY AND DIMENSIONAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Dzhinchvelashvili Guram Avtandilovich

    2016-06-01

    In the present work the author announces a book by Professor G.S. Vardanyan in which the foundations of similarity theory and dimensional analysis are systematically set out. It discusses the capabilities of these methods and their application in solving not only fundamental problems of solid mechanics but also applied tasks, in particular complicated problems of engineering seismology.

  15. Three dimensional computerized microtomography in the analysis of sculpture.

    Science.gov (United States)

    Badde, Aurelia; Illerhaus, Bernhard

    2008-01-01

    The Alte Nationalgalerie, Staatliche Museen zu Berlin (SMB) and the Federal Institute for Materials Research and Testing (BAM) tested three-dimensional computerized microtomography (3D-microCT), a new flat-panel-detector computed tomography (CT) system at the BAM with extended energy range, high-voltage X-ray tubes (330 and 225 kV), micrometer focal spot size, micrometer resolution and enlarged object size (up to 70 cm diameter), for examining plaster statues. The high spatial and density resolution of the tomograph enables detailed insights into the individual work processes behind the investigated cast plaster statues. While initiated in support of the conservation process, computed tomography (CT) analysis has helped reveal relative chronologies within the series of cast works of art, thus serving as a valuable tool in the art-historical appraisal of the oeuvres. The image-processing systems visualize voids and cracks within, and cuts through, the original cast works. Internal structures, armoring, sculptural reworking as well as restorative interventions are virtually reconstructed. The authors are currently applying the 3D-microCT systems at the BAM to the detection of defects in Carrara marble sculpture. Microcracks, fractures, and material flaws are visualized at spatial resolutions down to 10 µm. Computerized reconstruction of ultrasound tomography is verified by analyzing correlations in the results obtained from the complementary application of these two non-destructive testing (NDT) methods of diagnosis.

  16. A flexible tactile sensor calibration method based on an air-bearing six-dimensional force measurement platform

    Science.gov (United States)

    Huang, Bin

    2015-07-01

    A number of common issues related to the process of flexible tactile sensor calibration are discussed in this paper, and an estimate of the accuracy of classical calibration methods, as represented by a weight-pulley device, is presented. A flexible tactile sensor calibration method that is based on a six-dimensional force measurement is proposed on the basis of a theoretical analysis. A high-accuracy flexible tactile sensor calibration bench based on the air-bearing six-dimensional force measurement principle was developed to achieve a technically challenging measurement accuracy of 2% full scale (FS) for three-dimensional (3D) flexible tactile sensor calibration. The experimental results demonstrate that the accuracy of the air-bearing six-dimensional force measurement platform can reach 0.2% FS. Thus, the system satisfies the 3D flexible tactile sensor calibration requirement of 2% FS.

  17. Multi-dimensional Code Development for Safety Analysis of LMR

    Energy Technology Data Exchange (ETDEWEB)

    Ha, K. S.; Jeong, H. Y.; Kwon, Y. M.; Lee, Y. B

    2006-08-15

    A liquid metal reactor loaded with metallic fuel has an inherent safety mechanism due to several negative reactivity feedbacks. Although this feature was demonstrated through experiments in the EBR-II, no computer program until now has analyzed it exactly because of the complexity of the reactivity feedback mechanism. A detailed multi-dimensional program was developed through the International Nuclear Energy Research Initiative (INERI) from 2003 to 2005. This report covers the numerical coupling of the multi-dimensional program with the SSC-K code, which is used for the safety analysis of liquid metal reactors at KAERI. The coupled code has been verified by comparing its results with those of ANL's SAS-SASSYS code for the UTOP, ULOF, and ULOHS events applied to the safety analysis of KALIMER-150.

  18. Canonical and symplectic analysis for three dimensional gravity without dynamics

    International Nuclear Information System (INIS)

    Escalante, Alberto; Osmart Ochoa-Gutiérrez, H.

    2017-01-01

    In this paper a detailed Hamiltonian analysis of three-dimensional gravity without dynamics proposed by V. Hussain is performed. We report the complete structure of the constraints and the Dirac brackets are explicitly computed. In addition, the Faddeev–Jackiw symplectic approach is developed; we report the complete set of Faddeev–Jackiw constraints and the generalized brackets, then we show that the Dirac and the generalized Faddeev–Jackiw brackets coincide to each other. Finally, the similarities and advantages between Faddeev–Jackiw and Dirac’s formalism are briefly discussed. - Highlights: • We report the symplectic analysis for three dimensional gravity without dynamics. • We report the Faddeev–Jackiw constraints. • A pure Dirac’s analysis is performed. • The complete structure of Dirac’s constraints is reported. • We show that symplectic and Dirac’s brackets coincide to each other.

  20. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
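
The numerical iterative reverse solution described above can be sketched generically as a Gauss-Newton loop: predict the tooth-surface deviations with a forward model, estimate the sensitivity of the surface to the machine-tool settings, and update the settings by a least-squares step. The forward model below is a toy stand-in, not the hypoid-gear generation equations, and all names and numbers are illustrative.

```python
import numpy as np

def reverse_correct(forward, settings0, target, n_iter=20, h=1e-6):
    """Generic reverse-correction loop (Gauss-Newton style):
    adjust 'settings' so the predicted surface matches 'target'.
    forward: settings -> sampled surface values."""
    s = np.asarray(settings0, dtype=float)
    for _ in range(n_iter):
        r = forward(s) - target                       # surface deviations
        # Finite-difference Jacobian of the surface w.r.t. the settings.
        J = np.column_stack([
            (forward(s + h * e) - forward(s)) / h
            for e in np.eye(len(s))
        ])
        s = s - np.linalg.pinv(J) @ r                 # least-squares update
    return s

# Toy forward model standing in for the tooth-surface generation equations.
def toy_surface(settings):
    a, b = settings
    u = np.linspace(0.0, 1.0, 50)                     # surface sample points
    return a * u + b * u**2

target = toy_surface([1.5, -0.7])                     # "measured" surface
s_hat = reverse_correct(toy_surface, [0.0, 0.0], target)
```

Because the deviations enter through a least-squares step, the same loop tolerates more measurement points than settings, which matches the over-determined situation of probing many points on a tooth flank.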

  1. Analysis techniques for two-dimensional infrared data

    Science.gov (United States)

    Winter, E. M.; Smith, M. C.

    1978-01-01

    In order to evaluate infrared detection and remote sensing systems, it is necessary to know the characteristics of the observational environment. For both scanning and staring sensors, the spatial characteristics of the background may be more of a limitation to the performance of a remote sensor than system noise. This limitation is the so-called spatial clutter limit and may be important in the design of many Earth-application and surveillance sensors. The data used in this study are two-dimensional radiometric data obtained as part of the continuing NASA remote sensing programs. Typical data sources are the Landsat multi-spectral scanner (1.1 micrometers), the airborne heat capacity mapping radiometer (10.5-12.5 micrometers) and various infrared data sets acquired by low-altitude aircraft. Techniques used for the statistical analysis of one-dimensional infrared data, such as power spectral density (PSD), exceedance statistics, etc., are investigated for two-dimensional applicability. Also treated are two-dimensional extensions of these techniques (2D PSD, etc.), and special techniques developed for the analysis of 2D data.
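
The 2D PSD extension mentioned above can be sketched as a 2-D FFT followed by a radial average, which collapses the two-dimensional spectrum back to a one-dimensional curve for comparison with the 1-D clutter statistics. The random image below merely stands in for radiometric background data.

```python
import numpy as np

def psd_2d(image):
    """Two-dimensional power spectral density: squared magnitude of the
    2-D FFT of the mean-removed image, zero frequency at the centre."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    return np.abs(f)**2 / image.size

def radial_average(psd):
    """Average the 2-D PSD over annuli of integer radius, producing a
    1-D spectrum versus radial spatial frequency."""
    ny, nx = psd.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)    # guard empty annuli
    return sums / counts

img = np.random.default_rng(1).normal(size=(128, 128))
spectrum = radial_average(psd_2d(img))
```

For white noise the radially averaged spectrum is flat; spatially correlated backgrounds (the clutter of interest) show power concentrated at low radial frequencies.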

  2. Are Three-Dimensional Monitors More Appropriate Than Two-Dimensional Monitors in the Three-Dimensional Analysis?

    Science.gov (United States)

    Ahn, Jaemyung; Hong, Jongrak

    2017-01-01

    In orthognathic surgery, three-dimensional (3D) program-based analysis of 3D reconstructions of computed tomography (CT) images is commonly used, with the images viewed on a monitor. The authors compared the coordinates of facial landmarks on images in a 3D program displayed on a two-dimensional (2D) (standard) or 3D monitor. Facial bone CT data from 30 patients were reconstructed in 3D. Four researchers identified 33 facial landmarks, 3 times each on 2D and 3D monitors, for each patient, by their x-, y-, and z-coordinates. The time taken to complete these identifications was measured. For each set of coordinates, the average intraclass coefficient was >0.8 for 2D and 3D analyses, as well as among the 4 readers. It took an average of 2 minutes 46 seconds to identify the landmarks on the 2D monitor, compared with 2 minutes 25 seconds on the 3D monitor. The variance of individual coordinates differed when measured on the 2D or 3D monitor. The landmarks affected were located near the median region of the facial area, and are important for setting the reference sagittal plane during diagnosis for orthognathic surgery. Therefore, identifying facial landmarks using 3D monitors may be helpful for conducting accurate facial diagnoses.

  3. A method for measuring post-extraction alveolar dimensional changes with volumetric computed tomography.

    Science.gov (United States)

    Bontá, Hernán; Galli, Federico G; Caride, Facundo; Carranza, Nelson

    2012-01-01

    The aim of this study is to present a predictable method for evaluating dimensional changes in the alveolar ridge through cone beam computed tomography (CT). Twenty subjects with an indication for extraction of a single-rooted tooth were selected for this preliminary report, which is part of a larger ongoing investigation. After extraction, two CT scans were performed: the first (TC1) within 24 hours post-extraction and the second (TC2) 6 months later. A radiographic guide with a radiopaque element placed along the tooth axis was developed to locate the same plane of reference in two different CT scans. For each patient, backtrack analysis was performed in order to establish the reproducibility error of a predetermined point in space between two CT scans. Briefly, an anatomical landmark was selected and its coordinates relative to the radiopaque marker were recorded. One week later, the coordinates were followed backwards in the same CT scan to obtain the position where the reference point should be located. A similar process was carried out between two different CT scans taken 6 months apart. The distance between the anatomical reference and the obtained point of position was calculated to establish the accuracy of the method. Additionally, a novel method for evaluating dimensional changes of the alveolus after tooth extraction is presented. The backtrack analysis determined an average within-examiner discrepancy between both measurements from the same CT scan of 0.19 mm (SD ± 0.05). With the method presented herein, a reference point in a CT scan can be accurately backtracked and located in a second CT scan taken six months later. Taken together, these results open the possibility of calculating dimensional changes that occur in the alveolar ridge over time, such as post-extraction alveolar resorption, or the bone volume gained after different augmentation procedures.

  4. Two-dimensional and three-dimensional left ventricular deformation analysis: a study in competitive athletes.

    Science.gov (United States)

    D'Ascenzi, Flavio; Solari, Marco; Mazzolai, Michele; Cameli, Matteo; Lisi, Matteo; Andrei, Valentina; Focardi, Marta; Bonifazi, Marco; Mondillo, Sergio

    2016-12-01

    Two-dimensional (2D) speckle-tracking echocardiography (STE) has clarified the functional adaptations accompanying the morphological features of the 'athlete's heart'. However, 2D STE has some limitations, potentially overcome by three-dimensional (3D) STE. Unfortunately, discrepancies between 2D and 3D STE have been described. We therefore sought to evaluate whether dimensional and functional differences exist between athletes and controls and whether 2D and 3D left ventricular (LV) strains differ in athletes. One hundred sixty-one individuals (91 athletes, 70 controls) were analysed. Athletes were members of professional sports teams. 2D and 3D echocardiography and STE were used to assess LV size and function. Bland-Altman analysis was used to estimate the level of agreement between 2D and 3D STE. Athletes had greater 2D- and 3D-derived LV dimensions and LV mass than controls. Three-dimensional longitudinal and circumferential strain values were lower (p < 0.0001 for both) while 3D radial strain was greater, as compared with 2D STE (p < 0.001). Bland-Altman plots demonstrated the presence of an absolute systematic error between 2D and 3D STE in the analysis of LV myocardial deformation. 3D STE is a useful and feasible technique for the assessment of myocardial deformation with the potential to overcome the limitations of 2D imaging. However, discrepancies exist between 2D- and 3D-derived strain, suggesting that 2D and 3D STE are not interchangeable.

  5. TMCC: a transient three-dimensional neutron transport code by the direct simulation method - 222

    International Nuclear Information System (INIS)

    Shen, H.; Li, Z.; Wang, K.; Yu, G.

    2010-01-01

    A direct simulation method (DSM) is applied to solve transient three-dimensional neutron transport problems. DSM is based on the Monte Carlo method, and can be considered an application of the Monte Carlo method to this specific type of problem. In this work, the transient neutronics problem is solved by simulating the dynamic behaviors of neutrons and precursors of delayed neutrons during the transient process. DSM avoids various approximations that are necessary in other methods, so it is precise and flexible with respect to geometric configurations, material compositions and energy spectra. In this paper, the theory of DSM is introduced first, and the numerical results obtained with the new transient analysis code, named TMCC (Transient Monte Carlo Code), are presented. (authors)
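
The direct-simulation idea, tracking the stochastic behavior of neutron and delayed-neutron-precursor populations rather than solving averaged equations, can be caricatured with a toy discrete-generation population model with one delayed group. This is in no way the TMCC code (which is a full 3-D transport simulation); every parameter below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stochastic simulation of coupled neutron/precursor populations.
k = 0.99        # multiplication factor per generation (slightly subcritical)
beta = 0.0065   # fraction of fission neutrons born via delayed precursors
lam = 0.08      # precursor decay probability per generation step

n, c = 10000, 0         # initial neutron and precursor populations
history = []
for step in range(200):
    fission_neutrons = rng.poisson(n * k)         # offspring of this generation
    delayed = rng.binomial(fission_neutrons, beta)  # stored as precursors
    prompt = fission_neutrons - delayed
    decays = rng.binomial(c, lam)                 # precursors decaying now
    n = prompt + decays                           # next-generation neutrons
    c = c + delayed - decays                      # surviving precursors
    history.append(n)
```

In a subcritical configuration like this one, the neutron population decays, but far more slowly than the prompt generation time alone would suggest, because the precursor reservoir keeps feeding neutrons back; capturing that coupling directly is the point of simulating both species.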

  6. Eigenanatomy: sparse dimensionality reduction for multi-modal medical image analysis.

    Science.gov (United States)

    Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B

    2015-02-01

    Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). These dimensionality reduction techniques, however, consist of contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code is publicly available. Copyright © 2014 Elsevier Inc. All rights reserved.
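    The PCA baseline that the sparse eigenanatomy method is compared against can be sketched with plain NumPy on synthetic data (the matrix sizes below are arbitrary stand-ins for subjects × voxels):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))   # 50 synthetic "subjects" x 200 "voxels"
Xc = X - X.mean(axis=0)          # center each voxel across subjects

# PCA via SVD: rows of Vt are the principal axes (dense eigenvectors,
# hence the interpretability problem the abstract describes)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
scores = Xc @ Vt[:k].T           # low-dimensional representation
explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # variance captured by k PCs
```

    Each row of `Vt` mixes contributions from every "voxel"; sparse dimensionality reduction constrains these loadings to compact anatomical regions instead.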

  8. Temporal coupled mode analysis of one-dimensional magneto-photonic crystals with cavity structures

    Energy Technology Data Exchange (ETDEWEB)

    Saghirzadeh Darki, Behnam, E-mail: b.saghirzadeh@ec.iut.ac.ir; Zeidaabadi Nezhad, Abolghasem; Firouzeh, Zaker Hossein

    2016-12-01

    In this paper, we propose a time-dependent coupled mode analysis of one-dimensional magneto-photonic crystals including one, two or multiple defect layers. The performance of the structures, namely the total transmission, Faraday rotation and ellipticity, is obtained using the proposed method. The results of the developed analytic approach are verified by comparison with the results of the exact numerical transfer matrix method. Unlike the widely used numerical method, the proposed analytic method is promising for synthesis as well as analysis purposes. Moreover, the proposed method does not have the restrictions of the previously examined analytic methods. - Highlights: • A time-dependent coupled mode analysis is proposed for the cavity-type 1D MPCs. • Analytical formalism is presented for the single, double and multiple-defect MPCs. • Transmission, Faraday rotation and ellipticity are obtained using the proposed method. • The proposed analytic method has advantages over the previously examined methods.
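    The exact transfer matrix method used here for verification is itself straightforward to implement for a lossless 1D stack at normal incidence. The sketch below uses assumed refractive indices and a quarter-wave design wavelength, and omits the magneto-optic terms:

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one lossless dielectric layer (normal incidence)."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    """Multiply the layer matrices and extract the power transmittance."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                    + M[1, 0] + n_out * M[1, 1])
    return (n_out / n_in) * abs(t) ** 2

# Quarter-wave stack at lam0 (assumed indices 2.0 / 1.5, 8 periods):
# transmission should be strongly suppressed inside the stop band.
lam0 = 1.55
stack = [(2.0, lam0 / (4 * 2.0)), (1.5, lam0 / (4 * 1.5))] * 8
T_center = transmittance(stack, lam0)
```

    Inserting a defect layer into `stack` opens a narrow transmission resonance inside the stop band, which is the cavity behaviour the coupled mode analysis models analytically.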

  9. Impact of comprehensive two-dimensional gas chromatography with mass spectrometry on food analysis.

    Science.gov (United States)

    Tranchida, Peter Q; Purcaro, Giorgia; Maimone, Mariarosa; Mondello, Luigi

    2016-01-01

    Comprehensive two-dimensional gas chromatography with mass spectrometry has been on the separation-science scene for about 15 years. This three-dimensional method has had a great positive impact on various fields of research, among which food analysis is certainly at the forefront. The present critical review covers the use of comprehensive two-dimensional gas chromatography with mass spectrometry in the untargeted (general qualitative profiling and fingerprinting) and targeted analysis of food volatiles; attention is focused not only on its potential in such applications, but also on how recent advances in comprehensive two-dimensional gas chromatography with mass spectrometry will potentially be important for food analysis. Additionally, emphasis is devoted to the many instances in which straightforward gas chromatography with mass spectrometry is a sufficiently powerful analytical tool. Finally, possible future scenarios for comprehensive two-dimensional gas chromatography with mass spectrometry in food analysis are discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Robust L1-norm two-dimensional linear discriminant analysis.

    Science.gov (United States)

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

    In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Unlike the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique whose convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise because the L1-norm is used. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Electron tomography, three-dimensional Fourier analysis and colour prediction of a three-dimensional amorphous biophotonic nanostructure

    Science.gov (United States)

    Shawkey, Matthew D.; Saranathan, Vinodkumar; Pálsdóttir, Hildur; Crum, John; Ellisman, Mark H.; Auer, Manfred; Prum, Richard O.

    2009-01-01

    Organismal colour can be created by selective absorption of light by pigments or light scattering by photonic nanostructures. Photonic nanostructures may vary in refractive index over one, two or three dimensions and may be periodic over large spatial scales or amorphous with short-range order. Theoretical optical analysis of three-dimensional amorphous nanostructures has been challenging because these structures are difficult to describe accurately from conventional two-dimensional electron microscopy alone. Intermediate voltage electron microscopy (IVEM) with tomographic reconstruction adds three-dimensional data by using a high-power electron beam to penetrate and image sections of material sufficiently thick to contain a significant portion of the structure. Here, we use IVEM tomography to characterize a non-iridescent, three-dimensional biophotonic nanostructure: the spongy medullary layer from eastern bluebird Sialia sialis feather barbs. Tomography and three-dimensional Fourier analysis reveal that it is an amorphous, interconnected bicontinuous matrix that is appropriately ordered at local spatial scales in all three dimensions to coherently scatter light. The predicted reflectance spectra from the three-dimensional Fourier analysis are more precise than those predicted by previous two-dimensional Fourier analysis of transmission electron microscopy sections. These results highlight the usefulness, and obstacles, of tomography in the description and analysis of three-dimensional photonic structures. PMID:19158016
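    The core of the Fourier analysis, locating the dominant spatial frequency of a quasi-ordered structure from the peak of its power spectrum, can be sketched in 2D on a synthetic pattern (the paper works with 3D tomographic data):

```python
import numpy as np

# Synthetic quasi-ordered 2D pattern with a known spatial period of 8 pixels
# (an illustrative stand-in for a micrograph of the spongy layer).
N, period = 128, 8
x = np.arange(N)
X, Y = np.meshgrid(x, x)
img = np.sin(2 * np.pi * X / period) + np.sin(2 * np.pi * Y / period)

# Power spectrum; the strongest non-DC peak gives the dominant periodicity
power = np.abs(np.fft.fft2(img)) ** 2
power[0, 0] = 0.0                 # drop the DC term
ky, kx = np.unravel_index(np.argmax(power), power.shape)
fy, fx = min(ky, N - ky), min(kx, N - kx)   # fold negative-frequency aliases
k_r = np.hypot(fx, fy)                      # radial spatial frequency
peak_period = N / k_r                       # recovered period in pixels
```

    In the paper's 3D version, a ring (shell) of power at one radial frequency in all directions is what signals coherent scattering of a single hue from an amorphous structure.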

  12. Physical quantities and dimensional analysis: from mechanics to quantum gravity

    OpenAIRE

    Trancanelli, Diego

    2015-01-01

    Physical quantities and physical dimensions are among the first concepts encountered by students in their undergraduate career. In this pedagogical review, I will start from these concepts and, using the powerful tool of dimensional analysis, I will embark in a journey through various branches of physics, from basic mechanics to quantum gravity. I will also discuss a little bit about the fundamental constants of Nature, the so-called "cube of Physics", and the natural system of units.
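    A classic worked example of such dimensional analysis is combining G, ħ and c into the unique quantities with dimensions of length, time and mass, the Planck scale:

```python
import math

# SI values (rounded); dimensional analysis fixes the exponents uniquely:
# [G] = m^3 kg^-1 s^-2, [hbar] = kg m^2 s^-1, [c] = m s^-1
G = 6.674e-11
hbar = 1.055e-34
c = 2.998e8

l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_p = l_p / c                      # Planck time,   ~5.4e-44 s
m_p = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg
```

    These are the scales at which the "cube of Physics" corner combining gravity, quantum mechanics and relativity becomes relevant.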

  13. Analysis of two dimensional signals via curvelet transform

    Science.gov (United States)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared with the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. As a result of nonlinear approximation, the curvelet transform yields a matrix with fewer coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that the curvelet transform can be a very good tool for removing noise from images.
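    The nonlinear approximation idea, keeping only the m largest-magnitude transform coefficients, can be sketched with a plain FFT standing in for the curvelet transform (synthetic 1D signal, assumed noise level):

```python
import numpy as np

rng = np.random.default_rng(42)
# Smooth test signal plus noise (a stand-in for an interferometric fringe row)
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = signal + 0.3 * rng.normal(size=t.size)

# Nonlinear approximation: zero all but the m largest-magnitude coefficients
coeffs = np.fft.fft(noisy)
m = 8
idx = np.argsort(np.abs(coeffs))[:-m]   # indices of the smallest coefficients
coeffs[idx] = 0
denoised = np.fft.ifft(coeffs).real

err_noisy = np.mean((noisy - signal) ** 2)
err_denoised = np.mean((denoised - signal) ** 2)
```

    The transform that concentrates the signal's energy into the fewest coefficients wins this game; the paper's point is that curvelets do so better than wavelets for edge-dominated images.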

  14. Method development and application of offline two-dimensional liquid chromatography/quadrupole time-of-flight mass spectrometry-fast data directed analysis for comprehensive characterization of the saponins from Xueshuantong Injection.

    Science.gov (United States)

    Yang, Wenzhi; Zhang, Jingxian; Yao, Changliang; Qiu, Shi; Chen, Ming; Pan, Huiqin; Shi, Xiaojian; Wu, Wanying; Guo, Dean

    2016-09-05

    Xueshuantong Injection (XSTI), derived from Notoginseng total saponins, is a popular traditional Chinese medicine injection for the treatment of thrombus-related diseases. Current knowledge of its therapeutic basis is limited to five major saponins, whereas the minor ones are rarely investigated. We herein develop an offline two-dimensional liquid chromatography/quadrupole time-of-flight mass spectrometry-fast data directed analysis (offline 2D LC/QTOF-Fast DDA) approach to systematically characterize the saponins contained in XSTI. Key parameters affecting chromatographic separation in 2D LC (stationary phase, mobile phase, column temperature, and gradient elution program) and detection by QTOF MS (spray voltage, cone voltage, and ramp collision energy) were optimized in sequence. The configured offline 2D LC system showed an orthogonality of 0.84 and a theoretical peak capacity of 8976. The total saponins in XSTI were fractionated into eleven samples by first-dimension hydrophilic interaction chromatography, and these were further analyzed by reversed-phase UHPLC/QTOF-Fast DDA in negative ion mode. The fragmentation features evidenced by 36 saponin reference standards, high-accuracy MS and Fast-DDA-MS(2) data, elemental composition (C<80, H<120, O<50), double-bond equivalents (DBE 5-15), and searches of an in-house library of Panax notoginseng were utilized simultaneously for structural elucidation. Ultimately, 148 saponins were separated and characterized, 80 of which had not previously been isolated from P. notoginseng. An in-depth depiction of the chemical composition of XSTI was thus achieved. The results should improve understanding of the therapeutic basis of XSTI and promote the quality standards of XSTI and other homologous products. Copyright © 2016. Published by Elsevier B.V.

  15. Quantum discriminant analysis for dimensionality reduction and classification

    Science.gov (United States)

    Cong, Iris; Duan, Luming

    2016-07-01

    We present quantum algorithms to efficiently perform discriminant analysis for dimensionality reduction and classification over an exponentially large input data set. Compared with the best-known classical algorithms, the quantum algorithms show an exponential speedup in both the number of training vectors M and the feature space dimension N. We generalize the previous quantum algorithm for solving systems of linear equations (2009 Phys. Rev. Lett. 103 150502) to efficiently implement a Hermitian chain product of k trace-normalized N × N Hermitian positive-semidefinite matrices with time complexity O(log N). Using this result, we perform linear as well as nonlinear Fisher discriminant analysis for dimensionality reduction over M vectors, each in an N-dimensional feature space, in time O(p polylog(MN)/ε³), where ε denotes the tolerance error, and p is the number of principal projection directions desired. We also present a quantum discriminant analysis algorithm for data classification with time complexity O(log(MN)/ε³).
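    The classical Fisher discriminant that the quantum algorithm accelerates reduces to solving S_w w = μ_B − μ_A for the projection direction; a minimal NumPy sketch on synthetic two-class data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Two synthetic classes in a 5-dimensional feature space (assumed data)
A = rng.normal(loc=0.0, size=(100, 5))
B = rng.normal(loc=2.0, size=(100, 5))

mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
Sw = np.cov(A.T) + np.cov(B.T)           # within-class scatter matrix
w = np.linalg.solve(Sw, mu_b - mu_a)     # Fisher direction: Sw^{-1}(mu_b - mu_a)

# Classify by thresholding the 1D projection at the midpoint of class means
thresh = ((A @ w).mean() + (B @ w).mean()) / 2
correct = np.sum(A @ w < thresh) + np.sum(B @ w > thresh)
accuracy = correct / 200
```

    Classically this costs time polynomial in M and N; the quantum algorithm's claim is the same analysis in time polylogarithmic in both.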

  16. Complementing Gender Analysis Methods.

    Science.gov (United States)

    Kumar, Anant

    2016-01-01

    The existing gender analysis frameworks start from the premise that men and women are equal and should be treated equally. These frameworks emphasize the equal distribution of resources between men and women and assume that this will bring equality, which is not always true. Despite equal distribution of resources, women tend to suffer and experience discrimination in many areas of their lives, such as the power to control resources within social relationships and the need for emotional security and reproductive rights within interpersonal relationships. These frameworks hold that patriarchy as an institution plays an important role in women's oppression and exploitation and is a barrier to their empowerment and rights. Thus, some think that by ensuring equal distribution of resources and empowering women economically, institutions like patriarchy can be challenged. Because these frameworks are based on an equality principle that puts men and women in competing roles, real equality will never be achieved. In contrast to the existing gender analysis frameworks, the Complementing Gender Analysis framework proposed by the author provides a new approach to gender analysis which not only recognizes the role of economic empowerment and equal distribution of resources but also suggests incorporating the concepts of social capital, equity, and doing gender into gender analysis. It is based on a perceived equity principle that puts men and women in complementing roles, which may lead to equality. In this article the author reviews the mainstream gender theories in development from the viewpoint of the complementary roles of gender. This alternative view is argued on the basis of existing literature and an anecdote from observations made by the author. While criticizing equality theory, the author offers equity theory for resolving gender conflict by using the concepts of social and psychological capital.

  17. Pulmonary vasculature in dogs assessed by three-dimensional fractal analysis and chemometrics.

    Science.gov (United States)

    Müller, Anna V; Marschner, Clara B; Kristensen, Annemarie T; Wiinberg, Bo; Sato, Amy F; Rubio, Jose M A; McEvoy, Fintan J

    2017-11-01

    Fractal analysis of canine pulmonary vessels could allow quantification of their space-filling properties. Aims of this prospective, analytical, cross-sectional study were to describe methods for reconstructing three dimensional pulmonary arterial vascular trees from computed tomographic pulmonary angiogram, applying fractal analyses of these vascular trees in dogs with and without diseases that are known to predispose to thromboembolism, and testing the hypothesis that diseased dogs would have a different fractal dimension than healthy dogs. A total of 34 dogs were sampled. Based on computed tomographic pulmonary angiograms findings, dogs were divided in three groups: diseased with pulmonary thromboembolism (n = 7), diseased but without pulmonary thromboembolism (n = 21), and healthy (n = 6). An observer who was aware of group status created three-dimensional pulmonary artery vascular trees for each dog using a semiautomated segmentation technique. Vascular three-dimensional reconstructions were then evaluated using fractal analysis. Fractal dimensions were analyzed, by group, using analysis of variance and principal component analysis. Fractal dimensions were significantly different among the three groups taken together (P = 0.001), but not between the diseased dogs alone (P = 0.203). The principal component analysis showed a tendency of separation between healthy control and diseased groups, but not between groups of dogs with and without pulmonary thromboembolism. Findings indicated that computed tomographic pulmonary angiogram images can be used to reconstruct three-dimensional pulmonary arterial vascular trees in dogs and that fractal analysis of these three-dimensional vascular trees is a feasible method for quantifying the spatial relationships of pulmonary arteries. These methods could be applied in further research studies on pulmonary and vascular diseases in dogs. © 2017 American College of Veterinary Radiology.
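    Fractal (box-counting) dimension estimation of the kind applied to the vascular trees can be sketched on a 2D point set, where a space-filling set should score close to 2 (illustrative only; the study works on 3D reconstructions):

```python
import numpy as np

def box_count_dim(points, sizes):
    """Estimate the box-counting dimension of a point set in [0, 1)^2."""
    counts = []
    for s in sizes:
        # Count the distinct boxes of side s that contain at least one point
        boxes = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(boxes))
    # The dimension is the slope of log(count) versus log(1/size)
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# A uniformly filled square should give a dimension close to 2
rng = np.random.default_rng(0)
pts = rng.random((20000, 2))
dim = box_count_dim(pts, sizes=[0.2, 0.1, 0.05, 0.025])
```

    A branching vascular tree fills space only partially, so its dimension falls between 1 and the embedding dimension, which is what makes it a compact descriptor of space-filling properties.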

  18. Experimental study on two-dimensional film flow with local measurement methods

    International Nuclear Information System (INIS)

    Yang, Jin-Hwa; Cho, Hyoung-Kyu; Kim, Seok; Euh, Dong-Jin; Park, Goon-Cherl

    2015-01-01

    Highlights: • An experimental study on the two-dimensional film flow with lateral air injection was performed. • The ultrasonic thickness gauge was used to measure the local liquid film thickness. • The depth-averaged PIV (Particle Image Velocimetry) method was applied to measure the local liquid film velocity. • The uncertainty of the depth-averaged PIV was quantified with a validation experiment. • Characteristics of two-dimensional film flow were classified following the four different flow patterns. - Abstract: In accident conditions of a nuclear reactor, multidimensional two-phase flows may occur in the reactor vessel downcomer and reactor core. These have therefore been regarded as important issues for advanced thermal-hydraulic safety analysis. In particular, the multi-dimensional two-phase flow in the upper downcomer during the reflood phase of a large-break loss-of-coolant accident appears as an interaction between a downward liquid flow and a transverse gas flow, which determines the bypass flow rate of the emergency core coolant and, subsequently, the reflood coolant flow rate. At present, some thermal-hydraulic analysis codes incorporate multidimensional modules for nuclear reactor safety analysis. However, their prediction capability for the two-phase cross flow in the upper downcomer has not been validated sufficiently against experimental data based on local measurements. For this reason, an experimental study was carried out on the two-phase cross flow to clarify the hydraulic phenomenon and provide local measurement data for the validation of computational tools. The experiment was performed in a 1/10-scale unfolded downcomer of the Advanced Power Reactor 1400 (APR1400). Pitot tubes, a depth-averaged PIV method and an ultrasonic thickness gauge were applied for local measurement of the air velocity, the liquid film velocity and the liquid film thickness, respectively. The uncertainty of the depth-averaged PIV method for the averaged liquid film velocity was quantified with a validation experiment.
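    The displacement estimation at the heart of a PIV method can be sketched as the peak of an FFT-based cross-correlation between two interrogation windows (synthetic frames with a known shift; real PIV adds sub-pixel refinement and the depth averaging used in this study):

```python
import numpy as np

rng = np.random.default_rng(5)
# One synthetic particle-image window and a copy displaced by a known shift
frame_a = rng.random((64, 64))
shift = (3, 5)                                   # (rows, cols), assumed known
frame_b = np.roll(frame_a, shift, axis=(0, 1))

# Circular cross-correlation via FFT; its peak sits at the displacement
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
```

    Dividing the recovered displacement by the inter-frame time gives the local film velocity for that window.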

  19. Time-varying subspace dimensionality: Useful as a seismic signal detection method?

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2012-12-01

    We explore the application of dimensional analysis to the problem of anomaly detection in multichannel time series. These techniques, which have been used for real-time load management in large computer systems, revolve around the time-varying dimensionality estimates of the signal subspace. Our application is to multiple channels of incoming seismic waveform data, as from a large array or across a network. Subspace analysis has been applied to seismic data before, but the routine use of the method is for the identification of a particular signal type, and requires a priori information about the range of signals for which the algorithm is searching. In this paradigm, a known but variable source (such as a mining region or aftershock sequence) provides known waveforms that are assumed to span the space occupied by incoming events of interest. Singular value decomposition or principal components analysis of the identified waveforms will allow for the selection of basis vectors that define the subspace onto which incoming signals are projected, to determine whether they belong to the source population of interest. In our application we do not seek to compare incoming signals to previously identified waveforms, but instead we seek to detect anomalies from the background behavior across an array or network. The background seismic levels will describe a signal space whose dimension may change markedly when an earthquake or other signal of interest occurs. We explore a variety of means by which we can evaluate the time-varying dimensionality of the signal space, and we compare the detection performance to other standard event detection methods.
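    The time-varying dimensionality estimate can be sketched as the effective rank of a sliding window of multichannel data: a coherent event shared across channels collapses the signal subspace to a few singular values. The example below is synthetic (8 channels, an assumed 90% energy criterion), not seismic data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_t = 8, 600
data = rng.normal(size=(n_ch, n_t))      # background noise on 8 channels
event = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 200))
data[:, 300:500] += 3 * event            # coherent "event" on every channel

def effective_dim(window, frac=0.9):
    """Number of singular values needed to capture `frac` of the energy."""
    s = np.linalg.svd(window, compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, frac) + 1)

# Sliding-window dimensionality: it drops when the coherent event arrives
dims = [effective_dim(data[:, i:i + 100]) for i in range(0, n_t - 100, 50)]
```

    Background noise needs many singular values to describe; the correlated event is captured by one or two, so a sudden drop in `dims` flags an anomaly without any template waveform.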

  20. New exact solutions of the (2 + 1)-dimensional breaking soliton system via an extended mapping method

    International Nuclear Information System (INIS)

    Ma Songhua; Fang Jianping; Zheng Chunlong

    2009-01-01

    By means of an extended mapping method and a variable separation method, a series of solitary wave solutions, periodic wave solutions and variable separation solutions to the (2 + 1)-dimensional breaking soliton system is derived.

  1. Improved algorithm for three-dimensional inverse method

    Science.gov (United States)

    Qiu, Xuwen

    An inverse method, which works for full 3D viscous applications in turbomachinery aerodynamic design, is developed. The method takes pressure loading and thickness distribution as inputs and computes the 3D blade geometry. The core of the inverse method consists of two closely related steps, which are integrated into the time-marching procedure of a Navier-Stokes solver. First, the pressure loading condition is enforced while flow is allowed to cross the blade surfaces. A permeable blade boundary condition is developed here in order to be consistent with the propagation characteristics of the transient Navier-Stokes equations. In the second step, the blade geometry is adjusted so that the flow-tangency condition is satisfied for the new blade. A Non-Uniform Rational B-Spline (NURBS) model is used to represent the span-wise camber curves. The flow-tangency condition is then transformed into a general linear least-squares fitting problem, which is solved by a robust Singular Value Decomposition (SVD) scheme. This blade geometry generation scheme allows the designer to have direct control over the smoothness of the calculated blade, and thus ensures numerical stability during the iteration process. Numerical experiments show that this method is very accurate, efficient and robust. In target-shooting tests, the program was able to converge accurately to the target blade from a different initial blade. An inverse run is only about 15% slower than its analysis counterpart, which means a complete 3D viscous inverse design can be done in a matter of hours. The method is also shown to work well in the presence of clearance between the blade and the housing, a key factor to be considered in aerodynamic design. The method was first developed for blades without splitters and was then extended to provide the capability of analyzing and designing machines with splitters. This gives designers an integrated environment for the aerodynamic design of machines both with and without splitters.
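    The SVD-based least-squares step can be sketched in isolation: fit an overdetermined linear system with the pseudoinverse, guarding against tiny singular values (synthetic system, not the blade-geometry equations themselves):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 4))                  # 50 equations, 4 unknowns
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.normal(size=50)   # small measurement noise

# Least squares via the SVD pseudoinverse, discarding tiny singular values
# (the robustness property the inverse method relies on)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
S_inv = np.where(S > 1e-10 * S[0], 1.0 / S, 0.0)
x_fit = Vt.T @ (S_inv * (U.T @ b))
```

    Truncating near-zero singular values is what keeps the fit well behaved when the camber-curve control points are nearly redundant, which is the stability property the abstract emphasizes.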

  2. Two-dimensional gel-based protein standardization verified by western blot analysis.

    Science.gov (United States)

    Haniu, Hisao; Watanabe, Daisuke; Kawashima, Yusuke; Matsumoto, Hiroyuki

    2015-01-01

    In the presentation of biochemical data, the amount of a target protein is shown on the y-axis against an x-axis representing time, concentrations of various agents, or other parameters. Western blot is a versatile and convenient tool in such an analysis for quantifying and displaying the amount of proteins. In western blot, so-called housekeeping gene product(s), or "housekeeping proteins," are widely used as internal standards. The rationale for using housekeeping proteins to standardize western blots rests on the assumption that the expression of the chosen housekeeping gene is always constant, which can be false under certain physiological or pathological conditions. We have devised a two-dimensional gel-based standardization method in which the protein content of each sample is determined by scanning the total protein density of two-dimensional gels, and the expression of each protein is quantified as the density of that protein divided by the density of the total proteins on the two-dimensional gel. The advantage of this standardization method is that it does not rely on any presumed "housekeeping proteins" assumed to be expressed constantly under all physiological conditions. We show that the total density of a two-dimensional gel provides a reliable protein standardization parameter by running western blot analysis on one of the proteins analyzed by two-dimensional gels.
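    The standardization arithmetic is simple: each spot density is divided by the total protein density of its own gel before samples are compared. A minimal sketch with made-up densities:

```python
# Two-dimensional gel-based standardization: spot density / total gel density.
# All density values below are illustrative, not measurements.
gels = {
    "control": {"total": 5200.0, "target_spot": 130.0},
    "treated": {"total": 4100.0, "target_spot": 164.0},
}

ratios = {name: g["target_spot"] / g["total"] for name, g in gels.items()}
fold_change = ratios["treated"] / ratios["control"]
```

    Dividing by the gel's own total density removes loading differences between samples without assuming any single protein is constant.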

  3. An improved dimensionality reduction method for meta-transcriptome indexing based diseases classification.

    Science.gov (United States)

    Wang, Yin; Zhou, Yuhua; Li, Yixue; Ling, Zongxin; Zhu, Yan; Guo, Xiaokui; Sun, Hong

    2012-01-01

    Profiling of bacterial 16S ribosomal RNA (rRNA) has been widely used in the classification of microbiota-associated diseases. Dimensionality reduction is among the keys to mining high-dimensional 16S rRNA expression data. High levels of sparsity and redundancy are common in 16S rRNA gene microbial surveys. Traditional feature selection methods are generally restricted to measuring correlated abundances and are limited in discrimination when so few microbes are actually shared across communities. Here we present a Feature Merging and Selection algorithm (FMS) for 16S rRNA expression data. By integrating the Linear Discriminant Analysis method, FMS can reduce the feature dimension with higher accuracy while preserving the relationships between features. Two 16S rRNA expression datasets, from pneumonia and dental decay patients, were used to test the validity of the algorithm. Combined with an SVM, FMS discriminated the classes of both the pneumonia and the dental caries data better than other popular feature selection methods. FMS projects data into a lower dimension while preserving enough features, and thus improves the intelligibility of the result. The results showed that FMS is a valid and reliable method for feature reduction.

  4. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  5. In Vitro Evaluation of Dimensional Stability of Alginate Impressions after Disinfection by Spray and Immersion Methods

    Directory of Open Access Journals (Sweden)

    Fahimeh Hamedi Rad

    2010-12-01

    Background and aims. The most common method for disinfecting alginate impressions is spraying them with disinfecting agents, but some studies have shown that these impressions can also be immersed. The aim of this study was to evaluate the dimensional stability of alginate impressions after disinfection by spray and immersion methods. Materials and methods. Four common disinfecting agents (sodium hypochlorite, Micro 10, glutaraldehyde and Deconex) were selected, and the impressions (n=108) were divided into four groups (n=24) and eight subgroups (n=12) for disinfection with any of the four agents by the spray or immersion method. The control group (n=12) was not disinfected. The impressions were then poured with type III dental stone in a standard manner. The results were analysed by descriptive methods (mean and standard deviation), t-test, two-way analysis of variance (ANOVA) and Duncan test, using SPSS 14.0 software for Windows. Results. The mean changes in length and height differed significantly between the various groups and disinfection methods. For length, the greatest and smallest changes were related to Deconex and Micro 10 in the immersion method, respectively. For height, the greatest and smallest changes were related to glutaraldehyde and Deconex in the immersion method, respectively. Conclusion. Disinfecting alginate impressions with sodium hypochlorite, Deconex or glutaraldehyde by the immersion method is not recommended; it is preferable to disinfect alginate impressions by spraying with Micro 10, sodium hypochlorite or glutaraldehyde, or by immersion in Micro 10.

  6. Novel 3-dimensional analysis to evaluate temporomandibular joint space and shape.

    Science.gov (United States)

    Ikeda, Renie; Oberoi, Snehlata; Wiley, David F; Woodhouse, Christian; Tallman, Melissa; Tun, Wint Wint; McNeill, Charles; Miller, Arthur J; Hatcher, David

    2016-03-01

    The purpose of this study was to present and validate a novel semiautomated method for 3-dimensional evaluation of the temporomandibular joint (TMJ) space and condylar and articular shapes using cone-beam computed tomographic data. The protocol for 3-dimensional analysis with the Checkpoint software (Stratovan, Davis, Calif) was established by analyzing cone-beam computed tomographic images of 14 TMJs representing a range of TMJ shape variations. Upon establishment of the novel method, analysis of 5 TMJs was further repeated by several investigators to assess the reliability of the analysis. Principal components analysis identified 3 key components that characterized how the condylar head shape varied among the 14 TMJs. Principal component analysis allowed determination of the minimum number of landmarks or patch density to define the shape variability in this sample. Average errors of landmark placement ranged from 1.15% to 3.65%, and none of the 121 landmarks showed significant average errors equal to or greater than 5%. Thus, the mean intraobserver difference was small and within the clinically accepted margin of error. Interobserver error was not significantly greater than intraobserver error, indicating that this is a reliable methodology. This novel semiautomatic method is a reliable tool for the 3-dimensional analysis of the TMJ including both the form and the space between the articular eminence and the condylar head. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  7. New dimensionality reduction methods for the representation of high dimensional 'omics' data.

    Science.gov (United States)

    Bécavin, Christophe; Benecke, Arndt

    2011-01-01

    'Omics' data have increased very rapidly in quantity and resolution, and are increasingly recognized as very valuable experimental observations in the systematic study of biological phenomena. The increase in availability, complexity and nonexpert interest in such data requires the urgent development of accurate and efficient dimensionality reduction and visualization techniques. To illustrate this need for new approaches we extensively discuss current methodology in terms of the limitations encountered. We then illustrate a recent example of how combinations of existing techniques can be used to overcome some of the present limitations, and discuss possible future directions for research in this important field of study.
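
    The linear baseline that most 'omics' dimensionality reduction is measured against is principal component analysis. As a minimal sketch (toy data; all variable names are invented, not from the paper), PCA via the SVD of a centered sample-by-feature matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "omics" matrix: 100 samples x 50 features, with most variance
# concentrated in 2 latent directions plus small isotropic noise.
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 50))
X = latent @ loadings + 0.1 * rng.normal(size=(100, 50))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)   # variance ratio per component
Z = Xc @ Vt[:2].T                 # 2-D embedding for visualization

print(f"variance explained by 2 PCs: {explained[:2].sum():.3f}")
```

The scree of `explained` is what reveals that this toy data is effectively two-dimensional, which is exactly the judgment call such techniques must make on real 'omics' data.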

  8. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Directory of Open Access Journals (Sweden)

    Sung-Hye You

    2017-01-01

    Purpose: The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Methods: Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. Results: The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. Conclusion: The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.
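
    The two agreement statistics used in the study can be sketched as follows. The formulas below are the standard one-way ICC(1,1) and mean absolute percentage error; the volume numbers are invented for illustration, not data from the paper:

```python
import numpy as np

def icc_oneway(m1, m2):
    """One-way random-effects ICC(1,1) for two repeated measurements."""
    data = np.column_stack([m1, m2])          # n subjects x k=2 measurements
    n, k = data.shape
    subject_means = data.mean(axis=1)
    grand_mean = data.mean()
    # Between-subject and within-subject mean squares.
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def mape(measured, reference):
    """Mean absolute percentage error against a reference (e.g. specimen) volume."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    return 100.0 * np.mean(np.abs(measured - reference) / reference)

# Hypothetical repeated volume measurements (mL) for five glands.
first  = np.array([0.52, 1.10, 0.75, 2.30, 0.98])
second = np.array([0.50, 1.15, 0.73, 2.28, 1.02])
truth  = np.array([0.55, 1.05, 0.80, 2.20, 1.00])

print(f"ICC  = {icc_oneway(first, second):.3f}")
print(f"MAPE = {mape(first, truth):.2f}%")
```

An ICC near 1 indicates that repeating the measurement barely changes the result, which is how the study's 0.999 for the 3D method should be read.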

  9. Three-dimensional analysis of two-pile caps

    Directory of Open Access Journals (Sweden)

    T.E.T. Buttignol

    This paper compares the results of a nonlinear three-dimensional numerical analysis of pile caps with two piles against the experimental study conducted by Delalibera. The load-carrying capacity, the crack pattern distribution, the principal stresses in the concrete and steel, the deflection and the fracture of the pile cap are verified. The numerical analysis is executed with the finite-element software ATENA 3D, assuming a perfect bond between concrete and steel. The numerical and experimental results are presented and show good agreement, reasserting the results of the experimental model and corroborating the theory.

  10. Three-dimensional temporal reconstruction and analysis of plume images

    Science.gov (United States)

    Dhawan, Atam P.; Disimile, Peter J.; Peck, Charles, III

    1992-01-01

    An experiment with two subsonic jets generating a cross-flow was conducted as part of a study of the structural features of temporal reconstruction of plume images. The flow field structure was made visible using a direct injection flow visualization technique. It is shown that image analysis and temporal three-dimensional visualization can provide new information on the vortical structural dynamics of multiple jets in a cross-flow. It is expected that future developments in image analysis, quantification and interpretation, and flow visualization of rocket engine plume images may provide a tool for correlating the engine diagnostic features by interpreting the evolution of the structures in the plume.

  11. Novel Online Dimensionality Reduction Method with Improved Topology Representing and Radial Basis Function Networks

    Science.gov (United States)

    Ni, Shengqiao; Lv, Jiancheng; Cheng, Zhehao; Li, Mao

    2015-01-01

    This paper presents improvements to the conventional Topology Representing Network to build more appropriate topology relationships. Based on this improved Topology Representing Network, we propose a novel method for online dimensionality reduction that integrates the improved Topology Representing Network and Radial Basis Function Network. This method can find meaningful low-dimensional feature structures embedded in high-dimensional original data space, process nonlinear embedded manifolds, and map the new data online. Furthermore, this method can deal with large datasets for the benefit of improved Topology Representing Network. Experiments illustrate the effectiveness of the proposed method. PMID:26161960
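
    The core idea of using an RBF network to map new data online into an already-computed low-dimensional embedding can be sketched with a plain least-squares Gaussian RBF regressor. This is a generic stand-in, not the authors' improved Topology Representing Network, and all data and parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data on a 1-D manifold embedded in 3-D, with a known
# low-dimensional coordinate t standing in for a learned embedding.
t = np.linspace(0, 1, 200)
X = np.column_stack([np.cos(4 * t), np.sin(4 * t), t]) + 0.01 * rng.normal(size=(200, 3))
Y = t[:, None]                      # target low-dimensional coordinates

# RBF network: Gaussian units centered on a subset of training points.
centers = X[::10]
sigma = 0.5

def design(Xq):
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

W, *_ = np.linalg.lstsq(design(X), Y, rcond=None)

# "Online" step: map a previously unseen point without re-running the embedding.
x_new = np.array([[np.cos(4 * 0.37), np.sin(4 * 0.37), 0.37]])
t_hat = design(x_new) @ W
print(f"predicted coordinate: {t_hat[0, 0]:.3f} (true 0.37)")
```

Once `W` is fitted, each new sample costs only one small matrix-vector product, which is what makes this family of methods suitable for online use.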

  14. Xarray: multi-dimensional data analysis in Python

    Science.gov (United States)

    Hoyer, Stephan; Hamman, Joe; Maussion, Fabien

    2017-04-01

    xarray (http://xarray.pydata.org) is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays, which are the bread and butter of modern geoscientific data analysis. Key features of the package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib, Cartopy), out-of-core computation on datasets that don't fit into memory, a wide range of input/output options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. In this contribution we will present the key features of the library and demonstrate its great potential for a wide range of applications, from (big-)data processing on super computers to data exploration in front of a classroom.
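
    A minimal sketch of the label-based indexing and group-by features described above, on an invented toy temperature array (not data from the presentation):

```python
import numpy as np
import xarray as xr

# Toy 4 x 3 x 2 "temperature" cube with labeled time/lat/lon axes.
temps = xr.DataArray(
    np.arange(24.0).reshape(4, 3, 2),
    dims=("time", "lat", "lon"),
    coords={
        "time": np.array(
            ["2017-01-01", "2017-01-02", "2017-02-01", "2017-02-02"],
            dtype="datetime64[ns]",
        ),
        "lat": [10.0, 20.0, 30.0],
        "lon": [100.0, 110.0],
    },
    name="temperature",
)

jan = temps.sel(time=slice("2017-01-01", "2017-01-31"))  # label-based slicing
monthly = temps.groupby("time.month").mean("time")       # group-by on a datetime axis
zonal = temps.mean(dim="lon")                            # reduce by dimension name

print(monthly.sizes)
```

Selecting by coordinate labels rather than integer positions is what removes the bookkeeping of "axis 0 is time, axis 2 is longitude" from analysis code.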

  15. Development of MARS for multi-dimensional and multi-purpose thermal-hydraulic system analysis

    International Nuclear Information System (INIS)

    Lee, Won Jae; Chung, Bub Dong; Kim, Kyung Doo; Hwang, Moon Kyu; Jeong, Jae Jun; Ha, Kwi Seok; Joo, Han Gyu

    2000-01-01

    MARS (Multi-dimensional Analysis of Reactor Safety) code is being developed by KAERI for the realistic thermal-hydraulic simulation of light water reactor system transients. MARS 1.4 has been developed as a final version of basic code frame for the multi-dimensional analysis of system thermal-hydraulics. Since MARS 1.3, MARS 1.4 has been improved to have the enhanced code capability and user friendliness through the unification of input/output features, code models and code functions, and through the code modernization. Further improvements of thermal-hydraulic models, numerical method and user friendliness are being carried out for the enhanced code accuracy. As a multi-purpose safety analysis code system, a coupled analysis system, MARS/MASTER/CONTEMPT, has been developed using multiple DLL (Dynamic Link Library) techniques of Windows system. This code system enables the coupled, that is, more realistic analysis of multi-dimensional thermal-hydraulics (MARS 2.0), three-dimensional core kinetics (MASTER) and containment thermal-hydraulics (CONTEMPT). This paper discusses the MARS development program, and the developmental progress of the MARS 1.4 and the MARS/MASTER/CONTEMPT focusing on major features of the codes and their verification. It also discusses thermal hydraulic models and new code features under development. (author)

  16. Development of MARS for multi-dimensional and multi-purpose thermal-hydraulic system analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Chung, Bub Dong; Kim, Kyung Doo; Hwang, Moon Kyu; Jeong, Jae Jun; Ha, Kwi Seok; Joo, Han Gyu [Korea Atomic Energy Research Institute, T/H Safety Research Team, Yusung, Daejeon (Korea)

    2000-10-01

    MARS (Multi-dimensional Analysis of Reactor Safety) code is being developed by KAERI for the realistic thermal-hydraulic simulation of light water reactor system transients. MARS 1.4 has been developed as a final version of basic code frame for the multi-dimensional analysis of system thermal-hydraulics. Since MARS 1.3, MARS 1.4 has been improved to have the enhanced code capability and user friendliness through the unification of input/output features, code models and code functions, and through the code modernization. Further improvements of thermal-hydraulic models, numerical method and user friendliness are being carried out for the enhanced code accuracy. As a multi-purpose safety analysis code system, a coupled analysis system, MARS/MASTER/CONTEMPT, has been developed using multiple DLL (Dynamic Link Library) techniques of Windows system. This code system enables the coupled, that is, more realistic analysis of multi-dimensional thermal-hydraulics (MARS 2.0), three-dimensional core kinetics (MASTER) and containment thermal-hydraulics (CONTEMPT). This paper discusses the MARS development program, and the developmental progress of the MARS 1.4 and the MARS/MASTER/CONTEMPT focusing on major features of the codes and their verification. It also discusses thermal hydraulic models and new code features under development. (author)

  17. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Energy Technology Data Exchange (ETDEWEB)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon [Dept. of Radiology, Korea University Anam Hospital, Seoul (Korea, Republic of); Suh, Sangil; Ryoo, In Seon; Seol, Hae Young [Dept. of Radiology, Korea University Guro Hospital, Seoul (Korea, Republic of); Lee, Young Hen; Seo, Hyung Suk [Dept. of Radiology, Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2017-01-15

    The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.

  19. Investigation on methods for dimensionality reduction on hyperspectral image data

    OpenAIRE

    Robin T. Clarke; Vitor Haertel; Maciel Zortea

    2005-01-01

    In the present study, we propose a new simple approach to reduce the dimensionality in hyperspectral image data. The basic assumption consists in assuming that a pixel’s curve of spectral response, as defined in the spectral space by the recorded digital numbers (DNs) at the available spectral bands, can be segmented and each segment can be replaced by a smaller number of statistics, e.g., the mean and the variance, describing the main characteristics of a pixel’s spectral response. Results s...

  20. A new statistical framework for genetic pleiotropic analysis of high dimensional phenotype data.

    Science.gov (United States)

    Wang, Panpan; Rahman, Mohammad; Jin, Li; Xiong, Momiao

    2016-11-07

    The widely used genetic pleiotropic analyses of multiple phenotypes are often designed for examining the relationship between common variants and a few phenotypes. They are not suited for both high dimensional phenotypes and high dimensional genotype (next-generation sequencing) data. To overcome limitations of the traditional genetic pleiotropic analysis of multiple phenotypes, we develop sparse structural equation models (SEMs) as a general framework for a new paradigm of genetic analysis of multiple phenotypes. To incorporate both common and rare variants into the analysis, we extend the traditional multivariate SEMs to sparse functional SEMs. To deal with high dimensional phenotype and genotype data, we employ functional data analysis and alternating direction method of multipliers (ADMM) techniques to reduce data dimension and improve computational efficiency. Using large scale simulations we showed that the proposed methods have higher power to detect true causal genetic pleiotropic structure than other existing methods. Simulations also demonstrate that the gene-based pleiotropic analysis has higher power than the single variant-based pleiotropic analysis. The proposed method is applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) with 11 phenotypes, which identifies a network with 137 genes connected to 11 phenotypes and 341 edges. Among them, 114 genes showed pleiotropic genetic effects and 45 genes were reported to be associated with phenotypes in the analysis or other cardiovascular disease (CVD) related phenotypes in the literature. Our proposed sparse functional SEMs can incorporate both common and rare variants into the analysis, and the ADMM algorithm can efficiently solve the penalized SEMs. Using this model we can jointly infer genetic architecture and causal phenotype network structure, and decompose the genetic effect into direct, indirect and total effects.
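
    The ADMM machinery mentioned above is most easily illustrated on the simplest sparse-regression problem, the lasso. The sketch below is a generic ADMM lasso solver using the standard splitting, not the paper's sparse SEM code, and all data are simulated:

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Cache the factorization used by every x-update.
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        # x-update: ridge-like solve with the current z - u.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft thresholding enforces sparsity exactly.
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # Dual (scaled) update.
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=100)

x_hat = lasso_admm(A, b, lam=1.0)
print("nonzero coefficients:", np.nonzero(x_hat)[0])
```

The same pattern, a smooth solve alternating with a cheap proximal step, is what makes ADMM attractive for penalized models far larger than this toy problem.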

  1. Basic methods of isotope analysis

    International Nuclear Information System (INIS)

    Ochkin, A.V.; Rozenkevich, M.B.

    2000-01-01

    The bases of the most widely applied methods of isotope analysis are briefly presented. The possibilities and analytical characteristics of the mass-spectrometric, spectral, radiochemical and special methods of isotope analysis, including application of magnetic resonance, chromatography and refractometry, are considered [ru]

  2. Three-dimensional stress analysis of plain weave composites

    Science.gov (United States)

    Whitcomb, John D.

    1989-01-01

    Techniques were developed and described for performing three-dimensional finite element analysis of plain weave composites. Emphasized here are aspects of the analysis which are different from analysis of traditional laminated composites, such as the mesh generation and representative unit cells. The analysis was used to study several different variations of plain weaves which illustrate the effects of tow waviness on composite moduli, Poisson's ratios, and internal strain distributions. In-plane moduli decreased almost linearly with increasing tow waviness. The tow waviness was shown to cause large normal and shear strain concentrations in composites subjected to uniaxial load. These strain concentrations may lead to earlier damage initiation than occurs in traditional cross-ply laminates.

  3. Automated High-Dimensional Flow Cytometric Data Analysis

    Science.gov (United States)

    Pyne, Saumyadipta; Hu, Xinli; Wang, Kui; Rossin, Elizabeth; Lin, Tsung-I.; Maier, Lisa; Baecher-Allan, Clare; McLachlan, Geoffrey; Tamayo, Pablo; Hafler, David; de Jager, Philip; Mesirov, Jill

    Flow cytometry is widely used for single cell interrogation of surface and intracellular protein expression by measuring fluorescence intensity of fluorophore-conjugated reagents. We focus on the recently developed procedure of Pyne et al. (2009, Proceedings of the National Academy of Sciences USA 106, 8519-8524) for automated high-dimensional flow cytometric analysis called FLAME (FLow analysis with Automated Multivariate Estimation). It introduced novel finite mixture models of heavy-tailed and asymmetric distributions to identify and model cell populations in a flow cytometric sample. This approach robustly addresses the complexities of flow data without the need for transformation or projection to lower dimensions. It also addresses the critical task of matching cell populations across samples that enables downstream analysis. It thus facilitates application of flow cytometry to new biological and clinical problems. To facilitate pipelining with standard bioinformatic applications such as high-dimensional visualization, subject classification or outcome prediction, FLAME has been incorporated with the GenePattern package of the Broad Institute. Thereby analysis of flow data can be approached similarly as other genomic platforms. We also consider some new work that proposes a rigorous and robust solution to the registration problem by a multi-level approach that allows us to model and register cell populations simultaneously across a cohort of high-dimensional flow samples. This new approach is called JCM (Joint Clustering and Matching). It enables direct and rigorous comparisons across different time points or phenotypes in a complex biological study as well as for classification of new patient samples in a more clinical setting.
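
    FLAME's heavy-tailed, asymmetric mixtures are beyond a short sketch, but the underlying finite-mixture machinery can be illustrated with a plain two-component Gaussian EM on a single simulated "fluorescence channel" (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two simulated cell populations on one fluorescence channel.
data = np.concatenate([rng.normal(1.0, 0.3, 400), rng.normal(4.0, 0.5, 600)])

# EM for a two-component 1-D Gaussian mixture.
w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 5.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibility of each component for each cell.
    pdf = w * np.exp(-0.5 * ((data[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted parameter updates.
    nk = r.sum(axis=0)
    w = nk / len(data)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print(f"weights {w.round(2)}, means {mu.round(2)}")
```

The fitted components play the role of "gates": each cell is assigned to the population with the highest responsibility, without any manual thresholding.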

  4. Retinal cartography. An analysis of two-dimensional and three-dimensional mapping of the retina.

    Science.gov (United States)

    Borodkin, M J; Thompson, J T

    1992-01-01

    The current two-dimensional (2D) retinal drawing chart is an azimuthal equidistant representation of the retina. The distortion produced by this chart was analyzed and compared to other 2D projections, such as stereographic, equal area, and orthographic maps of the retina. Circumferential distortion was calculated for lesions at varying distances from the macula using the azimuthal equidistant retinal map and was found to increase exponentially as a function of the distance from the macula. Circumferential distortion was 57.1% at the equator, 88.5% 3 mm anterior to the equator, and 137.8% 6 mm anterior to the equator. A three-dimensional (3D) model of the retinal surface was created using 3D computer-aided design (CAD) software. This 3D model was able to represent retinal lesions such that their true size, shape, location, and orientation were all conserved. Retinal lesions could be viewed from multiple angles and examined in cross section. The use of 3D CAD software coupled with ultrasound or magnetic resonance imaging data has the potential to represent retinal lesions more accurately than current methods.
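
    The quoted distortions follow from the geometry of the azimuthal equidistant projection: a circle at angular distance θ from the macula has true circumference 2πR·sin θ but is drawn with circumference 2πRθ, so the circumferential distortion is θ/sin θ − 1. A quick check reproduces the equatorial figure (the anterior values additionally depend on the assumed globe radius, which is not given here):

```python
import math

def circumferential_distortion(theta):
    """Relative circumferential stretch of an azimuthal equidistant map
    at angular distance theta (radians) from the projection center."""
    return theta / math.sin(theta) - 1.0

# At the equator the macula-to-lesion arc is a quarter circle (theta = pi/2).
print(f"{100 * circumferential_distortion(math.pi / 2):.1f}%")
```

Evaluating at θ = π/2 gives (π/2)/1 − 1 ≈ 0.571, i.e. the 57.1% equatorial distortion quoted in the abstract.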

  5. Efficient evaluation of Casimir force in arbitrary three-dimensional geometries by integral equation methods

    International Nuclear Information System (INIS)

    Xiong, Jie L.; Tong, M.S.; Atkins, Phillip; Chew, W.C.

    2010-01-01

    In this Letter, we generalized the surface integral equation method for the evaluation of Casimir force in arbitrary three-dimensional geometries. Similar to the two-dimensional case, the evaluation of the mean Maxwell stress tensor is cast into solving a series of three-dimensional scattering problems. The formulation and solution of the three-dimensional scattering problems are well-studied in classical computational electromagnetics. This Letter demonstrates that this quantum electrodynamic phenomenon can be studied using the knowledge and techniques of classical electrodynamics.

  6. One-dimensional calculation of flow branching using the method of characteristics

    International Nuclear Information System (INIS)

    Meier, R.W.; Gido, R.G.

    1978-05-01

    In one-dimensional flow systems, the flow often branches, such as at a tee or manifold. This study develops a formulation for calculating the flow through branch points with one-dimensional method-of-characteristics equations. The resultant equations were verified by comparison with experimental measurements.
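
    As a hedged sketch of how the characteristic equations can be closed at a branch point (the report's actual formulation may differ), a tee junction couples the C+ invariant arriving along the incoming pipe, the C- invariants leaving along the outgoing pipes, and mass conservation into one small linear system; all numbers below are illustrative:

```python
import numpy as np

# Acoustic impedance rho*c and pipe areas (illustrative values).
rho_c = 1.2e6                     # kg/(m^2 s)
A1, A2, A3 = 0.05, 0.03, 0.02    # m^2; pipe 1 feeds pipes 2 and 3

# Characteristic invariants carried to the junction from the previous
# time step: C+ along the incoming pipe, C- along each outgoing pipe.
K_plus = 5.0e5 + rho_c * 2.0      # p_A + rho*c*u_A from upstream point A
K_minus2 = 4.8e5 - rho_c * 1.5
K_minus3 = 4.9e5 - rho_c * 1.0

# Unknowns: junction pressure p and the three pipe velocities u1, u2, u3.
#   p + rho_c*u1 = K_plus       (C+ into the junction)
#   p - rho_c*u2 = K_minus2     (C- out of the junction)
#   p - rho_c*u3 = K_minus3
#   A1*u1 = A2*u2 + A3*u3       (mass conservation at the branch)
M = np.array([
    [1.0,  rho_c,    0.0,    0.0],
    [1.0,  0.0,   -rho_c,    0.0],
    [1.0,  0.0,      0.0, -rho_c],
    [0.0,   A1,     -A2,    -A3],
])
rhs = np.array([K_plus, K_minus2, K_minus3, 0.0])
p, u1, u2, u3 = np.linalg.solve(M, rhs)

print(f"junction pressure {p:.0f} Pa; u1={u1:.3f}, u2={u2:.3f}, u3={u3:.3f} m/s")
```

The key modeling choice is that a single pressure is shared by all pipes at the junction; losses or unequal junction pressures would add terms to the first three equations.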

  7. Method and system for manipulating a digital representation of a three-dimensional object

    DEFF Research Database (Denmark)

    2010-01-01

    A method of manipulating a three-dimensional virtual building block model by means of two-dimensional cursor movements, the virtual building block model including a plurality of virtual building blocks each including a number of connection elements for connecting the virtual building block with a...

  8. [New progress on three-dimensional movement measurement analysis of human spine].

    Science.gov (United States)

    Qiu, Xiao-wen; He, Xi-jing; Huang, Si-hua; Liang, Bao-bao; Yu, Zi-rui

    2015-05-01

    Spinal biomechanics, especially the range of motion (ROM) of the spine, is closely connected with spinal surgery. Changes in ROM are an important indicator of diseases and injuries of the spine, and an essential standard for evaluating the effect of surgeries and therapies on the spine. The analysis of ROM dates back to the invention of X-ray imaging and even earlier. With the development of science and technology, and the optimization of various calculation methods, diverse measuring methods have emerged: from imaging methods to non-imaging methods, from two-dimensional to three-dimensional, from measuring directly on X-ray films to automatic computation by computer. Analysis of ROM has made great progress; some older methods can no longer meet current needs and have disappeared, while classical methods such as plain radiography remain vital. Combining different methods, three-dimensional analysis and in vivo spine research are the trends in ROM analysis, and more and more researchers have begun to focus on in vivo studies. In this paper, the advantages and disadvantages of recently used methods are presented through a review of the recent literature, providing reference and help for the movement analysis of the spine.

  9. Pulmonary vasculature in dogs assessed by three-dimensional fractal analysis and chemometrics

    DEFF Research Database (Denmark)

    Müller, Anna V; Marschner, Clara B; Kristensen, Annemarie T

    2017-01-01

    Fractal analysis of canine pulmonary vessels could allow quantification of their space-filling properties. The aims of this prospective, analytical, cross-sectional study were to describe methods for reconstructing three-dimensional pulmonary arterial vascular trees from computed tomographic pulmonary angiograms, to apply fractal analyses of these vascular trees in dogs with and without diseases that are known to predispose to thromboembolism, and to test the hypothesis that diseased dogs would have a different fractal dimension than healthy dogs. A total of 34 dogs were sampled. Based on computed tomographic pulmonary angiograms, vascular trees were reconstructed for each dog using a semiautomated segmentation technique. Vascular three-dimensional reconstructions were then evaluated using fractal analysis. Fractal dimensions were analyzed, by group, using analysis of variance and principal component analysis. Fractal dimensions were significantly different among groups.
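
    Fractal dimension of a segmented structure is typically estimated by box counting. Below is a generic 2-D sketch of the procedure (the study worked on 3-D vascular trees, but the principle is identical; the sanity-check image is invented):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2-D array by box counting."""
    counts = []
    for s in sizes:
        # Count s-by-s boxes containing at least one foreground pixel.
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Slope of log N(s) against log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-dimensional.
filled = np.ones((256, 256), dtype=bool)
print(f"dimension of filled square: {box_counting_dimension(filled):.2f}")
```

A space-filling vascular tree yields a dimension between 1 (a single line) and the embedding dimension, which is what makes the statistic a compact descriptor of branching complexity.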

  10. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  11. Comparison of two three-dimensional cephalometric analysis computer software.

    Science.gov (United States)

    Sawchuk, Dena; Alhadlaq, Adel; Alkhadra, Thamer; Carlyle, Terry D; Kusnoto, Budi; El-Bialy, Tarek

    2014-10-01

    Three-dimensional cephalometric analyses are attracting increasing attention in orthodontics. The aim of this study was to compare two software programs for three-dimensional cephalometric analysis of orthodontic treatment outcomes. Twenty cone-beam computed tomography images were obtained using the i-CAT(®) imaging system from patients' records as part of their regular orthodontic records. The images were analyzed using InVivoDental5.0 (Anatomage Inc.) and 3DCeph™ (University of Illinois at Chicago, Chicago, IL, USA) software. Before- and after-treatment data were analyzed using the t-test. Reliability testing using the interclass correlation coefficient was stronger for InVivoDental5.0 (0.83-0.98) compared with 3DCeph™ (0.51-0.90). Paired t-test comparison of the two programs showed no statistically significant difference in the measurements made with them. InVivoDental5.0 measurements are more reproducible and the software is more user-friendly compared with 3DCeph™. There was no statistical difference between the two programs in linear or angular measurements, but 3DCeph™ is more time-consuming in performing three-dimensional analysis than InVivoDental5.0.

  12. A three-dimensional layerwise-differential quadrature free vibration analysis of laminated cylindrical shells

    International Nuclear Information System (INIS)

    Malekzadeh, P.; Farid, M.; Zahedinejad, P.

    2008-01-01

    A mixed layerwise theory and differential quadrature (DQ) method (LW-DQ) for three-dimensional free vibration analysis of arbitrarily laminated circular cylindrical shells is introduced. Using the layerwise theory in conjunction with the three-dimensional form of Hamilton's principle, the transversely discretized equations of motion and the related boundary conditions are obtained. Then, the DQ method is employed to discretize the resulting equations in the axial direction. The fast convergence behavior of the method is demonstrated and its accuracy is verified by comparing the results with those of other shell theories obtained using conventional methods and also with those of the ANSYS software. In the case of arbitrarily laminated shells with simply supported ends, the exact solution is developed for comparison purposes. It is shown that converged, accurate solutions are obtained using only a few DQ grid points. The lower computational effort of the proposed approach compared with ANSYS is also demonstrated.
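
    The differential quadrature idea reduces a derivative at every grid point to one matrix-vector product over the grid values. A minimal sketch of the first-order weighting matrix, using the standard explicit (Shu-type) formula from Lagrange interpolation, with a polynomial check (grid and test function chosen here for illustration):

```python
import numpy as np

def dq_first_derivative_matrix(x):
    """First-order differential quadrature weights on an arbitrary 1-D grid x."""
    n = len(x)
    # M[i] = product over k != i of (x_i - x_k), from the Lagrange basis.
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = diff.prod(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -A[i].sum()      # each row of weights sums to zero
    return A

# Chebyshev-Gauss-Lobatto points avoid the ill-conditioning of uniform grids.
n = 12
x = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))
A = dq_first_derivative_matrix(x)

f = x**3
df_exact = 3 * x**2
print("max error:", np.abs(A @ f - df_exact).max())
```

Because the weights come from exact polynomial interpolation, derivatives of any polynomial of degree below the grid size are reproduced to machine precision, which explains the fast convergence with few grid points reported above.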

  13. Statistical mechanical analysis of (1 + infinity) dimensional disordered systems

    CERN Document Server

    Skantzos, N S

    2001-01-01

    Valuable insight into the theory of disordered systems and spin-glasses has been offered by two classes of exactly solvable models: one-dimensional models and mean-field (infinite-range) ones, which, each carry their own specific techniques and restrictions. Both classes of models are now considered as 'exactly solvable' in the sense that in the thermodynamic limit the partition sum can been carried out analytically and the average over the disorder can be performed using methods which are well understood. In this thesis I study equilibrium properties of spin systems with a combination of one-dimensional short- and infinite-range interactions. I find that such systems, under either synchronous or asynchronous spin dynamics, and even in the absence of disorder, lead to phase diagrams with first-order transitions and regions with a multiple number of locally stable states. I then proceed to the study of recurrent neural network models with (1+infinity)-dimensional interactions, and find that the competing short...

  14. Numerical methods and analysis of multiscale problems

    CERN Document Server

    Madureira, Alexandre L

    2017-01-01

    This book is about the numerical modeling of multiscale problems, and introduces several asymptotic analysis and numerical techniques which are necessary for a proper approximation of equations that depend on different physical scales. Aimed at advanced undergraduate and graduate students in mathematics, engineering and physics, or researchers seeking a no-nonsense approach, it discusses examples in their simplest possible settings, removing mathematical hurdles that might hinder a clear understanding of the methods. The problems considered are given by singularly perturbed reaction-advection-diffusion equations in one- and two-dimensional domains, partial differential equations in domains with rough boundaries, and equations with oscillatory coefficients. This work shows how asymptotic analysis can be used to develop and analyze models and numerical methods that are robust and work well for a wide range of parameters.

  15. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    Science.gov (United States)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  16. Discretization errors associated with Reproducing Kernel Methods: One-dimensional domains

    Energy Technology Data Exchange (ETDEWEB)

    Voth, T.E.; Christon, M.A.

    2000-01-10

    The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory and modified kernels to produce either "mesh-free" or "mesh-full" methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The von Neumann analysis results are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first- and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.
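The flavor of such a von Neumann analysis can be illustrated on a much simpler semi-discretization than RKPM (an illustrative sketch, not the paper's method): substituting a Fourier mode into a second-order central-difference approximation of the 1D advection equation yields the numerical phase speed as a function of the non-dimensional wavenumber kh.

```python
import numpy as np

def phase_speed_ratio(kh):
    """Numerical-to-exact phase speed ratio for the semi-discrete
    1D advection equation u_t + c u_x = 0 with second-order central
    differences: u_j' = -c (u_{j+1} - u_{j-1}) / (2h).
    Substituting the Fourier mode u_j = exp(i k j h) gives
    c_num / c = sin(kh) / (kh)."""
    return np.sin(kh) / kh

# Phase speed is accurate for well-resolved waves (kh -> 0) and
# degrades toward the grid Nyquist limit (kh -> pi).
for kh in (0.1, 0.5, 1.0, np.pi / 2):
    print(f"kh = {kh:.3f}: c_num/c = {phase_speed_ratio(kh):.4f}")
```

The same substitution with other spatial operators (or kernels) changes only the symbol sin(kh)/(kh), which is how the dispersive character of a discretization is compared across wavenumbers.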

  17. Energy method for multi-dimensional balance laws with non-local dissipation

    KAUST Repository

    Duan, Renjun

    2010-06-01

    In this paper, we are concerned with a class of multi-dimensional balance laws with a non-local dissipative source which arise as simplified models for the hydrodynamics of radiating gases. At first we introduce the energy method in the setting of smooth perturbations and study the stability of constant states. Precisely, we use Fourier space analysis to quantify the energy dissipation rate and recover the optimal time-decay estimates for perturbed solutions via an interpolation inequality in Fourier space. As an application, the developed energy method is used to prove stability of smooth planar waves in all dimensions n ≥ 2, and also to show existence and stability of time-periodic solutions in the presence of a time-periodic source. Optimal rates of convergence of solutions towards the planar waves or time-periodic states are also shown for initial perturbations in L1. © 2009 Elsevier Masson SAS.

  18. Numerical method for three dimensional steady-state two-phase flow calculations

    International Nuclear Information System (INIS)

    Raymond, P.; Toumi, I.

    1992-01-01

    This paper presents the numerical scheme which was developed for the FLICA-4 computer code to calculate three-dimensional steady-state two-phase flows. This computer code is devoted to steady-state and transient thermal-hydraulics analysis of nuclear reactor cores. The first section briefly describes the FLICA-4 flow modelling. Then, in order to introduce the numerical method for steady-state computations, some details are given about the implicit numerical scheme, based upon an approximate Riemann solver, which was developed for the calculation of flow transients. The third section deals with the numerical method for steady-state computations, which is derived from this previous general scheme, and its optimization. We give some numerical results for steady-state calculations and comparisons of the required CPU time and memory for various meshes and linear system solvers.

  19. Dimensionality reduction for the analysis of brain oscillations.

    Science.gov (United States)

    Haufe, Stefan; Dähne, Sven; Nikulin, Vadim V

    2014-11-01

    Neuronal oscillations have been shown to be associated with perceptual, motor and cognitive brain operations. While complex spatio-temporal dynamics are a hallmark of neuronal oscillations, they also represent a formidable challenge for the proper extraction and quantification of oscillatory activity with non-invasive recording techniques such as EEG and MEG. In order to facilitate the study of neuronal oscillations, we present a general-purpose pre-processing approach, which can be applied for a wide range of analyses including, but not restricted to, inverse modeling and multivariate single-trial classification. The idea is to use dimensionality reduction with spatio-spectral decomposition (SSD) instead of the commonly and almost exclusively used principal component analysis (PCA). The key advantage of SSD lies in selecting components explaining oscillation-related variance instead of just any variance, as in the case of PCA. For the validation of SSD pre-processing we performed extensive simulations with different inverse modeling algorithms and signal-to-noise ratios. In all these simulations SSD invariably outperformed PCA, often by a large margin. Moreover, using a database of multichannel EEG recordings from 80 subjects, we show that pre-processing with SSD significantly increases the performance of single-trial classification of imagined movements, compared to classification with PCA pre-processing or without any dimensionality reduction. Our simulations and analysis of real EEG experiments show that, while not being supervised, the SSD algorithm is capable of extracting components primarily relating to the signal of interest, often using as little as 20% of the data variance, instead of > 90% variance as in the case of PCA. Given its ease of use, absence of supervision, and capability to efficiently reduce the dimensionality of multivariate EEG/MEG data, we advocate the application of SSD pre-processing for the analysis of spontaneous and induced neuronal
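The core computation behind SSD can be sketched as a generalized eigenvalue problem that maximizes variance in the frequency band of interest relative to variance at flanking (noise) frequencies. The following is a simplified sketch with synthetic covariance matrices (the toy covariances and variable names are assumptions; real use would estimate them from band-pass filtered EEG/MEG data):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Simulated covariance matrices (assumed inputs): in practice these
# would be estimated from data filtered at the frequency of interest
# (signal) and at flanking frequencies (noise).
n_ch = 8
source = rng.standard_normal((n_ch, 1))
C_signal = source @ source.T + 0.1 * np.eye(n_ch)   # one strong oscillatory source
C_noise = np.eye(n_ch)                              # flat noise covariance

# SSD core step: generalized eigendecomposition maximizing the
# signal-to-noise variance ratio w' C_signal w / w' C_noise w.
evals, evecs = eigh(C_signal, C_noise)
order = np.argsort(evals)[::-1]          # sort by SNR, descending
W = evecs[:, order]                      # spatial filters, best first

# Keep the top components; their eigenvalues are the achieved SNR ratios.
print("top SNR ratio:", evals[order][0])
```

Projecting the data onto the leading columns of W keeps only the oscillation-dominated subspace, which is the sense in which SSD selects "oscillation-related variance" rather than total variance.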

  20. HDBStat!: A platform-independent software suite for statistical analysis of high dimensional biology data

    Directory of Open Access Journals (Sweden)

    Brand Jacob PL

    2005-04-01

    Full Text Available Abstract Background Many efforts in microarray data analysis are focused on providing tools and methods for the qualitative analysis of microarray data. HDBStat! (High-Dimensional Biology-Statistics) is a software package designed for the analysis of high-dimensional biology data such as microarray data. It was initially developed for the analysis of microarray gene expression data, but it can also be used for some applications in proteomics and other aspects of genomics. HDBStat! provides statisticians and biologists a flexible and easy-to-use interface to analyze complex microarray data using a variety of methods for data preprocessing, quality control analysis and hypothesis testing. Results Results generated from data preprocessing methods, quality control analysis and hypothesis testing methods are output in the form of Excel CSV tables, graphs and an Html report summarizing the data analysis. Conclusion HDBStat! is a platform-independent software package that is freely available to academic institutions and non-profit organizations. It can be downloaded from our website http://www.soph.uab.edu/ssg_content.asp?id=1164.

  1. A modified cubic B-spline differential quadrature method for three-dimensional non-linear diffusion equations

    Science.gov (United States)

    Dahiya, Sumita; Mittal, Ramesh Chandra

    2017-07-01

    This paper employs a differential quadrature scheme for solving non-linear partial differential equations. The differential quadrature method (DQM), along with a modified cubic B-spline basis, has been adopted to deal with a three-dimensional non-linear Brusselator system, enzyme kinetics of the Michaelis-Menten type, and Burgers' equation. The method has been applied efficiently to three-dimensional equations. A simple algorithm and minimal computational effort are two of the major achievements of the scheme. Moreover, this methodology produces numerical solutions not only at the knot points but at every point in the domain under consideration. A stability analysis has been carried out. The scheme provides convergent approximate solutions, handles different cases, and is particularly beneficial for higher-dimensional non-linear PDEs with irregularities in initial data or with initial-boundary conditions that are discontinuous in nature, because of its capability of damping the spurious oscillations induced by the high-frequency components of solutions.
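The backbone of any DQM solver is the derivative weighting matrix that turns differentiation into a matrix-vector product. A minimal sketch of the classical polynomial-based DQ weights for a first derivative follows (Shu's explicit Lagrange-interpolation formula; the paper's modified cubic B-spline basis would replace the polynomial basis assumed here):

```python
import numpy as np

def dq_weights(x):
    """First-derivative differential quadrature weights on grid x:
    u'(x_i) ~ sum_j a[i, j] u(x_j), exact for polynomials of
    degree < len(x) (Shu's explicit formulation)."""
    x = np.asarray(x, float)
    n = len(x)
    # M'(x_i) = prod_{k != i} (x_i - x_k)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    Mp = diff.prod(axis=1)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = Mp[i] / ((x[i] - x[j]) * Mp[j])
        a[i, i] = -a[i].sum()   # rows sum to zero (exact for constants)
    return a

x = np.cos(np.pi * np.arange(7) / 6)   # Chebyshev-Gauss-Lobatto points
D = dq_weights(x)
print(np.max(np.abs(D @ x**3 - 3 * x**2)))   # near machine precision
```

Applying D along each coordinate direction of a tensor-product grid is what reduces a PDE such as Burgers' equation to a system of ODEs in time.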

  2. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  3. Tag gas burnup based on three-dimensional FTR analysis

    International Nuclear Information System (INIS)

    Kidman, R.B.

    1976-01-01

    Flux spectra from a three-dimensional diffusion theory analysis of the Fast Test Reactor (FTR) are used to predict gas tag ratio changes, as a function of exposure, for each FTR fuel and absorber subassembly plenum. These flux spectra are also used to predict Xe-125 equilibrium activities in absorber plena in order to assess the feasibility of using Xe-125 gamma rays to detect and distinguish control rod failures from fuel rod failures. Worst case tag burnup changes are used in conjunction with burnup and mass spectrometer uncertainties to establish the minimum spacing of tags which allows the tags to be unambiguously identified

  4. Analysis and visualization of complex unsteady three-dimensional flows

    Science.gov (United States)

    Van Dalsem, William R.; Buning, Pieter G.; Dougherty, F. Carroll; Smith, Merritt H.

    1989-01-01

    Flow field animation is the natural choice as a tool in the analysis of the numerical simulations of complex unsteady three-dimensional flows. The PLOT4D extension of the widely used PLOT3D code to allow the interactive animation of a broad range of flow variables was developed and is presented. To allow direct comparison with unsteady experimental smoke and dye flow visualization, the code STREAKER was developed to produce time accurate streaklines. Considerations regarding the development of PLOT4D and STREAKER, and example results are presented.

  5. Fractal analysis of retinal vascular tree: segmentation and estimation methods

    Directory of Open Access Journals (Sweden)

    Marcelo Bezerra de Melo de Mendonça

    2007-06-01

    Full Text Available PURPOSE: Although it has been proposed that the retinal vasculature has a fractal structure, no standardization of the segmentation method or of the fractal dimension calculation method has been performed, resulting in great variability among reported values of fractal dimensions. This study was designed to determine whether the estimation of retinal vessel fractal dimensions depends on the vascular segmentation method and on the dimension calculation method. METHODS: Ten retinal photographs were segmented to extract their vascular trees by four computational methods ("multithreshold", "scale-space", "pixel classification" and "ridge based detection"). Their "information", "mass-radius" and "box-counting" fractal dimensions were then calculated and compared with the dimensions of the same vascular trees obtained by manual segmentation (gold standard). RESULTS: The mean fractal dimensions varied across the groups of different segmentation methods: from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension, and from 1.48 to 1.57 for the mass-radius dimension. The use of different computational methods of vascular segmentation, as well as of different dimension calculation methods, introduced statistically significant differences in the fractal dimension values of the vascular trees. CONCLUSION: The estimation of retinal vasculature fractal dimensions depended on both the vascular segmentation method and the dimension calculation method used.
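Of the three dimension estimators compared above, the box-counting dimension is the simplest to implement: count the boxes of side s that contain foreground pixels, then fit log N(s) against log(1/s). A minimal sketch with equal-sized boxes follows (a real pipeline would apply it to the segmented vessel mask):

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting fractal dimension of a binary image:
    count boxes of side s containing foreground pixels, then fit
    log N(s) vs. log(1/s)."""
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in sizes:
        h = img.shape[0] // s * s
        w = img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-dimensional, a straight line 1-dimensional.
square = np.ones((256, 256))
line = np.zeros((256, 256)); line[128, :] = 1
print(box_counting_dimension(square))  # close to 2
print(box_counting_dimension(line))    # close to 1
```

A vessel tree yields a non-integer slope between these extremes, which is the quantity the study compares across segmentation methods.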

  6. Dimensionality reduction using Principal Component Analysis for network intrusion detection

    Directory of Open Access Journals (Sweden)

    K. Keerthi Vasan

    2016-09-01

    Full Text Available Intrusion detection is the identification of malicious activities in a given network by analyzing its traffic. Data mining techniques used for this analysis study the traffic traces and identify hostile flows in the traffic. Dimensionality reduction in data mining focuses on representing data with the minimum number of dimensions such that its properties are not lost, thereby reducing the underlying complexity in processing the data. Principal Component Analysis (PCA) is one of the prominent dimensionality reduction techniques widely used in network traffic analysis. In this paper, we focus on the efficiency of PCA for intrusion detection and determine its Reduction Ratio (RR), the ideal number of Principal Components needed for intrusion detection, and the impact of noisy data on PCA. We carried out experiments with PCA using various classifier algorithms on two benchmark datasets, namely KDD CUP and UNB ISCX. Experiments show that the first 10 Principal Components are effective for classification. The classification accuracy for 10 Principal Components is about 99.7% and 98.8%, nearly the same as the accuracy obtained using the original 41 features for KDD and 28 features for ISCX, respectively.
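The variance-retention argument behind keeping only the first few Principal Components can be reproduced in a few lines of NumPy on synthetic data (a sketch only; the matrix shapes, noise level and number of latent directions here are stand-ins, not the KDD CUP data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a traffic-feature matrix (the paper uses 41
# KDD CUP features); rows are connections, columns are features, and
# only 10 latent directions carry real structure.
n_samples, n_features = 500, 41
latent = rng.standard_normal((n_samples, 10))
mixing = rng.standard_normal((10, n_features))
X = latent @ mixing + 0.01 * rng.standard_normal((n_samples, n_features))

# PCA via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # per-component variance fraction

k = 10                                   # number of retained components
Z = Xc @ Vt[:k].T                        # reduced representation, (500, 10)
print("variance retained by 10 PCs:", explained[:k].sum())
```

A classifier trained on Z instead of X then operates on 10 columns rather than 41 with almost no loss of information, which mirrors the accuracy result reported above.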

  7. Non-Dimensional Analysis of a Two-Dimensional Beam Using Linear Stiffness Matrix in Absolute Nodal Coordinate Formulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kun Woo; Lee, Jae Wook; Jang, Jin Seok; Oh, Joo Young [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of); Kim, Hyung Ryul [Agency for Defense Development, Daejeon (Korea, Republic of); Kang, Ji Heon; Yoo, Wan Suk [Pusan Nat’l Univ., Busan (Korea, Republic of)

    2017-01-15

    The absolute nodal coordinate formulation was developed in the mid-1990s and is used in flexible dynamic analysis. In the process of deriving the equation of motion, if the order of the polynomial describing the displacement field increases, the degrees of freedom increase and the analysis time increases as well. Therefore, in this study, the primary objective was to reduce the analysis time by transforming the dimensional equation of motion into a non-dimensional equation of motion. After the shape function was rearranged to be non-dimensional and the nodal coordinates were rearranged to be in the length dimension, the non-dimensional mass matrix, stiffness matrix, and conservative force were derived from the non-dimensional variables. The verification and efficiency of this non-dimensional equation of motion were demonstrated using two examples: a cantilever beam, for which the exact static deflection is known, and a flexible pendulum.

  8. Three-dimensional thermal-structural analysis of a swept cowl leading edge subjected to skewed shock-shock interference heating

    Science.gov (United States)

    Polesky, Sandra P.; Dechaumphai, Pramote; Glass, Christopher E.; Pandey, Ajay K.

    1990-01-01

    A three-dimensional flux-based thermal analysis method has been developed and its capability is demonstrated by predicting the transient nonlinear temperature response of a swept cowl leading edge subjected to intense three-dimensional aerodynamic heating. The predicted temperature response from the transient thermal analysis is used in a linear elastic structural analysis to determine thermal stresses. Predicted thermal stresses are compared with those obtained from a two-dimensional analysis which represents conditions along the chord where maximum heating occurs. Results indicate a need for a three-dimensional analysis to predict accurately the leading edge thermal stress response.

  9. A computational method for the solution of one-dimensional ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, one of the newest analytical methods, the new homotopy perturbation method (NHPM), is considered to solve thermoelasticity equations. Results obtained by NHPM, which does not need small parameters, are compared with the numerical results and a very good agreement is found. This method ...

  10. Field analysis of two-dimensional focusing grating

    OpenAIRE

    Borsboom, P.P.; Frankena, H.J.

    1995-01-01

    The method that we have developed [P-P. Borsboom, Ph.D. dissertation (Delft University of Technology, Delft, The Netherlands); P-P. Borsboom and H. J. Frankena, J. Opt. Soc. Am. A 12, 1134–1141 (1995)] is successfully applied to a two-dimensional focusing grating coupler. The field in the focal region has been determined for symmetrical chirped gratings consisting of as many as 124 corrugations. The intensity distribution in the focal region agrees well with the approximate predictions of geo...

  11. Robust Adaptive Lasso method for parameter's estimation and variable selection in high-dimensional sparse models.

    Directory of Open Access Journals (Sweden)

    Abdul Wahid

    Full Text Available High-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue, different penalized regression procedures have been introduced in the literature, but these methods cannot cope with the problem of outliers and leverage points in heavy-tailed high-dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed, which is based on a Pearson-residual weighting scheme. The weight function determines the compatibility of each observation and downweights those that are inconsistent with the assumed model. It is observed that the RAL estimator can correctly select the covariates with non-zero coefficients and can estimate parameters simultaneously, not only in the presence of influential observations, but also in the presence of high multicollinearity. We also discuss the model selection oracle property and the asymptotic normality of the RAL. Simulation findings and real data examples also demonstrate the better performance of the proposed penalized regression approach.

  12. Effective method for construction of low-dimensional models for heat transfer process

    Energy Technology Data Exchange (ETDEWEB)

    Blinov, D.G.; Prokopov, V.G.; Sherenkovskii, Y.V.; Fialko, N.M.; Yurchuk, V.L. [National Academy of Sciences of Ukraine, Kiev (Ukraine). Inst. of Engineering Thermophysics

    2004-12-01

    A low-dimensional model based on the method of proper orthogonal decomposition (POD) and the method of polyargumental systems (MPS) for thermal conductivity problems with a strongly localized source of heat has been presented. The key aspect of these methods is that they avoid a weak point of other projection methods, namely the a priori choice of basis functions. This makes the MPS method and the POD method convenient means to construct low-dimensional models of heat and mass transfer problems. (Author)
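The POD half of such a construction can be sketched with a plain SVD of snapshot data: the basis is computed from the solution itself rather than chosen a priori. This is an illustrative sketch under assumed synthetic snapshots (a spreading 1D temperature profile), not the authors' MPS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)

# Snapshot matrix: columns are the temperature field at 40 "times",
# here a localized heat pulse that gradually widens (assumed data).
snapshots = np.column_stack([
    np.exp(-((x - 0.5) ** 2) / (0.01 * (1 + 0.5 * t))) for t in range(40)
])

# POD = SVD of the snapshot matrix; left singular vectors are the
# empirical basis, ordered by captured "energy" (squared singular values).
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
r = int(np.searchsorted(energy, 0.999)) + 1
print("modes needed for 99.9% energy:", r)

# Low-dimensional reconstruction with r POD modes.
approx = U[:, :r] @ np.diag(S[:r]) @ Vt[:r]
rel_err = np.linalg.norm(approx - snapshots) / np.linalg.norm(snapshots)
print("relative reconstruction error:", rel_err)
```

For smooth fields like this, a handful of modes typically captures nearly all the energy, which is the sense in which the model is "low-dimensional".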

  13. Reduction of dimensionality in dynamic programming-based solution methods for nonlinear integer programming

    Directory of Open Access Journals (Sweden)

    Balasubramanian Ram

    1988-01-01

    Full Text Available This paper suggests a method of formulating any nonlinear integer programming problem, with any number of constraints, as an equivalent single-constraint problem, thus reducing the dimensionality of the associated dynamic programming problem.
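The flavor of such a reformulation can be shown with a small sketch. This is an assumed, generic construction for integer-valued constraint functions, not necessarily the paper's exact transformation: since each g_i takes integer values, the system g_i(x) <= 0 for all i holds iff the single surrogate constraint sum_i max(0, g_i(x)) <= 0 holds, so a dynamic program need only track one constraint dimension.

```python
def single_constraint(gs):
    """Collapse integer-valued constraints g_i(x) <= 0 into one:
    g(x) = sum_i max(0, g_i(x)) is zero exactly when every g_i(x) <= 0."""
    def g(x):
        return sum(max(0, gi(x)) for gi in gs)
    return g

# Two linear constraints on an integer pair (x1, x2).
g1 = lambda x: x[0] + 2 * x[1] - 4      # x1 + 2*x2 <= 4
g2 = lambda x: 3 * x[0] - x[1] - 3      # 3*x1 - x2 <= 3
g = single_constraint([g1, g2])

for x in [(1, 1), (2, 1), (0, 3)]:
    both = g1(x) <= 0 and g2(x) <= 0
    merged = g(x) <= 0
    assert both == merged               # feasibility is preserved
    print(x, "feasible" if merged else "infeasible")
```

The surrogate is generally nonlinear even when the original constraints are linear, which is acceptable in the dynamic programming setting where constraints are handled through state enumeration rather than algebraically.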

  14. 76 FR 18769 - Prospective Grant of Exclusive License: Device and System for Two Dimensional Analysis of...

    Science.gov (United States)

    2011-04-05

    ... for Two Dimensional Analysis of Biomolecules From Tissue and Other Samples AGENCY: National Institutes... services for high throughput parallel analysis and two dimensional analyses of molecules for all uses...- Dimensional Array;'' U.S. Patent Application No. 10/535,521 [HHS Ref. No. E-339-2002/0-US-03], filed May 18...

  15. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G.R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V.E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies (such as efficient data management) supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications, in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  16. Analysis of Precision of Activation Analysis Method

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Nørgaard, K.

    1973-01-01

    The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials, is tested by the statistic T...

  17. Comparative performance of fractal based and conventional methods for dimensionality reduction of hyperspectral data

    Science.gov (United States)

    Mukherjee, Kriti; Bhattacharya, Atanu; Ghosh, Jayanta Kumar; Arora, Manoj K.

    2014-04-01

    Although hyperspectral images contain a wealth of information due to their fine spectral resolution, the information is often redundant. It is therefore expedient to reduce the dimensionality of the data without losing significant information content. The aim of this paper is to show that the proposed fractal-based dimensionality reduction, applied to high-dimensional hyperspectral data, can be a better alternative to some other popular conventional methods when similar classification accuracy is desired at reduced computational complexity. Amongst a number of methods of computing fractal dimension, three have been applied here. The experiments have been performed on two hyperspectral data sets acquired from the AVIRIS sensor.

  18. Applying clustering to statistical analysis of student reasoning about two-dimensional kinematics

    Directory of Open Access Journals (Sweden)

    R. Padraic Springuel

    2007-12-01

    Full Text Available We use clustering, an analysis method not presently common to the physics education research community, to group and characterize student responses to written questions about two-dimensional kinematics. Previously, clustering has been used to analyze multiple-choice data; we analyze free-response data that includes both sketches of vectors and written elements. The primary goal of this paper is to describe the methodology itself; we include a brief overview of relevant results.

  19. Analysis apparatus and method of analysis

    International Nuclear Information System (INIS)

    1976-01-01

    A continuous streaming method developed for the execution of immunoassays is described in this patent. In addition, a suitable apparatus for the method was developed, whereby magnetic particles are automatically employed for the consecutive analysis of a series of liquid samples via the RIA technique.

  20. Investigation on methods for dimensionality reduction on hyperspectral image data

    Directory of Open Access Journals (Sweden)

    Robin T. Clarke

    2005-04-01

    Full Text Available In the present study, we propose a new simple approach to reduce the dimensionality of hyperspectral image data. The basic assumption is that a pixel's curve of spectral response, as defined in the spectral space by the recorded digital numbers (DNs) at the available spectral bands, can be segmented, and each segment can be replaced by a smaller number of statistics, e.g., the mean and the variance, describing the main characteristics of the pixel's spectral response. Results suggest that this procedure can be accomplished without significant loss of information. The DNs at every spectral band can be used to estimate a few statistical parameters that will replace them in a classifier. For the segmentation of the pixel's spectral curve, three sub-optimal algorithms are proposed, which are easy to implement and computationally efficient. Using a top-down strategy, the length of the segments along the spectral curves can be adjusted sequentially. Experiments using a parametric classifier are performed on an AVIRIS data set. Encouraging results have been obtained in terms of classification accuracy and execution time, suggesting the effectiveness of the proposed algorithms.
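The proposed reduction is easy to prototype. Below is a simplified sketch using equal-width segments (the paper's algorithms adjust segment lengths adaptively with a top-down strategy; the function name and the synthetic spectrum are assumptions for illustration):

```python
import numpy as np

def segment_features(spectrum, n_segments):
    """Replace a pixel's spectral response curve by per-segment
    statistics: split the band axis into n_segments pieces and keep
    each piece's mean and variance (equal-width simplification)."""
    segments = np.array_split(np.asarray(spectrum, dtype=float), n_segments)
    feats = []
    for seg in segments:
        feats.extend([seg.mean(), seg.var()])
    return np.array(feats)

# A 224-band AVIRIS-like spectrum reduced to 10 segments = 20 features.
rng = np.random.default_rng(2)
spectrum = np.cumsum(rng.standard_normal(224))   # smooth-ish synthetic curve
features = segment_features(spectrum, 10)
print(features.shape)   # (20,)
```

The classifier then sees 20 statistical features per pixel instead of 224 raw DNs, which is where the computational saving comes from.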

  1. Nonlinear programming analysis and methods

    CERN Document Server

    Avriel, Mordecai

    2012-01-01

    This text provides an excellent bridge between principal theories and concepts and their practical implementation. Topics include convex programming, duality, generalized convexity, analysis of selected nonlinear programs, techniques for numerical solutions, and unconstrained optimization methods.

  2. Chemical methods of rock analysis

    National Research Council Canada - National Science Library

    Jeffery, P. G; Hutchison, D

    1981-01-01

    A practical guide to the methods in general use for the complete analysis of silicate rock material and for the determination of all those elements present in major, minor or trace amounts in silicate...

  3. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Yun Mo

    2014-01-01

    Full Text Available Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outages of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability with respect to the asymmetric matching problem.

  4. Three-dimensional velocity obstacle method for UAV deconflicting maneuvers

    NARCIS (Netherlands)

    Jenie, Y.I.; Van Kampen, E.J.; De Visser, C.C.; Ellerbroek, J.; Hoekstra, J.M.

    2015-01-01

    Autonomous systems are required in order to enable UAVs to conduct self-separation and collision avoidance, especially for flights within the civil airspace system. A method called the Velocity Obstacle Method can provide the necessary situational awareness for UAVs in a dynamic environment, and can

  5. Regularization, Renormalization, and Dimensional Analysis: Dimensional Regularization meets Freshman E&M

    OpenAIRE

    Olness, Fredrick; Scalise, Randall

    2008-01-01

    We illustrate the dimensional regularization technique using a simple problem from elementary electrostatics. We contrast this approach with the cutoff regularization approach, and demonstrate that dimensional regularization preserves the translational symmetry. We then introduce a Minimal Subtraction (MS) and a Modified Minimal Subtraction (MS-Bar) scheme to renormalize the result. Finally, we consider dimensional transmutation as encountered in the case of compact extra-dimensions.

  6. High dimensional reflectance analysis of soil organic matter

    Science.gov (United States)

    Henderson, T. L.; Baumgardner, M. F.; Franzmeier, D. P.; Stott, D. E.; Coster, D. C.

    1992-01-01

    Recent breakthroughs in remote-sensing technology have led to the development of high spectral resolution imaging sensors for observation of earth surface features. This research was conducted to evaluate the effects of organic matter content and composition on narrowband soil reflectance across the visible and reflective infrared spectral ranges. Organic matter from four Indiana agricultural soils, ranging in organic C content from 0.99 to 1.72 percent, was extracted, fractionated, and purified. Six components of each soil were isolated and prepared for spectral analysis. Reflectance was measured in 210 narrow bands in the 400- to 2500-nm wavelength range. Statistical analysis of reflectance values indicated the potential of high dimensional reflectance data in specific visible, near-infrared, and middle-infrared bands to provide information about soil organic C content, but not organic matter composition. These bands also responded significantly to Fe- and Mn-oxide content.

  7. Dimensional reduction of the master equation for stochastic chemical networks: The reduced-multiplane method

    Science.gov (United States)

    Barzel, Baruch; Biham, Ofer; Kupferman, Raz; Lipshtat, Azi; Zait, Amir

    2010-08-01

Chemical reaction networks which exhibit strong fluctuations are common in microscopic systems in which reactants appear in low copy numbers. The analysis of these networks requires stochastic methods, which come in two forms: direct integration of the master equation and Monte Carlo simulations. The master equation becomes infeasible for large networks because the number of equations increases exponentially with the number of reactive species. Monte Carlo methods, which are more efficient in integrating over the exponentially large phase space, also become impractical due to the large amounts of noisy data that need to be stored and analyzed. The recently introduced multiplane method [A. Lipshtat and O. Biham, Phys. Rev. Lett. 93, 170601 (2004)] is an efficient framework for the stochastic analysis of large reaction networks. It is a dimensional reduction method, based on the master equation, which provides a dramatic reduction in the number of equations without compromising the accuracy of the results. The reduction is achieved by breaking the network into a set of maximal fully connected subnetworks (maximal cliques). A separate master equation is written for the reduced probability distribution associated with each clique, with suitable coupling terms between them. This method is highly efficient in the case of sparse networks, in which the maximal cliques tend to be small. However, in dense networks some of the cliques may be rather large and the dimensional reduction is not as effective. Furthermore, the derivation of the multiplane equations from the master equation is tedious and difficult. Here we present the reduced-multiplane method, in which the maximal cliques are broken down into the fundamental two-vertex cliques. The number of equations is further reduced, making the method highly efficient even for dense networks. Moreover, the equations take a simpler form, which can easily be constructed using a diagrammatic procedure.
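
For orientation, here is a minimal sketch of the direct master-equation integration that such reduction methods accelerate: a single-species birth-death process rather than a network (the multiplane equations themselves are not reproduced here, and the rates and cutoff are invented for the demonstration).

```python
# Direct integration of the master equation for a birth-death process,
#   dP(n)/dt = g[P(n-1) - P(n)] + d[(n+1)P(n+1) - n*P(n)],
# truncated at n_max. Illustrative rates; steady-state mean tends to g/d.
g, d = 2.0, 1.0                  # birth and death rates (assumed values)
n_max, dt, steps = 30, 0.002, 10000

P = [0.0] * (n_max + 1)
P[0] = 1.0                       # start with zero copies
for _ in range(steps):
    dP = [0.0] * (n_max + 1)
    for n in range(n_max + 1):
        gain = (g * P[n - 1] if n > 0 else 0.0) \
             + (d * (n + 1) * P[n + 1] if n < n_max else 0.0)
        loss = d * n * P[n] + (g * P[n] if n < n_max else 0.0)
        dP[n] = gain - loss
    P = [p + dt * q for p, q in zip(P, dP)]

mean_n = sum(n * p for n, p in enumerate(P))   # approaches g/d = 2
```

Even this toy case needs `n_max + 1` equations for one species; for a network the state space is a grid over all species, which is exactly the exponential growth the multiplane reduction attacks.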

  8. "Divide and conquer" semiclassical molecular dynamics: A practical method for spectroscopic calculations of high dimensional molecular systems

    Science.gov (United States)

    Di Liberto, Giovanni; Conte, Riccardo; Ceotto, Michele

    2018-01-01

    We extensively describe our recently established "divide-and-conquer" semiclassical method [M. Ceotto, G. Di Liberto, and R. Conte, Phys. Rev. Lett. 119, 010401 (2017)] and propose a new implementation of it to increase the accuracy of results. The technique permits us to perform spectroscopic calculations of high-dimensional systems by dividing the full-dimensional problem into a set of smaller dimensional ones. The partition procedure, originally based on a dynamical analysis of the Hessian matrix, is here more rigorously achieved through a hierarchical subspace-separation criterion based on Liouville's theorem. Comparisons of calculated vibrational frequencies to exact quantum ones for a set of molecules including benzene show that the new implementation performs better than the original one and that, on average, the loss in accuracy with respect to full-dimensional semiclassical calculations is reduced to only 10 wavenumbers. Furthermore, by investigating the challenging Zundel cation, we also demonstrate that the "divide-and-conquer" approach allows us to deal with complex strongly anharmonic molecular systems. Overall the method very much helps the assignment and physical interpretation of experimental IR spectra by providing accurate vibrational fundamentals and overtones decomposed into reduced dimensionality spectra.

  9. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm requires much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To solve this problem, a fast estimation method of space-time two-dimensional positioning parameters based on the Hadamard product is proposed for orthogonal frequency division multiplexing (OFDM) systems, and the Cramer-Rao bound (CRB) is also presented. First, according to the channel frequency-domain response vector of each array element, the channel frequency-domain estimation vector is constructed in Hadamard-product form containing the location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining a closed-form solution with parameter pairing, fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm significantly reduces the computational complexity while achieving estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to that of the 2D-MUSIC algorithm. Moreover, the proposed algorithm has a certain adaptability to multipath environments and effectively improves the speed of acquiring location parameters.

  10. The first integral method to study the (2+1)-dimensional Jaulent ...

    Indian Academy of Sciences (India)

    In this paper, we have presented the applicability of the first integral method for constructing exact solutions of (2+1)-dimensional Jaulent–Miodek equations. The first integral method is a powerful and effective method for solving nonlinear partial differential equations which can be applied to nonintegrable as well as ...

  11. Three-Dimensional Velocity Obstacle Method for Uncoordinated Avoidance Maneuvers of Unmanned Aerial Vehicles

    NARCIS (Netherlands)

    Jenie, Y.I.; van Kampen, E.; de Visser, C.C.; Ellerbroek, J.; Hoekstra, J.M.

    2016-01-01

    This paper proposes a novel avoidance method called the three-dimensional velocity obstacle method. The method is designed for unmanned aerial vehicle applications, in particular to autonomously handle uncoordinated multiple encounters in an integrated airspace, by exploiting the limited space in a

  13. A comparison of dimensionality reduction methods for retrieval of similar objects in simulation data

    Energy Technology Data Exchange (ETDEWEB)

    Cantu-Paz, E; Cheung, S S; Kamath, C

    2003-09-23

    High-resolution computer simulations produce large volumes of data. As a first step in the analysis of these data, supervised machine learning techniques can be used to retrieve objects similar to a query that the user finds interesting. These objects may be characterized by a large number of features, some of which may be redundant or irrelevant to the similarity retrieval problem. This paper presents a comparison of six dimensionality reduction algorithms on data from a fluid mixing simulation. The objective is to identify methods that efficiently find feature subsets that result in high accuracy rates. Our experimental results with single- and multi-resolution data suggest that standard forward feature selection produces the smallest feature subsets in the shortest time.
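
Standard forward feature selection, the best performer in this comparison, can be sketched as follows. The toy data and the leave-one-out 1-nearest-neighbour scorer are illustrative assumptions, not the fluid-mixing data or the classifiers used in the paper.

```python
def loo_1nn_accuracy(X, y, feats):
    """Leave-one-out 1-NN accuracy using only the features in `feats`."""
    correct = 0
    for i, xi in enumerate(X):
        dists = [
            (sum((xi[f] - xj[f]) ** 2 for f in feats), j)
            for j, xj in enumerate(X) if j != i
        ]
        correct += y[min(dists)[1]] == y[i]
    return correct / len(X)

def forward_selection(X, y, n_features):
    """Greedily add the feature that most improves accuracy; stop early."""
    selected, remaining = [], list(range(len(X[0])))
    while remaining:
        best = max(remaining, key=lambda f: loo_1nn_accuracy(X, y, selected + [f]))
        if selected and loo_1nn_accuracy(X, y, selected + [best]) \
                <= loo_1nn_accuracy(X, y, selected):
            break                       # no improvement: stop
        selected.append(best)
        remaining.remove(best)
        if len(selected) == n_features:
            break
    return selected

# Feature 0 separates the two classes; feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [1.0, 4.9], [1.1, 1.2], [0.05, 3.0], [0.95, 0.2]]
y = [0, 0, 1, 1, 0, 1]
chosen = forward_selection(X, y, 2)    # keeps only the informative feature
```

The early-stopping rule is what lets forward selection return small subsets: it adds a feature only while accuracy keeps improving.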

  14. Two-dimensional disruption thermal analysis code DREAM

    International Nuclear Information System (INIS)

    Yamazaki, Seiichiro; Kobayashi, Takeshi; Seki, Masahiro.

    1988-08-01

When a plasma disruption takes place in a tokamak-type fusion reactor, plasma-facing components such as the first wall and the divertor/limiter are subjected to an intense heat load of very high heat flux and short duration. The surface temperature rises rapidly, and melting and evaporation occur, causing a reduction of wall thickness and crack initiation/propagation. Because the lifetime of the components is significantly affected by these processes, transient analysis that accounts for phase changes (melting/evaporation) and radiation heat loss is required in the design of these components. This paper describes the computer code DREAM, developed to perform two-dimensional transient thermal analysis that takes phase changes and radiation into account. The input and output of the code and a sample analysis of a disruption simulation experiment are also reported, and the user's input manual is added as an appendix. Using this code, the profiles and time variations of temperature, together with the melted and evaporated thicknesses of material subjected to an intense heat load, can be obtained. The code also supplies temperature data for elastoplastic analysis with FEM structural analysis codes (ADINA, MARC, etc.) to evaluate thermal stress and crack propagation behavior within the wall materials. (author)
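
The kind of explicit transient-conduction update at the core of such a code can be sketched in one dimension (DREAM itself is two-dimensional and additionally models melting, evaporation, and radiation). All material values, the heat flux, and the boundary treatment here are illustrative, not taken from the paper.

```python
# 1-D explicit finite-difference conduction under a disruption-like heat flux.
k, rho, cp = 100.0, 8000.0, 500.0    # W/m/K, kg/m^3, J/kg/K (assumed)
q_surf = 1.0e7                       # W/m^2 surface heat flux (assumed)
dx, dt, n = 1e-4, 1e-4, 50           # 5 mm slab, 50 nodes
alpha = k / (rho * cp)
assert alpha * dt / dx**2 < 0.5      # explicit stability limit

T = [300.0] * n
for _ in range(2000):                # 0.2 s of heating
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    Tn[0] = Tn[1] + q_surf * dx / k  # imposed surface heat flux
    Tn[-1] = T[-1]                   # far side held at 300 K
    T = Tn
# T[0] now holds the (sharply elevated) surface temperature.
```

A production code would additionally track the melt front and the evaporated thickness once `T[0]` crosses the phase-change temperatures, which is precisely what DREAM adds on top of this conduction core.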

  15. Novel Method of Detecting Movement of the Interference Fringes Using One-Dimensional PSD

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2015-06-01

In this paper, a method of measuring the movement of interference fringes with a one-dimensional position-sensitive detector (PSD) in place of a charge-coupled device (CCD) is presented, and its feasibility is demonstrated with an experimental setup based on the principle of centroid detection. First, the centroid position of the interference fringes in a fiber Mach-Zehnder (M-Z) interferometer is derived in theory, showing that it offers higher resolution and sensitivity. According to the physical characteristics and principles of the PSD, a simulation of the interference fringes' phase difference in fiber M-Z interferometers and of the PSD output is carried out. Comparing the simulation results with the relationship between phase differences and centroid positions in fiber M-Z interferometers leads to the conclusion that the PSD output for the interference fringes is still the centroid position. Based on extensive measurements, the best resolution achieved by the system is 5.15625 μm. Finally, the detection system is evaluated through a setup error analysis and an ultra-narrow-band filter structure. The filter is configured with a one-dimensional photonic crystal containing positive- and negative-refraction material, which can eliminate background light in the PSD detection experiment. The detection system has a simple structure, good stability, and high precision, and easily performs remote measurements, which makes it potentially useful in small-deformation tests of materials, refractivity measurements of optical media, and optical wavefront detection.
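
The centroid-detection principle the PSD relies on is simple to state: the detector reports the intensity-weighted mean position of the light falling on it, which shifts as the fringe pattern moves. A hedged numerical sketch, with an invented cosine fringe profile standing in for the real interferometer output:

```python
import math

def centroid(positions, intensities):
    """Intensity-weighted mean position -- what a 1-D PSD effectively reports."""
    return sum(x * i for x, i in zip(positions, intensities)) / sum(intensities)

x = [i * 0.01 for i in range(101)]        # detector axis, 0..1 (arbitrary units)
phase = 0.3                               # fringe position parameter (assumed)
fringe = [1.0 + math.cos(2 * math.pi * (xi - phase)) for xi in x]
c = centroid(x, fringe)                   # shifts as the fringe phase moves
```

For a full fringe period the centroid shifts smoothly with the phase, so tracking `c` over time recovers the fringe movement without resolving individual pixels as a CCD would.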

  16. Fast and reliable analysis of molecular motion using proximity relations and dimensionality reduction.

    Science.gov (United States)

    Plaku, Erion; Stamati, Hernán; Clementi, Cecilia; Kavraki, Lydia E

    2007-06-01

The analysis of molecular motion starting from extensive sampling of molecular configurations remains an important and challenging task in computational biology. Existing methods require a significant amount of time to extract the most relevant motion information from such data sets. In this work, we provide a practical tool for molecular motion analysis. The proposed method builds upon the recent ScIMAP (Scalable Isomap) method, which, by using proximity relations and dimensionality reduction, has been shown to reliably extract from simulation data a few parameters that capture the main, linear and/or nonlinear, modes of motion of a molecular system. The results we present in the context of protein folding reveal that the proposed method characterizes the folding process essentially as well as ScIMAP. At the same time, by projecting the simulation data and computing proximity relations in a low-dimensional Euclidean space, it renders such analysis computationally practical. In many instances, the proposed method reduces the computational cost from several CPU months to just a few CPU hours, making it possible to analyze extensive simulation data in a matter of a few hours using only a single processor. These results establish the proposed method as a reliable and practical tool for analyzing motions of considerably large molecular systems and proteins with complex folding mechanisms.
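
The computational win comes from computing proximity relations in a low-dimensional space rather than the original high-dimensional one. A hedged illustration of that core idea using a random linear projection (not the actual ScIMAP/Isomap embedding of the paper; dimensions and data are invented):

```python
import random

random.seed(0)
D, d = 200, 10                     # original and reduced dimensionality (assumed)
# Random projection matrix scaled so squared distances are preserved on average.
R = [[random.gauss(0, 1 / d ** 0.5) for _ in range(D)] for _ in range(d)]

def project(v):
    return [sum(r_i * v_i for r_i, v_i in zip(row, v)) for row in R]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

a = [random.gauss(0, 1) for _ in range(D)]
b = [random.gauss(0, 1) for _ in range(D)]
# Distance ratio is close to 1 on average, at a 20x cheaper per-pair cost.
ratio = dist2(project(a), project(b)) / dist2(a, b)
```

Each pairwise distance now costs O(d) instead of O(D), which is what turns CPU-months of proximity computation into CPU-hours when the number of sampled configurations is large.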

  17. Classification Methods for High-Dimensional Genetic Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 34, č. 1 (2014), s. 10-18 ISSN 0208-5216 Institutional support: RVO:67985807 Keywords : multivariate statistics * classification analysis * shrinkage estimation * dimension reduction * data mining Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.646, year: 2014

  18. Higher dimensional kernel methods | Osemwenkhae | Journal of the ...

    African Journals Online (AJOL)

    The multivariate kernel density estimator (MKDE) for the analysis of data in more than one dimension is presented. This removes the cumbersome nature associated with the interpretation of multivariate results when compared with most common multivariate schemes. The effect of varying the window width in MKDE with the ...
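
A minimal sketch of the estimator in question, using a product Gaussian kernel with a common window width h (the data and h below are illustrative assumptions):

```python
import math

def mkde(point, data, h):
    """Multivariate KDE with a product Gaussian kernel and window width h."""
    d = len(point)
    norm = (2 * math.pi) ** (d / 2) * h ** d
    total = 0.0
    for x in data:
        u2 = sum((p - xi) ** 2 for p, xi in zip(point, x)) / h ** 2
        total += math.exp(-0.5 * u2) / norm
    return total / len(data)

data = [(0.0, 0.0), (0.1, -0.1), (-0.2, 0.1), (2.0, 2.0)]
density_near_cluster = mkde((0.0, 0.0), data, h=0.5)  # high: inside the cluster
density_far = mkde((5.0, 5.0), data, h=0.5)           # low: far from all points
```

Varying `h` trades bias for variance: a small window resolves fine structure but is noisy, a large one oversmooths, which is exactly the window-width effect the abstract refers to.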

  19. Stability analysis of lower dimensional gravastars in noncommutative geometry

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Ayan [Jadavpur University, Department of Mathematics, Kolkata (India); Hansraj, Sudan [University of KwaZulu-Natal, Astrophysics and Cosmology Research Unit, School of Mathematics, Statistics and Computer Science, Durban (South Africa)

    2016-11-15

    The Banados et al. (Phys. Rev. Lett 69:1849, 1992), black hole solution is revamped from the Einstein field equations in (2 + 1)-dimensional anti-de Sitter spacetime, in a context of noncommutative geometry (Phys. Rev. D 87:084014, 2013). In this article, we explore the exact gravastar solutions in three-dimensional anti-de Sitter space given in the same geometry. As a first step we derive BTZ solution assuming the source of energy density as point-like structures in favor of smeared objects, where the particle mass M, is diffused throughout a region of linear size √(α) and is described by a Gaussian function of finite width rather than a Dirac delta function. We matched our interior solution to an exterior BTZ spacetime at a junction interface situated outside the event horizon. Furthermore, a stability analysis is carried out for the specific case when χ < 0.214 under radial perturbations about the static equilibrium solutions. To give theoretical support we are also trying to explore their physical properties and characteristics. (orig.)

  20. Dimensionality of the UWES-17: An item response modelling analysis

    Directory of Open Access Journals (Sweden)

    Deon P. de Bruin

    2013-10-01

Research purpose: The main focus of this study was to use the Rasch model to provide insight into the dimensionality of the UWES-17, and to assess whether work engagement should be interpreted as one single overall score, three separate scores, or a combination. Motivation for the study: It is unclear whether a summative score is more representative of work engagement or whether scores are more meaningful when interpreted for each dimension separately. Previous work relied on confirmatory factor analysis; the potential of item response models has not been tapped. Research design: A quantitative cross-sectional survey design approach was used. Participants, 2429 employees of a South African Information and Communication Technology (ICT) company, completed the UWES-17. Main findings: Findings indicate that work engagement should be treated as a unidimensional construct: individual scores should be interpreted in a summative manner, giving a single global score. Practical/managerial implications: Users of the UWES-17 may interpret a single, summative score for work engagement. Findings of this study should also contribute towards standardising UWES-17 scores, allowing meaningful comparisons to be made. Contribution/value-add: The findings will benefit researchers, organisational consultants and managers. Clarity on dimensionality and interpretation of work engagement will assist researchers in future studies. Managers and consultants will be able to make better-informed decisions when using work engagement data.
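
For readers unfamiliar with the model used here, the Rasch model gives the probability that a person with trait level theta endorses an item of difficulty b; unidimensionality means one theta per person suffices for all 17 items. A hedged sketch with invented parameter values (not UWES-17 estimates):

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected summative score under a single latent dimension."""
    return sum(rasch_p(theta, b) for b in difficulties)

items = [-1.0, -0.5, 0.0, 0.5, 1.0]        # hypothetical item difficulties
score_low = expected_score(-1.0, items)
score_high = expected_score(1.0, items)    # higher engagement, higher score
```

Under a unidimensional fit, the expected total score is a monotone function of the single trait level, which is what justifies interpreting one summative UWES-17 score.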

  1. Analysis of three-dimensional transient seepage into ditch drains ...

    Indian Academy of Sciences (India)

    Ratan Sarmah

… dimensional solutions to the problem are actually valid not for a field of finite size but for an infinite one only. Keywords: analytical models; three-dimensional ponded ditch drainage; transient seepage; variable ponding; hydraulic conductivity …

  2. A numerical method for two-dimensional anisotropic transport problem in cylindrical geometry

    International Nuclear Information System (INIS)

    Du Mingsheng; Feng Tiekai; Fu Lianxiang; Cao Changshu; Liu Yulan

    1988-01-01

The authors deal with the triangular-mesh discontinuous finite element method for solving the time-dependent anisotropic neutron transport problem in two-dimensional cylindrical geometry. An a priori estimate of the numerical solution is given and stability is proved. The authors have computed a two-dimensional anisotropic neutron transport problem and a tungsten-carbide critical assembly problem using the numerical method. In comparison with the DSN method and with experimental results obtained by others, the method is satisfactory.

3. Solution of (3+1)-Dimensional Nonlinear Cubic Schrodinger Equation by Differential Transform Method

    Directory of Open Access Journals (Sweden)

    Hassan A. Zedan

    2012-01-01

A four-dimensional differential transform method is introduced and its fundamental theorems are defined for the first time. Moreover, as an application of the four-dimensional differential transform, exact solutions of a nonlinear system of partial differential equations are investigated. The results of the present method compare very well with the analytical solution of the system. The differential transform method can easily be applied to linear or nonlinear problems and reduces the size of the computational work. With this method, exact solutions may be obtained without cumbersome work, and it is a useful tool for analytical and numerical solutions.
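
The one-dimensional case conveys the definition that the paper generalizes to four variables w(x, y, z, t):

```latex
% One-dimensional differential transform about x_0 (the paper extends this
% definition to four independent variables):
\[
  F(k) = \frac{1}{k!}\left[\frac{\mathrm{d}^{k} f(x)}{\mathrm{d}x^{k}}\right]_{x = x_0},
  \qquad
  f(x) = \sum_{k=0}^{\infty} F(k)\,(x - x_0)^{k},
\]
% so a differential equation becomes an algebraic recurrence in the
% transformed coefficients F(k), which is why the method avoids
% cumbersome symbolic work.
```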

  4. hp Spectral element methods for three dimensional elliptic problems ...

    Indian Academy of Sciences (India)

… exponential convergence. A method for obtaining a numerical solution to exponential accuracy for elliptic problems on non-smooth domains in R² was first … The organization of this paper is as follows. In §2, we introduce the problem under …

  5. Generalized similarity method in unsteady two-dimensional MHD ...

    African Journals Online (AJOL)

… increase lift or attempts to control the position of the boundary-layer separation point. For a long time the following methods were used for boundary-layer control: increasing the boundary-layer velocity, boundary-layer suction, second gas injection, profile laminarization, and body cooling. Interest in the effect of an outer magnetic field on …

  6. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

To address the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and greater computational efficiency, it possesses such advantages as the accurate isogeometric representation of the object boundary and the unity of geometric and analysis modeling. Meanwhile, the selection of B-spline basis functions and the grid definition are studied, and a stable discretization format satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, owing to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  7. A method for measuring three-dimensional mandibular kinematics in vivo using single-plane fluoroscopy

    Science.gov (United States)

    Chen, C-C; Lin, C-C; Chen, Y-J; Hong, S-W; Lu, T-W

    2013-01-01

    Objectives Accurate measurement of the three-dimensional (3D) motion of the mandible in vivo is essential for relevant clinical applications. Existing techniques are either of limited accuracy or require the use of transoral devices that interfere with jaw movements. This study aimed to develop further an existing method for measuring 3D, in vivo mandibular kinematics using single-plane fluoroscopy; to determine the accuracy of the method; and to demonstrate its clinical applicability via measurements on a healthy subject during opening/closing and chewing movements. Methods The proposed method was based on the registration of single-plane fluoroscopy images and 3D low-radiation cone beam CT data. It was validated using roentgen single-plane photogrammetric analysis at static positions and during opening/closing and chewing movements. Results The method was found to have measurement errors of 0.1 ± 0.9 mm for all translations and 0.2° ± 0.6° for all rotations in static conditions, and of 1.0 ± 1.4 mm for all translations and 0.2° ± 0.7° for all rotations in dynamic conditions. Conclusions The proposed method is considered an accurate method for quantifying the 3D mandibular motion in vivo. Without relying on transoral devices, the method has advantages over existing methods, especially in the assessment of patients with missing or unstable teeth, making it useful for the research and clinical assessment of the temporomandibular joint and chewing function. PMID:22842637

  8. [Parallel factor analysis as an analysis technique for the ratio of three-dimensional fluorescence peak in Taihu Lake].

    Science.gov (United States)

    Zhu, Peng; Liao, Hai-qing; Hua, Zu-lin; Xie, Fa-zhi; Tang, Zhi; Zhang, Liang

    2012-01-01

The present paper proposes a new method for finding the ratio of three-dimensional fluorescence peaks. First, the excitation-emission fluorescence matrices of water samples were treated with parallel factor analysis (PARAFAC), and fluorescence peak intensities and peak ratios were then obtained from the PARAFAC model. In the model, the same fluorescence peaks of different water samples lie at the same excitation-emission wavelengths, and the overlap of different fluorescence peaks within the same water sample is reduced. In an analysis of regional characteristics in Taihu Lake, the ratio of factor scores and the ratio of fluorescence peaks showed a strong correlation.

9. Dimensional analysis to transform the differential equations in partial derivatives in the theory of heat transmission into ordinary ones

    International Nuclear Information System (INIS)

    Diaz Sanchidrian, C.

    1989-01-01

The present paper applies dimensional analysis with spatial discrimination to transform the partial differential equations of the theory of heat transmission into ordinary ones. The effectiveness of the method is comparable to that of methods based on transformations of uni- or multi-parametric groups, with the advantage of being more direct and simple. (Author)
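
A standard worked example of this idea (a textbook case, not necessarily the one treated in the paper) is the reduction of the one-dimensional heat equation to an ordinary differential equation through a dimensionless similarity variable:

```latex
% The heat equation  \partial_t u = \alpha\,\partial_{xx} u  admits the
% dimensionless combination
\[
  \eta = \frac{x}{2\sqrt{\alpha t}} ,
\]
% and writing u(x,t) = f(\eta) collapses the PDE to the ordinary equation
\[
  f''(\eta) + 2\eta\, f'(\eta) = 0 ,
\]
% whose solution f(\eta) = c_1 + c_2\,\operatorname{erf}(\eta) is the
% classical error-function temperature profile.
```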

10. Generalized Kudryashov method for solving some (3+1)-dimensional nonlinear evolution equations

    Directory of Open Access Journals (Sweden)

    Md. Shafiqul Islam

    2015-06-01

In this work, we have applied the generalized Kudryashov method to obtain exact travelling-wave solutions of the (3+1)-dimensional Jimbo-Miwa (JM) equation, the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation and the (3+1)-dimensional Zakharov-Kuznetsov (ZK) equation. The attained solutions show distinct physical configurations. The constraints that guarantee the existence of specific solutions are investigated. These solutions may be useful for elucidating specific nonlinear physical phenomena in genuinely nonlinear dynamical systems.
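
The core ansatz shared by the Kudryashov family of methods can be stated briefly (the travelling-wave variable shown is a representative choice, not necessarily the one used for each equation in the paper):

```latex
% With a travelling-wave variable, e.g. \xi = x + y + z - c\,t, reducing the
% PDE to an ODE, solutions are sought as rational expressions in
\[
  Q(\xi) = \frac{1}{1 + \mathrm{e}^{\xi}} ,
  \qquad
  Q'(\xi) = Q^{2}(\xi) - Q(\xi),
\]
% and substituting the ansatz and matching powers of Q yields the algebraic
% system that determines the coefficients (and any existence constraints).
```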

  11. Seismic response of three-dimensional rockfill dams using the Indirect Boundary Element Method

    International Nuclear Information System (INIS)

    Sanchez-Sesma, Francisco J; Arellano-Guzman, Mauricio; Perez-Gavilan, Juan J; Suarez, Martha; Marengo-Mogollon, Humberto; Chaillat, Stephanie; Jaramillo, Juan Diego; Gomez, Juan; Iturraran-Viveros, Ursula; Rodriguez-Castellanos, Alejandro

    2010-01-01

The Indirect Boundary Element Method (IBEM) is used to compute the seismic response of a three-dimensional rockfill dam model. The IBEM is based on a single-layer integral representation of elastic fields in terms of the full-space Green function, or fundamental solution of the equations of dynamic elasticity, and the associated force densities along the boundaries. The method has been applied to simulate ground motion in several configurations of surface geology; moreover, the IBEM has been used as a benchmark to test other procedures. We compute the seismic response of a three-dimensional rockfill dam model placed within a canyon that constitutes an irregularity on the surface of an elastic half-space. The rockfill is also assumed elastic, with hysteretic damping to account for energy dissipation. Various types of incident waves are considered in order to analyze the physical characteristics of the response: symmetries, amplifications, impulse response and the like. Computations are performed in the frequency domain and lead to the time response via Fourier analysis. In the present implementation a symmetrical model is used to test symmetries. The boundaries of each region are discretized into boundary elements whose size depends on the shortest wavelength; typically, six boundary segments per wavelength are used. Usually, the seismic response of rockfill dams is simulated using either finite elements (FEM) or finite differences (FDM). In most applications, commercial tools that combine features of these methods are used to assess the seismic response of the system for a given motion at the base of the model. However, in order to consider realistic excitation by seismic waves with different incidence angles and azimuths, we explore the IBEM.

  12. Automated three-dimensional X-ray analysis using a dual-beam FIB

    International Nuclear Information System (INIS)

    Schaffer, Miroslava; Wagner, Julian; Schaffer, Bernhard; Schmied, Mario; Mulders, Hans

    2007-01-01

We present a fully automated method for three-dimensional (3D) elemental analysis, demonstrated on a ceramic sample of chemistry (Ca)MgTiOₓ. The specimen is serially sectioned by a focused ion beam (FIB) microscope, and energy-dispersive X-ray spectrometry (EDXS) is used for elemental analysis of each cross-section created. A 3D elemental model is reconstructed from the stack of two-dimensional (2D) data. This work concentrates on issues arising from process automation, the large sample volume of approximately 17 × 17 × 10 μm³, and the insulating nature of the specimen. A new routine for post-acquisition correction of different drift effects is demonstrated. Furthermore, it is shown that EDXS data may be erroneous for specimens containing voids, and that back-scattered electron images have to be used to correct for these errors.

  13. Sparse kernel entropy component analysis for dimensionality reduction of neuroimaging data.

    Science.gov (United States)

    Jiang, Qikun; Shi, Jun

    2014-01-01

Neuroimaging data typically have extremely high dimensionality, so dimensionality reduction is commonly used to extract discriminative features. Kernel entropy component analysis (KECA) is a newly developed data transformation method whose key idea is to preserve as much as possible of the estimated Renyi entropy of the input-space data set via a kernel-based estimator. Despite its good performance, KECA suffers from low computational efficiency for large-scale data. In this paper, we propose a sparse KECA (SKECA) algorithm with a recursive divide-and-conquer solution and apply it to dimensionality reduction of neuroimaging data for classification of Alzheimer's disease (AD). We compared SKECA with KECA, principal component analysis (PCA), kernel PCA (KPCA) and sparse KPCA. The experimental results indicate that the proposed SKECA is superior to all the other algorithms when extracting discriminative features from neuroimaging data for AD classification.
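
The entropy-preservation idea that distinguishes KECA from KPCA can be sketched on a toy problem: components are scored not by eigenvalue alone but by their contribution lambda * (1ᵀe)² to the Renyi entropy estimate. This hedged illustration finds only the leading kernel eigenpair by power iteration and is not the SKECA solver of the paper; data and kernel width are invented.

```python
import math

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]    # two toy clusters (1-D for brevity)
sigma = 1.0
N = len(data)
# Gaussian kernel (Gram) matrix -- symmetric positive semi-definite.
K = [[math.exp(-(a - b) ** 2 / (2 * sigma ** 2)) for b in data] for a in data]

# Power iteration for the dominant eigenpair.
v = [1.0] * N
for _ in range(200):
    w = [sum(K[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
lam = sum(v[i] * sum(K[i][j] * v[j] for j in range(N)) for i in range(N))

# KECA's ranking criterion for this component: its Renyi-entropy contribution.
entropy_contribution = lam * sum(v) ** 2
```

In full KECA all eigenpairs are scored this way and the top-k by entropy contribution are kept, which can select different components than KPCA's top-k by eigenvalue.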

  14. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification are important parts of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities for 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity to the time step relative to that of the other physical parameters, the simulation can be run at optimized time steps without affecting confidence in the physical-parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is used to verify the method; another isothermal flow test problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
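    The forward sensitivity idea can be sketched on a toy problem (a hypothetical scalar model, not the paper's 1-D flow equations): for du/dt = f(u, p), the sensitivity S = du/dp is integrated alongside the state using dS/dt = (df/du) S + df/dp.

```python
import numpy as np

# Forward sensitivity for the toy ODE du/dt = -p*u, u(0) = 1.
# Differentiating the ODE with respect to p gives the sensitivity
# equation dS/dt = -p*S - u, with S(0) = 0, integrated in lockstep.
def forward_sensitivity(p=2.0, t_end=1.0, n_steps=10000):
    dt = t_end / n_steps
    u, S = 1.0, 0.0
    for _ in range(n_steps):
        # explicit Euler step for state and sensitivity together
        u, S = u + dt * (-p * u), S + dt * (-p * S - u)
    return u, S

p, t = 2.0, 1.0
u, S = forward_sensitivity(p, t)
print(u, np.exp(-p * t))        # numerical vs exact solution exp(-p*t)
print(S, -t * np.exp(-p * t))   # numerical vs exact sensitivity -t*exp(-p*t)
```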

  15. Automated three-dimensional analysis of particle measurements using an optical profilometer and image analysis software.

    Science.gov (United States)

    Bullman, V

    2003-07-01

    The automated collection of topographic images from an optical profilometer coupled with existing image analysis software offers the unique ability to quantify three-dimensional particle morphology. Optional software available with most optical profilers permits automated collection of adjacent topographic images of particles dispersed onto a suitable substrate. Particles are recognized in the image as a set of continuous pixels with grey-level values above the grey level assigned to the substrate, whereas particle height or thickness is represented in the numerical differences between these grey levels. These images are loaded into remote image analysis software where macros automate image processing, and then distinguish particles for feature analysis, including standard two-dimensional measurements (e.g. projected area, length, width, aspect ratios) and three-dimensional measurements (e.g. maximum height, mean height). Feature measurements from each calibrated image are automatically added to cumulative databases and exported to a commercial spreadsheet or statistical program for further data processing and presentation. An example is given that demonstrates the superiority of quantitative three-dimensional measurements by optical profilometry and image analysis in comparison with conventional two-dimensional measurements for the characterization of pharmaceutical powders with plate-like particles.
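    The measurement step described above can be sketched on a synthetic topographic image (heights and thresholds are illustrative; real data would come from the profilometer): pixels above the substrate grey level form the particle, from which 2D and 3D features are computed.

```python
import numpy as np

# Synthetic topographic image: height values on a 100 x 100 grid,
# with one plate-like particle 2.5 units above the substrate.
topo = np.zeros((100, 100))
topo[10:30, 10:40] = 2.5

substrate_level = 0.5
mask = topo > substrate_level          # particle = pixels above substrate

area = int(mask.sum())                 # projected area (pixels)
rows, cols = np.nonzero(mask)
length = cols.max() - cols.min() + 1   # bounding-box extents
width = rows.max() - rows.min() + 1
max_height = topo[mask].max()          # 3D features
mean_height = topo[mask].mean()

print(area, length, width, max_height, mean_height)
# 600 30 20 2.5 2.5
```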

  16. Three-dimensional analysis of mandibular growth and tooth eruption

    DEFF Research Database (Denmark)

    Krarup, S.; Darvann, Tron Andre; Larsen, Per

    2005-01-01

    Normal and abnormal jaw growth and tooth eruption are topics of great importance for several dental and medical disciplines. Thus far, clinical studies on these topics have used two-dimensional (2D) radiographic techniques. The purpose of the present study was to analyse normal mandibular growth and tooth eruption in three dimensions based on computed tomography (CT) scans, extending the principles of mandibular growth analysis proposed by Bjork in 1969 from two to three dimensions. As longitudinal CT data from normal children are not available (for ethical reasons), CT data from children with Apert syndrome were employed, because it has been shown that the mandible in Apert syndrome is unaffected by the malformation, and these children often have several craniofacial CT scans performed during childhood for planning of cranial and midface surgery and for follow-up after surgery. A total of 49...

  17. Electromagnetic two-dimensional analysis of trapped-ion eigenmodes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D.; Rewoldt, G.

    1984-11-01

    A two-dimensional electromagnetic analysis of the trapped-ion instability for the tokamak case with β ≠ 0 has been made, based on previous work in the electrostatic limit. The quasineutrality condition and the component of Ampere's law along the equilibrium magnetic field are solved for the perturbed electrostatic potential and the component of the perturbed vector potential along the equilibrium magnetic field. The general integro-differential equations are converted into a matrix eigenvalue-eigenfunction problem by expanding in cubic B-spline finite elements in the minor radius and in Fourier harmonics in the poloidal angle. A model MHD equilibrium with circular, concentric magnetic surfaces and large aspect ratio is used, which is consistent with our assumption that β << 1. The effect on the trapped-ion mode of including these electromagnetic extensions to the calculation is considered, and the temperature (and β) scaling of the mode frequency is shown and discussed.

  18. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram of natural color images. The observed behaviors are compared with those of reference models with known multifractal properties. We use for this purpose synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organizations stemming from the histograms of natural color images. This type of characterization of colorimetric properties can be helpful to various tasks of digital image processing, for instance modeling, classification, and indexing.

  19. The exploration machine: a novel method for analyzing high-dimensional data in computer-aided diagnosis

    Science.gov (United States)

    Wismüller, Axel

    2009-02-01

    Purpose: To develop, test, and evaluate a novel unsupervised machine learning method for computer-aided diagnosis and analysis of multidimensional data, such as biomedical imaging data. Methods: We introduce the Exploration Machine (XOM) as a method for computing low-dimensional representations of high-dimensional observations. XOM systematically inverts functional and structural components of topology-preserving mappings. By this trick, it can contribute to both structure-preserving visualization and data clustering. We applied XOM to the analysis of whole-genome microarray imaging data, comprising 2467 79-dimensional gene expression profiles of Saccharomyces cerevisiae, and to model-free analysis of functional brain MRI data by unsupervised clustering. For both applications, we performed quantitative comparisons to results obtained by established algorithms. Results: Genome data: Absolute (relative) Sammon error values were 5.91 × 10^5 (1.00) for XOM, 6.50 × 10^5 (1.10) for Sammon's mapping, 6.56 × 10^5 (1.11) for PCA, and 7.24 × 10^5 (1.22) for Self-Organizing Map (SOM). Computation times were 72, 216, 2, and 881 seconds for XOM, Sammon, PCA, and SOM, respectively. Functional MRI data: Areas under ROC curves for detection of task-related brain activation were 0.984 ± 0.03 for XOM, 0.983 ± 0.02 for Minimal-Free-Energy VQ, and 0.979 ± 0.02 for SOM. Conclusion: For both multidimensional imaging applications, i.e. gene expression visualization and functional MRI clustering, XOM yields competitive results when compared to established algorithms. Its versatility in simultaneously contributing to dimensionality reduction and data clustering qualifies XOM to serve as a useful novel method for the analysis of multidimensional data, such as biomedical image data in computer-aided diagnosis.
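    For reference, the Sammon error quoted in the results measures how well an embedding preserves pairwise distances; a minimal generic implementation (not the authors' code) is:

```python
import numpy as np

def sammon_error(X_high, X_low):
    """Sammon stress: normalized squared mismatch between pairwise
    distances in the original space and in the embedding."""
    def pdist(X):
        diff = X[:, None, :] - X[None, :, :]
        return np.sqrt(np.sum(diff ** 2, axis=-1))
    iu = np.triu_indices(len(X_high), k=1)     # upper-triangular pairs
    D, d = pdist(X_high)[iu], pdist(X_low)[iu]
    return np.sum((D - d) ** 2 / D) / np.sum(D)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))   # "high-dimensional" data
Y = X[:, :2]                    # a crude 2-D representation
print(sammon_error(X, X))       # 0.0 for a perfect embedding
print(sammon_error(X, Y) > 0)   # True
```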

  20. A Diminution Method of Large Multi-dimensional Data Retrievals

    Directory of Open Access Journals (Sweden)

    Nushwan Yousif Baithoon

    2010-01-01

    Full Text Available The intention of this work is to introduce a method of compressing data at the transmitter (source) and expanding it at the receiver (destination). The amount of data compression is directly related to data dimensionality; hence, for example, an N by N RGB image file is considered to be an M-D image data file with M = 3. The amount of scatter in an M-D file is captured by the covariance matrix, which is calculated, along with the average value of each dimension, to represent the signature or code for each individual data set to be sent by the source. At the destination, random sets can be tested against a particular received signature so that only one set is acceptable, thus giving the corresponding intended set to be received. Sound results are obtained depending on the constraints being implemented. These constraints are user tolerant in so far as how well tuned or how rapidly the information is to be processed for data retrieval. The proposed method is well suited to application areas where both source and destination communicate using the same sets of data files at each end. Such a technique is also made feasible by the availability of fast microprocessors and frame-grabbers.
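    The signature idea can be sketched as follows (a minimal interpretation of the abstract; names and tolerances are illustrative): an M-D data set is summarized by its per-dimension means and M x M covariance matrix, and the receiver accepts only a candidate set whose signature matches.

```python
import numpy as np

def signature(data):
    """Signature of an M-dimensional data set: per-dimension means and
    the M x M covariance matrix (sent instead of the full data)."""
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    return mean, cov

# Example: an N x N RGB image treated as an M-D data set with M = 3.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
pixels = image.reshape(-1, 3)
mean, cov = signature(pixels)
print(mean.shape, cov.shape)  # (3,) (3, 3)

# At the destination, a candidate set is accepted only if its
# signature matches the transmitted one within a tolerance.
cand_mean, cand_cov = signature(pixels)
match = np.allclose(mean, cand_mean) and np.allclose(cov, cand_cov)
print(match)  # True
```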

  1. Evaluation of motion compensation method for assessing the gastrointestinal motility using three dimensional endoscope

    Science.gov (United States)

    Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Fujinaga, Tetsuji; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2016-03-01

    Functional gastrointestinal disorders (FGID) are the most common gastrointestinal disorders. The term "functional" is generally applied to disorders in which there are no structural abnormalities. Gastrointestinal dysmotility is one of the several mechanisms that have been proposed for the pathogenesis of FGID and is usually examined by manometry, a pressure test. There have been no attempts to examine gastrointestinal dysmotility by endoscopy. We have proposed an imaging system for the assessment of gastric motility using a three-dimensional endoscope. After newly developing a three-dimensional endoscope and constructing a wave-simulated model, we established a method of extracting three-dimensional contraction waves from a three-dimensional profile of the wave-simulated model obtained with the endoscope. In that study, the endoscope and the wave-simulated model were fixed to the ground. In a clinical setting, however, it is hard for endoscopists to keep the endoscope still. Moreover, the stomach moves under the influence of breathing. Thus, three-dimensional registration of the position between the endoscope and the gastric wall is necessary for accurate assessment of gastrointestinal motility. In this paper, we propose a motion compensation method using three-dimensional scene flow. The scene flow of feature points, calculated from images obtained in a time series, enables three-dimensional registration of the position between the endoscope and the gastric wall. We confirmed the validity of the proposed method first with an object of known movement and then with the wave-simulated model.

  2. Three dimensional wavefield modeling using the pseudospectral method; Pseudospectral ho ni yoru sanjigen hadoba modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T.; Matsuoka, T. [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T. [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    Discussed in this report is wavefield simulation for 3-dimensional seismic surveys. With exploration targets growing deeper and more complicated in structure, survey methods are increasingly 3-dimensional. There are several modelling methods for numerical calculation of 3-dimensional wavefields, such as the finite-difference method and the pseudospectral method, all of which demand an exorbitantly large memory and long calculation time, and are costly. Such methods have of late become feasible, however, thanks to the advent of the parallel computer. Compared with the finite-difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs the full wavefield, just like the finite-difference method, and does not suffer from numerical dispersion of the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The object domain is divided among the processors, and each processor takes care only of its share, so that the parallel computation as a whole achieves very high speed. By use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
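    The core of the pseudospectral method can be illustrated in one dimension (a minimal sketch, not the 3-D parallel implementation described above): spatial derivatives are computed in the wavenumber domain via the FFT, and time is advanced with a simple leapfrog scheme.

```python
import numpy as np

# Pseudospectral solution of the 1-D wave equation u_tt = c^2 u_xx
# on a periodic domain, with a standing-wave initial condition.
n, L, c, dt = 256, 2 * np.pi, 1.0, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers

u_prev = np.sin(x)   # u at t = -dt (zero initial velocity)
u = np.sin(x)        # u at t = 0

def uxx(u):
    # second spatial derivative via the FFT: multiply by (ik)^2 = -k^2
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

# leapfrog time stepping to t = 1.0
for _ in range(1000):
    u_next = 2 * u - u_prev + (c * dt) ** 2 * uxx(u)
    u_prev, u = u, u_next

# exact standing-wave solution is sin(x) * cos(c * t)
t = 1000 * dt
err = np.max(np.abs(u - np.sin(x) * np.cos(c * t)))
print(err)
```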

  3. Spectral (Finite) Volume Method for One Dimensional Euler Equations

    Science.gov (United States)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Consider a mesh of unstructured triangular cells. Each cell is called a Spectral Volume (SV), denoted by S_i, which is further partitioned into subcells named Control Volumes (CVs), denoted by C(sub i,j). To represent the solution as a polynomial of degree m in two dimensions (2D) we need N = (m+1)(m+2)/2 pieces of independent information, or degrees of freedom (DOFs). The DOFs in an SV method are the volume-averaged mean variables at the N CVs. For example, to build a quadratic reconstruction in 2D, we need at least (2+1)(2+2)/2 = 6 DOFs. There are numerous ways of partitioning an SV, and not every partition is admissible, in the sense that the partition may not be capable of producing a degree m polynomial. Once N mean solutions in the CVs of an SV are given, a unique polynomial reconstruction can be obtained.
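    The DOF count above follows from the number of monomials x^a y^b with a + b <= m; a quick check:

```python
# Degrees of freedom for a degree-m polynomial in 2D: N = (m+1)(m+2)/2,
# i.e. the number of monomials x^a y^b with a + b <= m.
def dofs_2d(m):
    return (m + 1) * (m + 2) // 2

print([dofs_2d(m) for m in (1, 2, 3)])  # [3, 6, 10]
```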

  4. Method and apparatus for two-dimensional spectroscopy

    Science.gov (United States)

    DeCamp, Matthew F.; Tokmakoff, Andrei

    2010-10-12

    Preferred embodiments of the invention provide for methods and systems of 2D spectroscopy using ultrafast, first light and second light beams and a CCD array detector. A cylindrically-focused second light beam interrogates a target that is optically interactive with a frequency-dispersed excitation (first light) pulse, whereupon the second light beam is frequency-dispersed at right angle orientation to its line of focus, so that the horizontal dimension encodes the spatial location of the second light pulse and the first light frequency, while the vertical dimension encodes the second light frequency. Differential spectra of the first and second light pulses result in a 2D frequency-frequency surface equivalent to double-resonance spectroscopy. Because the first light frequency is spatially encoded in the sample, an entire surface can be acquired in a single interaction of the first and second light pulses.

  5. Approximation of the Frame Coefficients using Finite Dimensional Methods

    DEFF Research Database (Denmark)

    Christensen, Ole; Casazza, P.

    1997-01-01

    A frame is a family $\{f_i\}_{i=1}^{\infty}$ of elements in a Hilbert space $\mathcal{H}$ with the property that every element in $\mathcal{H}$ can be written as an (infinite) linear combination of the frame elements. Frame theory describes how one can choose the corresponding coefficients, which are called frame coefficients. From the mathematical point of view this is gratifying, but for applications it is a problem that the calculation requires inversion of an operator on $\mathcal{H}$. The projection method is introduced in order to avoid this problem. The basic idea is to consider finite subfamilies $\{f_i\}_{i=1}^{n}$ of the frame and the orthogonal projection $P_n$ onto its span. For $f \in \mathcal{H}$, $P_n f$ has a representation as a linear combination of $f_i$, $i=1,2,\ldots,n$, and the corresponding coefficients can be calculated using finite-dimensional methods. We find conditions implying that those coefficients...
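    In a finite-dimensional setting the frame coefficients and the reconstruction can be computed directly (a numerical sketch with an arbitrary random frame for R^3; the paper's interest is the infinite-dimensional case, where this computation requires inverting an operator on the whole space):

```python
import numpy as np

# The rows of F form a frame {f_i} for R^d (an overcomplete set of
# n >= d spanning vectors). The frame coefficients of f are <f, g_i>,
# where the dual frame {g_i} comes from the frame operator S = F^T F
# via g_i = S^{-1} f_i.
rng = np.random.default_rng(2)
d, n_vec = 3, 7
F = rng.normal(size=(n_vec, d))   # frame vectors as rows

S = F.T @ F                       # frame operator
G = F @ np.linalg.inv(S)          # dual frame vectors as rows

f = rng.normal(size=d)
coeffs = G @ f                    # frame coefficients <f, g_i>
f_rec = F.T @ coeffs              # f = sum_i <f, g_i> f_i
print(np.allclose(f, f_rec))      # True
```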

  6. On High Dimensional Searching Spaces and Learning Methods

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2017-01-01

    In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. In this chapter we describe different data types, membership functions, and similarity functions and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. In contrast, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some... for the upper and lower boundaries for data objects with respect to each cluster. BFPM facilitates algorithms to converge and also inherits the abilities of conventional fuzzy and possibilistic methods. In Big Data applications knowing the exact type of data objects and selecting the most accurate similarity...

  7. Diagnostic performance and safety of a three-dimensional 14-core systematic biopsy method.

    Science.gov (United States)

    Takeshita, Hideki; Kawakami, Satoru; Numao, Noboru; Sakura, Mizuaki; Tatokoro, Manabu; Yamamoto, Shinya; Kijima, Toshiki; Komai, Yoshinobu; Saito, Kazutaka; Koga, Fumitaka; Fujii, Yasuhisa; Fukui, Iwao; Kihara, Kazunori

    2015-03-01

    To investigate the diagnostic performance and safety of a three-dimensional 14-core biopsy (3D14PBx) method, which is a combination of the transrectal six-core and transperineal eight-core biopsy methods. Between December 2005 and August 2010, 1103 men underwent 3D14PBx at our institutions and were analysed prospectively. Biopsy criteria included a PSA level of 2.5-20 ng/mL or abnormal digital rectal examination (DRE) findings, or both. The primary endpoint of the study was diagnostic performance and the secondary endpoint was safety. We applied recursive partitioning to the entire study cohort to delineate the unique contribution of each sampling site to overall and clinically significant cancer detection. Prostate cancer was detected in 503 of the 1103 patients (45.6%). Age, family history of prostate cancer, DRE, PSA, percentage of free PSA and prostate volume were associated with the positive biopsy results significantly and independently. Of the 503 cancers detected, 39 (7.8%) were clinically locally advanced (≥cT3a), 348 (69%) had a biopsy Gleason score (GS) of ≥7, and 463 (92%) met the definition of biopsy-based significant cancer. Recursive partitioning analysis showed that each sampling site contributed uniquely to both the overall and the biopsy-based significant cancer detection rate of the 3D14PBx method. The overall cancer-positive rate of each sampling site ranged from 14.5% in the transrectal far lateral base to 22.8% in the transrectal far lateral apex. As of August 2010, 210 patients (42%) had undergone radical prostatectomy, of whom 55 (26%) were found to have pathologically non-organ-confined disease, 174 (83%) had prostatectomy GS ≥7 and 185 (88%) met the definition of prostatectomy-based significant cancer. This is the first prospective analysis of the diagnostic performance of an extended biopsy method, which is a simplified version of the somewhat redundant super-extended three-dimensional 26-core biopsy. As expected, each sampling

  8. Statistical Analysis to Develop a Three-Dimensional Surface Model of a Midsize-Male Foot

    Science.gov (United States)

    2013-10-31

    Keywords: anthropometry, posture, vehicle occupants, statistical shape analysis, safety. Distribution Statement A: approved for public release. From the introduction: three-dimensional anthropometry has been widely ... analysis methods. A sample of foot scans was drawn from a much larger study of soldier anthropometry.

  9. One-dimensional analysis of filamentary composite beam columns with thin-walled open sections

    Science.gov (United States)

    Lo, Patrick K.-L.; Johnson, Eric R.

    1986-01-01

    Vlasov's one-dimensional structural theory for thin-walled open section bars was originally developed and used for metallic elements. The theory was recently extended to laminated bars fabricated from advanced composite materials. The purpose of this research is to provide a study and assessment of the extended theory. The focus is on flexural and torsional-flexural buckling of thin-walled, open section, laminated composite columns. Buckling loads are computed from the theory using a linear bifurcation analysis and a geometrically nonlinear beam column analysis by the finite element method. Results from the analyses are compared to available test data.

  10. A finite-dimensional reduction method for slightly supercritical elliptic problems

    Directory of Open Access Journals (Sweden)

    Riccardo Molle

    2004-01-01

    Full Text Available We describe a finite-dimensional reduction method to find solutions for a class of slightly supercritical elliptic problems. A suitable truncation argument allows us to work in the usual Sobolev space even in the presence of supercritical nonlinearities: we modify the supercritical term in such a way to have subcritical approximating problems; for these problems, the finite-dimensional reduction can be obtained applying the methods already developed in the subcritical case; finally, we show that, if the truncation is realized at a sufficiently large level, then the solutions of the approximating problems, given by these methods, also solve the supercritical problems when the parameter is small enough.

  11. Three-dimensional analysis of a postbuckled embedded delamination

    Science.gov (United States)

    Whitcomb, John D.

    1989-01-01

    Delamination growth caused by local buckling of a delaminated group of plies was investigated. Delamination growth was assumed to be governed by the strain energy release rates, G(1), G(2) and G(3). The strain energy release rates were calculated using a geometrically nonlinear three-dimensional finite element analysis. The program is described and several checks of the analysis are discussed. Based on a limited parametric study, the following conclusions were reached: (1) the problem is definitely mixed mode (in some cases G(1) is larger than G(2), for other cases the opposite is true); (2) in general, there is a large gradient in the strain energy release rates along the delamination front; (3) the locations of maximum G(1) and G(2) depend on the delamination shape and the applied strain; (4) the mode 3 component was negligible for all cases considered; and (5) the analysis predicted that parts of the delamination would overlap. The results presented did not impose contact constraints to prevent overlapping. Further work is needed to determine the effects of allowing the overlapping.

  12. Three Dimensional CFD Analysis of the GTX Combustor

    Science.gov (United States)

    Steffen, C. J., Jr.; Bond, R. B.; Edwards, J. R.

    2002-01-01

    The annular combustor geometry of a combined-cycle engine has been analyzed with three-dimensional computational fluid dynamics. Both subsonic combustion and supersonic combustion flowfields have been simulated. The subsonic combustion analysis was executed in conjunction with a direct-connect test rig. Two cold-flow results and one hot-flow result are presented. The simulations compare favorably with the test data for the two cold-flow calculations; the hot-flow data were not yet available. The hot-flow simulation indicates that the conventional ejector-ramjet cycle would not provide adequate mixing at the conditions tested. The supersonic combustion ramjet flowfield was simulated with a frozen chemistry model. A five-parameter test matrix was specified, according to statistical design-of-experiments theory. Twenty-seven separate simulations were used to assemble surrogate models for combustor mixing efficiency and total pressure recovery. Scramjet injector design parameters (injector angle, location, and fuel split) as well as mission variables (total fuel mass flow and freestream Mach number) were included in the analysis. A promising injector design has been identified that provides good mixing characteristics with low total pressure losses. The surrogate models can be used to develop performance maps of different injector designs. Several complex three-way variable interactions appear within the dataset that are not adequately resolved with the current statistical analysis.

  13. Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods.

    Science.gov (United States)

    Harandi, Mehrtash; Salzmann, Mathieu; Hartley, Richard

    2018-01-01

    Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices, especially high-dimensional ones, comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods.
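    The projection at the heart of this approach can be sketched as follows (illustrative only: W is a random orthonormal matrix here, whereas the paper learns W on a Grassmann manifold to maximize discrimination or variance): an SPD matrix X is mapped to the lower-dimensional SPD matrix W^T X W.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 3                       # original and reduced dimensions

# Build an n x n SPD matrix X
A = rng.normal(size=(n, n))
X = A @ A.T + n * np.eye(n)

# W with orthonormal columns (W^T W = I), here chosen at random
W, _ = np.linalg.qr(rng.normal(size=(n, m)))

# Reduced representation: for full-column-rank W, W^T X W is again SPD
Y = W.T @ X @ W
print(Y.shape)                                            # (3, 3)
print(np.allclose(Y, Y.T), np.all(np.linalg.eigvalsh(Y) > 0))  # True True
```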

  14. A method for automatic feature points extraction of human vertebrae three-dimensional model

    Science.gov (United States)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  15. Two Dimensional Steady State Eddy Current Analysis of a Spinning Conducting Cylinder

    Science.gov (United States)

    2017-03-09

    Technical Report ARMET-TR-16045 (AD-E403 855): Two-Dimensional Steady-State Eddy Current Analysis of a Spinning Conducting Cylinder, August 2014. ... analytical closed-form analysis performed by Michael P. Perry and Thomas B. Jones (ref. 4). Two-dimensional (2D) finite element steady-state analyses ...

  16. One-dimensional Galerkin methods and superconvergence at interior nodal points

    NARCIS (Netherlands)

    M. Bakker (Miente)

    1984-01-01

    In the case of one-dimensional Galerkin methods the phenomenon of superconvergence at the knots has been known for years [5], [7]. In this paper, a minor kind of superconvergence at specific points inside the segments of the partition is discussed for two classes of Galerkin methods: the...

  17. Exploratory evaluation of two-dimensional and three-dimensional methods of FDG PET quantification in pediatric anaplastic astrocytoma: a report from the Pediatric Brain Tumor Consortium (PBTC)

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Gethin; Treves, S. Ted [Harvard Medical School, Joint Program in Nuclear Medicine, Children's Hospital Boston, Boston, MA (United States); Fahey, Frederic H. [Harvard Medical School, Joint Program in Nuclear Medicine, Children's Hospital Boston, Boston, MA (United States); Harvard Medical School, Division of Nuclear Medicine, Department of Radiology, Children's Hospital Boston, Boston, MA (United States); Kocak, Mehmet; Boyett, James M.; Kun, Larry E. [St Jude Children's Research Hospital, Memphis, TN (United States); Pollack, Ian F. [Children's Hospital Pittsburgh, Pittsburgh, PA (United States); Young Poussaint, Tina [Harvard Medical School, Division of Neuroradiology, Department of Radiology, Children's Hospital Boston, Boston, MA (United States)

    2008-09-15

    The rationale of this study was to investigate the feasibility of three-dimensional (3D) methods to analyze 18F-fluorodeoxyglucose (FDG) uptake in children with anaplastic astrocytoma (AA) in a multi-institutional trial, to compare 3D and two-dimensional (2D) methods, and to explore data associations with progression-free survival (PFS). 3D tumor volumes from pretreatment MR images (fluid-attenuated inversion recovery and post-gadolinium) of children with recurrent AA on a phase I trial of imatinib mesylate were coregistered to FDG positron emission tomography (PET) images. PET data were normalized. Four metrics were defined: the maximum ratio (maximum pixel value within the 3D tumor volume, normalized), the total ratio (cumulative pixel values within the tumor volume, normalized) and the tumor mean ratio (total pixel value divided by volume, normalized). 2D analysis methods were compared. Cox proportional hazards models were used to estimate the association between these methods and PFS. The strongest correlations between 2D and 3D methods were for analyses using postcontrast T1 images for the volume of interest (VOI). The analyses suggest that 3D maximum tumor and mean tumor ratios, whether normalized by gray matter or white matter, were associated with PFS. This study of a series of pretreatment AA patients suggests that 3D PET methods using VOIs based on postcontrast T1 correlate with 2D methods and are related to PFS. These methods yield an estimate of metabolically active tumor burden and may add prognostic information after tumor grade is determined. Future rigorous multi-institutional protocols with larger numbers of patients will be required for validation. (orig.)

  18. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    Science.gov (United States)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative, for the first time, to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
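To make the series-solution idea concrete, here is a minimal sketch (not the authors' homotopy decomposition scheme) of the standard benchmark for this equation: for D_t^alpha u = u_xx with u(x,0) = sin(x), the known exact solution is u(x,t) = sin(x) E_alpha(-t^alpha), where E_alpha is the one-parameter Mittag-Leffler function, itself a convergent series with easily computable terms:

```python
import math

def mittag_leffler(alpha, z, n_terms=50):
    """Truncated one-parameter Mittag-Leffler series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

def u(x, t, alpha, n_terms=50):
    """Series solution u(x,t) = sin(x) * E_alpha(-t**alpha) of the time-fractional
    diffusion equation D_t^alpha u = u_xx with initial condition u(x,0) = sin(x)."""
    return math.sin(x) * mittag_leffler(alpha, -t ** alpha, n_terms)

# For alpha = 1 the series collapses to the classical decay sin(x) * exp(-t),
# which gives a quick sanity check on the truncation.
print(abs(u(1.0, 0.5, alpha=1.0) - math.sin(1.0) * math.exp(-0.5)))
```

For fractional orders 0 < alpha < 1 the same truncated series converges, only more slowly in t.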

  19. A simplified two-dimensional boundary element method with arbitrary uniform mean flow

    Directory of Open Access Journals (Sweden)

    Bassem Barhoumi

    2017-07-01

To reduce computational costs, an improved form of the frequency domain boundary element method (BEM) is proposed for two-dimensional radiation and propagation acoustic problems in a subsonic uniform flow with arbitrary orientation. The boundary integral equation (BIE) representation solves the two-dimensional convected Helmholtz equation (CHE) and its fundamental solution, which must satisfy a new Sommerfeld radiation condition (SRC) in the physical space. In order to facilitate conventional formulations, the variables of the advanced form are expressed only in terms of the acoustic pressure as well as its normal and tangential derivatives, and their multiplication operators are based on the convected Green’s kernel and its modified derivative. The proposed approach significantly reduces the CPU times of classical computational codes for modeling acoustic domains with arbitrary mean flow. It is validated by a comparison with the analytical solutions for the sound radiation problems of monopole, dipole and quadrupole sources in the presence of a subsonic uniform flow with arbitrary orientation. Keywords: Two-dimensional convected Helmholtz equation, Two-dimensional convected Green’s function, Two-dimensional convected boundary element method, Arbitrary uniform mean flow, Two-dimensional acoustic sources

  20. Two-dimensional Imaging Velocity Interferometry: Technique and Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Erskine, D J; Smith, R F; Bolme, C; Celliers, P; Collins, G

    2011-03-23

We describe the data analysis procedures for an emerging interferometric technique for measuring motion across a two-dimensional image at a moment in time, i.e. a snapshot 2d-VISAR. Velocity interferometers (VISARs) measuring target motion to high precision have been an important diagnostic in shockwave physics for many years. Until recently, this diagnostic has been limited to measuring motion at points or lines across a target. If a sufficiently fast movie camera technology existed, it could be placed behind a traditional VISAR optical system and record a 2d image vs. time. But since that technology is not yet available, we use a CCD detector to record a single 2d image, with the pulsed nature of the illumination providing the time resolution. Consequently, since we are using pulsed illumination having a coherence length shorter than the VISAR interferometer delay (~0.1 ns), we must use the white-light velocimetry configuration to produce fringes with significant visibility. In this scheme, two interferometers (illuminating and detecting) having nearly identical delays are used in series, with one before the target and one after. This produces fringes with at most 50% visibility, but otherwise has the same fringe shift per target motion as a traditional VISAR. The 2d-VISAR observes a new world of information about shock behavior not readily accessible by traditional point or 1d-VISARs, simultaneously providing both a velocity map and an 'ordinary' snapshot photograph of the target. The 2d-VISAR has been used to observe nonuniformities in NIF-related targets (polycrystalline diamond, Be), and in Si and Al.

  1. Hydrogeophysical exploration of three-dimensional salinity anomalies with the time-domain electromagnetic method (TDEM)

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Gondwe, Bibi Ruth Neuman; Christiansen, Lars

    2010-01-01

The time-domain electromagnetic method (TDEM) is widely used in groundwater exploration and geological mapping applications. TDEM measures subsurface electrical conductivity, which is strongly correlated with groundwater salinity. TDEM offers a cheap and non-invasive option for mapping saltwater intrusion and groundwater salinization. Traditionally, TDEM data is interpreted using one-dimensional layered-earth models of the subsurface. However, most saltwater intrusion and groundwater salinization phenomena are characterized by three-dimensional anomalies. To fully exploit the information content of TDEM data, three-dimensional modeling of the TDEM response is required. As an application example, a field dataset from an island in the Okavango Delta is presented. Evaporative salt enrichment causes a strong salinity anomaly under the island. We show that the TDEM field data cannot be interpreted in terms of standard one-dimensional layered-earth TDEM models, because of the strongly three-dimensional nature of the salinity anomaly. Three-dimensional interpretation of the field data allows for detailed and consistent mapping of this anomaly.

  2. Accelerated algorithm for three-dimensional computer generated hologram based on the ray-tracing method

    Science.gov (United States)

    Xie, Z. W.; Zang, J. L.; Zhang, Y.

    2013-06-01

An accelerated algorithm for three-dimensional computer generated holograms (CGHs) based on the ray-tracing method is proposed. The complex amplitude distribution from the center point of an object is calculated in advance, and the field distributions of the remaining points on the hologram plane are obtained by applying a small translation and an aberration correction to the pre-calculated field. A static two-dimensional car, a three-dimensional teapot, and a dynamic three-dimensional rotating teapot are reconstructed from CGHs calculated with the accelerated algorithm to prove its validity. The simulation results demonstrate that the accelerated algorithm is eight times faster than the conventional ray-tracing algorithm.
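For contrast, a minimal sketch of the conventional (unaccelerated) ray-tracing approach that the paper speeds up: each object point contributes a spherical wave exp(ikr)/r to the hologram plane, and the field is summed point by point. All scene values here are hypothetical:

```python
import numpy as np

def point_source_cgh(points, amplitudes, grid_x, grid_y, z, wavelength):
    """Conventional ray-tracing CGH: superpose spherical waves from each object
    point onto the hologram plane at distance z (scalar diffraction, no occlusion)."""
    k = 2 * np.pi / wavelength
    X, Y = np.meshgrid(grid_x, grid_y)
    field = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + (z - pz) ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from one object point
    return field

# Hypothetical toy scene: two object points, tiny 64x64 hologram, HeNe wavelength.
wl = 633e-9
xs = np.linspace(-1e-3, 1e-3, 64)
H = point_source_cgh([(0.0, 0.0, 0.0), (2e-4, 0.0, 0.0)], [1.0, 0.5],
                     xs, xs, z=0.1, wavelength=wl)
print(H.shape)
```

The acceleration described in the record replaces the per-point recomputation of `r` by reusing one pre-computed field under translation, which is where the reported eightfold speed-up comes from.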

  3. Coarse Analysis of Microscopic Models using Equation-Free Methods

    DEFF Research Database (Denmark)

    Marschler, Christian

… high-dimensional models. The goal of this thesis is to investigate such high-dimensional multiscale models and extract relevant low-dimensional information from them. Recently developed mathematical tools allow this goal to be reached: a combination of so-called equation-free methods with numerical bifurcation analysis, using short simulation bursts of computationally-expensive complex models. This information is subsequently used to construct bifurcation diagrams that show the parameter dependence of solutions of the system. The methods developed for this thesis have been applied to a wide range of relevant problems. Applications include the learning behavior in the barn owl’s auditory system, traffic jam formation in an optimal velocity model for circular car traffic, and oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods do not only quantify interesting properties …

  4. Categorical Analysis of Human T Cell Heterogeneity with One-Dimensional Soli-Expression by Nonlinear Stochastic Embedding.

    Science.gov (United States)

    Cheng, Yang; Wong, Michael T; van der Maaten, Laurens; Newell, Evan W

    2016-01-15

Rapid progress in single-cell analysis methods allows for exploration of cellular diversity at unprecedented depth and throughput. Visualizing and understanding these large, high-dimensional datasets poses a major analytical challenge. Mass cytometry allows for simultaneous measurement of >40 different proteins, permitting in-depth analysis of multiple aspects of cellular diversity. In this article, we present one-dimensional soli-expression by nonlinear stochastic embedding (One-SENSE), a dimensionality reduction method based on the t-distributed stochastic neighbor embedding (t-SNE) algorithm, for categorical analysis of mass cytometry data. With One-SENSE, measured parameters are grouped into predefined categories, and cells are projected onto a space composed of one dimension for each category. In contrast with higher-dimensional t-SNE, each dimension (plot axis) in One-SENSE has biological meaning that can be easily annotated with binned heat plots. We applied One-SENSE to probe relationships between categories of human T cell phenotypes and observed previously unappreciated cellular populations within an orchestrated view of immune cell diversity. The presentation of high-dimensional cytometric data using One-SENSE showed a significant improvement in distinguishing T cell diversity compared with the original t-SNE algorithm and could be useful for any high-dimensional dataset. Copyright © 2016 by The American Association of Immunologists, Inc.
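The categorical projection at the heart of One-SENSE can be sketched with off-the-shelf tools: run an independent one-dimensional t-SNE per predefined marker category and pair the resulting axes. A toy sketch with scikit-learn (the random data and category names are placeholders, not the paper's marker panel):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical mass-cytometry-like matrix: 60 cells x 6 markers,
# split into two predefined categories (e.g. "differentiation" vs "function").
cells = rng.normal(size=(60, 6))
categories = {"differentiation": [0, 1, 2], "function": [3, 4, 5]}

# One-SENSE idea: a separate 1-D t-SNE per marker category, so that each plot
# axis keeps a biological meaning tied to one category of markers.
axes = {}
for name, cols in categories.items():
    emb = TSNE(n_components=1, perplexity=15, random_state=0).fit_transform(cells[:, cols])
    axes[name] = emb.ravel()

print({k: v.shape for k, v in axes.items()})
```

Plotting `axes["differentiation"]` against `axes["function"]` then gives the One-SENSE-style scatter in which each axis can be annotated with binned heat plots of its own category's markers.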

  5. Hydrogeophysical exploration of three-dimensional salinity anomalies with the time-domain electromagnetic method (TDEM)

    Science.gov (United States)

    Bauer-Gottwein, Peter; Gondwe, Bibi N.; Christiansen, Lars; Herckenrath, Daan; Kgotlhang, Lesego; Zimmermann, Stephanie

    2010-01-01

The time-domain electromagnetic method (TDEM) is widely used in groundwater exploration and geological mapping applications. TDEM measures subsurface electrical conductivity, which is strongly correlated with groundwater salinity. TDEM offers a cheap and non-invasive option for mapping saltwater intrusion and groundwater salinization. Traditionally, TDEM data is interpreted using one-dimensional layered-earth models of the subsurface. However, most saltwater intrusion and groundwater salinization phenomena are characterized by three-dimensional anomalies. To fully exploit the information content of TDEM data in this context, three-dimensional modeling of the TDEM response is required. We present a finite-element solution for three-dimensional forward modeling of TDEM responses from arbitrary subsurface electrical conductivity distributions. The solution is benchmarked against standard layered-earth models and previously published three-dimensional forward TDEM modeling results. Concentration outputs from a groundwater flow and salinity transport model are converted to subsurface electrical conductivity using standard petrophysical relationships. TDEM responses over the resulting subsurface electrical conductivity distribution are generated using the three-dimensional TDEM forward model. The parameters of the hydrodynamic model are constrained by matching observed and simulated TDEM responses. As an application example, a field dataset of ground-based TDEM data from an island in the Okavango Delta is presented. Evaporative salt enrichment causes a strong salinity anomaly under the island. We show that the TDEM field data cannot be interpreted in terms of standard one-dimensional layered-earth TDEM models, because of the strongly three-dimensional nature of the salinity anomaly. Three-dimensional interpretation of the field data allows for detailed and consistent mapping of this anomaly and makes better use of the information contained in the TDEM field data.
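The petrophysical conversion step can be illustrated with Archie's law, one standard choice of such a relationship (the specific law, cementation exponent and TDS-to-conductivity factor below are illustrative assumptions, not the values used in the study):

```python
def bulk_conductivity(sigma_w, porosity, m=2.0):
    """Archie's law (no surface conduction): sigma_bulk = sigma_w * phi**m,
    a common petrophysical link between pore-water salinity and the bulk
    electrical conductivity that TDEM senses. m is the cementation exponent."""
    return sigma_w * porosity ** m

def water_conductivity_from_tds(tds_mg_per_l):
    """Rough rule of thumb: EC (S/m) ~ TDS (mg/L) / 6400.
    (Assumed generic conversion; site-specific calibration is required.)"""
    return tds_mg_per_l / 6400.0

# Hypothetical contrast: saline pore water under the island vs fresher water outside.
saline = bulk_conductivity(water_conductivity_from_tds(20000), porosity=0.3)
fresh = bulk_conductivity(water_conductivity_from_tds(500), porosity=0.3)
print(round(saline / fresh))  # conductivity contrast tracks the salinity contrast
```

With a fixed porosity, the bulk-conductivity contrast equals the salinity contrast (here a factor of 40), which is why TDEM soundings map salinity anomalies so directly.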

  6. A New Method for Reducing Dimensional Variability of Extruded Hollow Sections

    Science.gov (United States)

    Baringbing, Henry Ako; Welo, Torgeir; Søvik, Odd Perry

    2007-05-01

    Crash boxes are one recent application example of aluminum extrusions in the automotive industry. A crash box is typically made by welding an extruded tube (tower) to a foot plate at one end, providing the mounting features towards the rail tip of the vehicle. When using fully automated welding processes, the exterior dimensions of the tower have to be within a tolerance of typically +/- 0.25 mm in order to provide consistent weld properties. However, the extrusion process commonly introduces dimensional variations exceeding those required for good weld quality. In order to avoid costly hydro-forming processes, a new mechanical calibration process has been developed. This method represents a means to achieve sufficient dimensional accuracy of the crash box tower prior to welding. A prototype die was made to validate the calibration process using alloy AA6063 T4 extrusions. Tensile tests were performed in order to determine material parameters. The geometry of each tower was carefully measured before and after forming to determine the dimensional capability of the calibration process. Statistical methods were combined with FEA simulations and analytical methods to establish surrogate models and response surfaces. The results show that the calibration process is an effective method for improving the dimensional accuracy of crash box profiles, providing significant improvements in dimensional capability. It is concluded that the methodology has a high industrial potential.

  7. Multi dimensional analysis of Design Basis Events using MARS-LMR

    International Nuclear Information System (INIS)

    Woo, Seung Min; Chang, Soon Heung

    2012-01-01

Highlights: ► The sodium hot pool, previously analyzed one-dimensionally, is modified to a three-dimensional node system, because a one-dimensional analysis cannot represent the phenomena inside a large pool with many internal components. ► The results of the multi-dimensional analysis are compared with the one-dimensional results for normal operation, TOP (Transient Over Power), LOF (Loss of Flow), and LOHS (Loss of Heat Sink) conditions. ► Differences in the sodium flow pattern due to structure effects in the hot pool, and in the core mass flow rates, lead to different sodium temperatures and temperature histories under transient conditions. - Abstract: KALIMER-600 (Korea Advanced Liquid Metal Reactor), a pool-type SFR (Sodium-cooled Fast Reactor), was developed by KAERI (Korea Atomic Energy Research Institute). The Design Basis Events (DBE) for KALIMER-600 had previously been analyzed in one dimension. In this study, the sodium hot pool is modified to a three-dimensional node system, because a one-dimensional analysis cannot represent the phenomena inside a large pool containing many components, such as the UIS (Upper Internal Structure), IHX (Intermediate Heat eXchanger), DHX (Decay Heat eXchanger), and pump. The results of the multi-dimensional analysis are compared with the one-dimensional results for normal operation, TOP, LOF, and LOHS conditions. First, the results for the normal operation condition show good agreement between the one- and multi-dimensional analyses. However, according to the sodium temperatures at the core inlet and outlet, the fuel centerline, the cladding and the PDRC (Passive Decay heat Removal Circuit), the temperatures from the one-dimensional analysis are generally higher than those from the multi-dimensional analysis in all conditions except normal operation, and the PDRC operation time in the one-dimensional analysis is generally longer than …

  8. X-ray computed tomography of packed bed chromatography columns for three dimensional imaging and analysis.

    Science.gov (United States)

    Johnson, T F; Levison, P R; Shearing, P R; Bracewell, D G

    2017-03-03

Physical characteristics critical to chromatography, including geometric porosity and tortuosity within the packed column, were analysed based upon three-dimensional reconstructions of the bed structure in situ. Image acquisition was performed using two X-ray computed tomography systems, with column imaging optimised for each sample in order to produce three-dimensional representations of packed beds at 3 μm resolution. Two bead materials, cellulose and ceramic, were studied using the same optimisation strategy, but required different parameters for X-ray computed tomography image generation. After image reconstruction and processing into a digital three-dimensional format, the physical characteristics of each packed bed were analysed, including geometric porosity, tortuosity and surface-area-to-volume ratio, as well as inter-bead void diameters. Average porosities of 34.0% and 36.1% were found for the ceramic and cellulose samples, with average tortuosity readings of 1.40 and 1.79 respectively; in each case porosity was greater and tortuosity lower at the centre of the column than at its edges. X-ray computed tomography is demonstrated to be a viable method for three-dimensional imaging of packed bed chromatography systems, enabling geometry-based analysis of column axial and radial heterogeneity that is not feasible using traditional techniques for packing quality, which provide only an ensemble measure. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
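On a segmented (binarized) reconstruction, the geometric porosity measurement reduces to voxel counting, and radial heterogeneity can be probed by comparing sub-volumes. A toy sketch on a synthetic random volume (the bead fraction is invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical segmented CT volume: True = bead material, False = inter-bead void.
beads = rng.random((50, 50, 50)) < 0.65

# Geometric porosity: fraction of voxels that are void.
porosity = 1.0 - beads.mean()

# Radial heterogeneity: compare a central core sub-volume with the full column.
centre = beads[:, 15:35, 15:35]
porosity_centre = 1.0 - centre.mean()
print(round(porosity, 3))
```

Real analyses additionally need the segmentation step itself (thresholding the grayscale reconstruction) and connectivity-aware measures such as tortuosity, which voxel counting alone does not give.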

  9. Three-dimensional analysis and computer modeling of the capillary endothelial vesicular system with electron tomography.

    Science.gov (United States)

    Wagner, Roger; Modla, Shannon; Hossler, Fred; Czymmek, Kirk

    2012-08-01

    We examined the three-dimensional organization of the endothelial vesicular system with TEM tomography of semi-thick sections. Mouse abdominal muscle capillaries were perfused with terbium to label vesicular compartments open to the luminal surface. The tissue was prepared for TEM and semi-thick (250 nm) sections were cut. Dual axis tilt series, collected from +60° to -60° at 1° increments, were acquired in regions of labeled abluminal caveolae. These tomograms were reconstructed and analyzed to reveal three-dimensional vesicular associations not evident in thin sections. Reconstructed tomograms revealed free vesicles, both labeled and unlabeled, in the endothelial cytoplasm as well as transendothelial channels that spanned the luminal and abluminal membranes. A large membranous compartment connecting the luminal and abluminal surfaces was also present. Computer modeling of tomographic data and video animations provided three-dimensional perspectives to these structures. Uncertainties associated with other three-dimensional methods to study the capillary wall are remedied by tomographic analysis of semi-thick sections. Transendothelial channels of fused vesicles and free cytoplasmic vesicles give credence to their role as large pores in the transport of solutes across the walls of continuous capillaries. © 2012 John Wiley & Sons Ltd.

  10. A new method for three-dimensional laparoscopic ultrasound model reconstruction

    DEFF Research Database (Denmark)

    Fristrup, C W; Pless, T; Durup, J

    2004-01-01

… was to perform a volumetric test and a clinical feasibility test of a new 3D method using standard laparoscopic ultrasound equipment. METHODS: Three-dimensional models were reconstructed from a series of two-dimensional ultrasound images using either electromagnetic tracking or a new 3D method. The volumetric accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both the electromagnetically tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed accurate models comparable to findings at surgery and pathology. CONCLUSIONS: The new 3D method is technically feasible and volumetrically accurate compared to 3D reconstruction with electromagnetic tracking.

  11. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As an audit technique that is widely used, in both its statistical and non-statistical forms, the method is very important for auditors. It should be applied correctly to give a fair view of financial statements and to satisfy the needs of all financial users. In order to be applied correctly, the method must be understood by all its users, and mainly by auditors; otherwise, applying it incorrectly risks loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows advantages, disadvantages, threats and opportunities. We applied SWOT analysis to the study of the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying and understanding it. Being widely used as an audit method, and being a factor in a correct audit opinion, the sampling method’s advantages, disadvantages, threats and opportunities must be understood by auditors.

  12. Stability analysis of two-dimensional digital recursive filters

    Science.gov (United States)

    Alexander, W. E.; Pruess, S. A.

    1980-01-01

A new approach to the stability problem for the two-dimensional digital recursive filter is presented. The bivariate difference equation representation of the two-dimensional recursive digital filter is converted to a multi-input, multi-output (MIMO) system similar to the state-space representation of the one-dimensional digital recursive filter. In this paper, a pseudo-state representation is used and three coefficient matrices are obtained. A general theorem for the stability of two-dimensional digital recursive filters is derived, and a very useful theorem is presented which expresses sufficient requirements for instability in terms of the spectral radii of these matrices.

  13. Threshold-free method for three-dimensional segmentation of organelles

    Science.gov (United States)

    Chan, Yee-Hung M.; Marshall, Wallace F.

    2012-03-01

An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points where the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 μm diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated means of measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length scales.
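The ray-casting step can be sketched as follows: sample the intensity along rays from a manually chosen center and take each profile's maximum as the membrane crossing. The spherical-shell test object below is synthetic, standing in for a membrane-labeled vacuole:

```python
import numpy as np

def surface_radii(stack, center, directions, n_samples=100, r_max=20.0):
    """For each unit direction, sample the image intensity along a ray from
    `center` and return the radius of the intensity maximum, taken as the
    point where the ray crosses the labelled limiting membrane."""
    radii = []
    ts = np.linspace(0.5, r_max, n_samples)
    for d in directions:
        pts = center + ts[:, None] * d                       # (n_samples, 3) ray coordinates
        idx = np.clip(np.round(pts).astype(int), 0, np.array(stack.shape) - 1)
        profile = stack[idx[:, 0], idx[:, 1], idx[:, 2]]     # intensity profile along the ray
        radii.append(ts[np.argmax(profile)])                 # membrane = intensity maximum
    return np.array(radii)

# Synthetic test object: a bright spherical shell of radius 10 voxels
# (a membrane-like intensity peak at r = 10).
z, y, x = np.mgrid[0:41, 0:41, 0:41]
r = np.sqrt((z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2)
shell = np.exp(-((r - 10.0) ** 2) / 2.0)

dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0]], dtype=float)
radii = surface_radii(shell, np.array([20.0, 20.0, 20.0]), dirs)
print(radii.round(1))
```

In the paper's method the recovered crossing points are then fit with a weighted sum of basis functions to produce a smooth closed surface; the sketch stops at the raw radii.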

  14. Statistical methods for bioimpedance analysis

    Directory of Open Access Journals (Sweden)

    Christian Tronstad

    2014-04-01

This paper gives a basic overview of relevant statistical methods for the analysis of bioimpedance measurements, with an aim to answer questions such as: How do I begin with planning an experiment? How many measurements do I need to take? How do I deal with large amounts of frequency sweep data? Which statistical test should I use, and how do I validate my results? Beginning with the hypothesis and the research design, the methodological framework for making inferences based on measurements and statistical analysis is explained. This is followed by a brief discussion on correlated measurements and data reduction before an overview is given of statistical methods for comparison of groups, factor analysis, association, regression and prediction, explained in the context of bioimpedance research. The last chapter is dedicated to the validation of a new method by different measures of performance. A flowchart is presented for selection of statistical method, and a table is given for an overview of the most important terms of performance when evaluating new measurement technology.

  15. Assessment of Thorium Analysis Methods

    International Nuclear Information System (INIS)

    Putra, Sugili

    1994-01-01

An assessment of thorium analytical methods for mixture power fuel, consisting of titrimetry, X-ray fluorescence spectrometry, UV-VIS spectrometry, alpha spectrometry, emission spectrography, polarography, chromatography (HPLC) and neutron activation analysis, was carried out. It can be concluded that the analytical methods with high accuracy (standard deviation < 3%) were titrimetry, neutron activation analysis and UV-VIS spectrometry, whereas the methods with low accuracy (standard deviation 3-10%) were alpha spectrometry and emission spectrography. Ore samples can be analyzed by X-ray fluorescence spectrometry, neutron activation analysis, UV-VIS spectrometry, emission spectrography, chromatography and alpha spectrometry. Concentrated samples can be analyzed by X-ray fluorescence spectrometry; simulation samples can be analyzed by titrimetry, polarography and UV-VIS spectrometry; and samples with thorium as a minor constituent can be analyzed by neutron activation analysis and alpha spectrometry. Thorium purity (impurity elements in thorium samples) can be analyzed by emission spectrography. Considering interference aspects, analytical methods without molecular reactions are in general better than those involving molecular reactions (author). 19 refs., 1 tabs

  16. A discontinuous Galerkin method for two-dimensional PDE models of Asian options

    Science.gov (United States)

    Hozman, J.; Tichý, T.; Cvejnová, D.

    2016-06-01

In our previous research we focused on the problem of plain vanilla option valuation using the discontinuous Galerkin method for the numerical solution of the pricing PDE. Here we extend a simple one-dimensional problem into a two-dimensional one and design a scheme for the valuation of Asian options, i.e. options with a payoff depending on the average of prices collected over a prespecified horizon. The algorithm is based on an approach combining the advantages of the finite element method with piecewise polynomial, generally discontinuous approximations. Finally, an illustrative example using DAX option market data is provided.
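For a quick cross-check of what "payoff depending on the average" means, here is a plain Monte Carlo pricer for an arithmetic-average Asian call, a different numerical method from the paper's discontinuous Galerkin scheme, with made-up market parameters:

```python
import math
import random

def asian_call_mc(s0, strike, r, sigma, T, n_steps=50, n_paths=20000, seed=42):
    """Monte Carlo price of an arithmetic-average Asian call under geometric
    Brownian motion. The payoff max(average(S) - K, 0) uses the running average
    of prices sampled along each path, the defining feature of an Asian option."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, total = s0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            total += s
        payoff_sum += max(total / n_steps - strike, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

price = asian_call_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))
```

Because the payoff averages the path, the Asian price is noticeably below the corresponding vanilla call price; Monte Carlo estimates like this one are a common sanity check for PDE-based schemes.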

  17. Method of photon spectral analysis

    Science.gov (United States)

    Gehrke, Robert J.; Putnam, Marie H.; Killian, E. Wayne; Helmer, Richard G.; Kynaston, Ronnie L.; Goodwin, Scott G.; Johnson, Larry O.

    1993-01-01

A spectroscopic method to rapidly measure the presence of plutonium in soils, filters, smears, and glass waste forms by measuring the uranium L-shell x-ray emissions associated with the decay of plutonium. In addition, the technique can simultaneously acquire spectra of samples and automatically analyze them for the amount of americium and γ-ray-emitting activation and fission products present. The samples are counted with a large-area, thin-window, n-type germanium spectrometer which is equally efficient for the detection of low-energy x-rays (10-2000 keV) as for high-energy γ rays (>1 MeV). An 8192- or 16,384-channel analyzer is used to acquire the entire photon spectrum at one time. A dual-energy, time-tagged pulser is injected into the test input of the preamplifier to monitor the energy scale and detector resolution. The L x-ray portion of each spectrum is analyzed by a linear least-squares spectral fitting technique. The γ-ray portion of each spectrum is analyzed by a standard Ge γ-ray analysis program. This method can be applied to any analysis involving x- and γ-ray analysis in one spectrum and is especially useful when interferences in the x-ray region can be identified from the γ-ray analysis and accommodated during the x-ray analysis.

  18. Robust Adaptive Lasso method for parameter’s estimation and variable selection in high-dimensional sparse models

    Science.gov (United States)

    Wahid, Abdul; Hussain, Ijaz

    2017-01-01

High-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue, different penalized regression procedures have been introduced in the literature, but these methods cannot cope with the problem of outliers and leverage points in heavy-tailed high-dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed, based on a Pearson-residual weighting scheme. The weight function determines the compatibility of each observation with the assumed model and downweights observations that are inconsistent with it. It is observed that the RAL estimator can correctly select the covariates with non-zero coefficients and can simultaneously estimate parameters, not only in the presence of influential observations but also in the presence of high multicollinearity. We also discuss the model selection oracle property and the asymptotic normality of the RAL. Simulation findings and real data examples also demonstrate the better performance of the proposed penalized regression approach. PMID:28846717
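The overall recipe, downweight observations with large standardized residuals, then apply coefficient-specific adaptive penalties, can be sketched as follows. This is a plain illustration with a Huber-type weight and a ridge initial fit, not the paper's exact Pearson-residual scheme:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def robust_adaptive_lasso(X, y, alpha=0.1, c=2.0):
    """Sketch of a robust adaptive lasso: observations are downweighted by a
    Huber-type rule on standardized residuals, and each coefficient gets an
    adaptive penalty proportional to 1/|beta_init|."""
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    resid = y - X @ beta_init
    scale = np.median(np.abs(resid)) / 0.6745 + 1e-12          # robust residual scale
    w = np.minimum(1.0, c / (np.abs(resid) / scale + 1e-12))   # downweight outliers
    adapt = 1.0 / (np.abs(beta_init) + 1e-3)                   # adaptive penalty weights
    Xw = (np.sqrt(w)[:, None] * X) / adapt                     # row and column scaling
    yw = np.sqrt(w) * y
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(Xw, yw)
    return fit.coef_ / adapt                                   # undo the column scaling

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ beta_true + 0.1 * rng.normal(size=200)
y[:5] += 20.0                                                  # a few gross outliers
beta = robust_adaptive_lasso(X, y)
print(np.nonzero(np.abs(beta) > 0.5)[0])
```

Despite the contaminated responses, the sketch recovers the sparse support because the outlying rows enter the lasso fit with near-zero weight.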

  19. Analysis of competitive equilibrium in an infinite dimensional ...

    African Journals Online (AJOL)

This paper considered the cost of allocated goods and the attainment of maximal utility with such prices in a finite-dimensional commodity space, and observed that there exists an equilibrium price. It goes further to establish that in an infinite-dimensional commodity space, with subsets as the consumption and production sets, there exists a …

  20. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    Science.gov (United States)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimension with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive than classical LDA in terms of both classification accuracy and computational cost/time. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on seven-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, by using a linear discriminant classifier.
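Classical LDA projection as used here is a one-liner with scikit-learn; it maps features onto at most (number of classes − 1) discriminant axes. The synthetic "EMG feature" matrix below is a placeholder for real windowed features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical EMG feature matrix: 300 windows x 28 features
# (e.g. 4 time-domain features per channel x 7 channels), 6 motion classes.
n_classes = 6
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(50, 28)) for i in range(n_classes)])
y = np.repeat(np.arange(n_classes), 50)

# Classical LDA projects onto at most (n_classes - 1) discriminant axes,
# here reducing 28 features to 5 while preserving class separability.
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
X_low = lda.fit_transform(X, y)
print(X_low.shape)
```

The extended variants compared in the paper (uncorrelated LDA, orthogonal LDA, orthogonal fuzzy neighborhood discriminant analysis) change how the projection axes are constructed, but the reduced-dimension output is used the same way by the downstream classifier.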

  1. A study on three dimensional layout design by the simulated annealing method

    International Nuclear Information System (INIS)

    Jang, Seung Ho

    2008-01-01

    Modern engineered products are becoming increasingly complicated, and most consumers prefer compact designs. Layout design plays an important role in many engineered products. The objective of this study is to suggest a way to apply the simulated annealing method to the layout design problem for arbitrarily shaped three-dimensional components. The suggested method not only optimizes the packing density but also satisfies constraint conditions among the components. The algorithm and its implementation as suggested in this paper are extendable to other research objectives.
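
As a rough illustration of how simulated annealing drives a layout toward a compact, constraint-satisfying configuration, here is a minimal one-dimensional sketch; the objective (span plus a clearance penalty) and all constants are invented for illustration and are not the paper's formulation:

```python
import math
import random

random.seed(1)

def energy(x):
    # Toy layout objective: packing compactness (span of the components)
    # plus a penalty whenever two components are closer than a unit clearance.
    span = max(x) - min(x)
    penalty = sum(max(0.0, 1.0 - abs(a - b))
                  for i, a in enumerate(x) for b in x[i + 1:])
    return span + 10.0 * penalty

def anneal(x, t0=5.0, cooling=0.995, steps=20000):
    e = energy(x)
    best_x, best_e = x[:], e
    t = t0
    for _ in range(steps):
        # Propose a random perturbation of one component's position.
        i = random.randrange(len(x))
        cand = x[:]
        cand[i] += random.uniform(-1.0, 1.0)
        ce = energy(cand)
        # Metropolis criterion: always accept improvements,
        # sometimes accept worse states while the temperature is high.
        if ce < e or random.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x[:], e
        t *= cooling
    return best_x, best_e

x0 = [random.uniform(0.0, 20.0) for _ in range(5)]
x_opt, e_opt = anneal(x0)
print(round(e_opt, 2))
```

For five components with unit clearance the ideal energy is 4.0 (unit spacing, zero penalty); the annealer should end close to that. The real method replaces this toy objective with packing density and geometric interference checks for arbitrarily shaped 3-D components.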

  2. Development of three-dimensional transport code by the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1985-01-01

    Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an S_n code. (author)

  3. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)
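
The stability property this record describes can be illustrated with a minimal scalar analogue (the constants and forcing below are invented; the full MHD scheme is not reproduced): treating the stiff fast term implicitly keeps the update stable at time steps far beyond the explicit limit, so the step size is set by the slow dynamics alone.

```python
import math

# Toy analogue of a semi-implicit scheme:
# du/dt = -k_fast * u + f_slow(t), with the stiff term handled implicitly.
k_fast = 1000.0
dt = 0.01            # far above the explicit-Euler limit dt < 2 / k_fast
steps = 1000
u = 1.0
for n in range(steps):
    f_slow = math.sin(0.1 * n * dt)      # slow forcing, well resolved by dt
    # Backward-Euler update for the stiff term (unconditionally stable):
    u = (u + dt * f_slow) / (1.0 + dt * k_fast)
print(abs(u) < 10)  # stays bounded; explicit Euler would diverge at this dt
```

With explicit Euler the amplification factor would be 1 - dt * k_fast = -9, so the solution would blow up; the implicit factor 1 / (1 + dt * k_fast) is always below 1 in magnitude.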

  4. Chemometric analysis of MALDI mass spectrometric images of three-dimensional cell culture systems.

    Science.gov (United States)

    Weaver, Eric M; Hummon, Amanda B; Keithley, Richard B

    2015-09-07

    As imaging mass spectrometry (IMS) has grown in popularity in recent years, the applications of this technique have become increasingly diverse. Currently there is a need for sophisticated data processing strategies that maximize the information gained from large IMS data sets. Traditional two-dimensional heat maps of single ions generated in IMS experiments lack analytical detail, yet manual analysis of multiple peaks across hundreds of pixels within an entire image is time-consuming, tedious and subjective. Here, various chemometric methods were used to analyze data sets obtained by matrix-assisted laser desorption/ionization (MALDI) IMS of multicellular spheroids. HT-29 colon carcinoma multicellular spheroids are an excellent in vitro model system that mimics the three-dimensional morphology of tumors in vivo. These data are especially challenging to process because, while different microenvironments exist, the cells are clonal, which can result in strong similarities in the mass spectral profiles within the image. In this proof-of-concept study, a combination of principal component analysis (PCA), clustering methods, and linear discriminant analysis was used to identify unique spectral features present in spatially heterogeneous locations within the image. Overall, the application of these exploratory data analysis tools allowed for the isolation and detection of proteomic changes within IMS data sets in an easy, rapid, and unsupervised manner. Furthermore, a simplified, non-mathematical theoretical introduction to the techniques is provided in addition to full command routines within the MATLAB programming environment, allowing others to easily utilize and adapt this approach.
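
The PCA-plus-clustering stage of such a pipeline can be sketched as follows; the "spectra" here are synthetic stand-ins for pixel mass spectra (the record's MATLAB routines are not reproduced), with two profiles mimicking two microenvironments:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic IMS data set: 400 pixels x 150 m/z bins, drawn from two
# spectral profiles plus noise.
base_a = rng.random(150)
base_b = rng.random(150)
labels_true = rng.integers(0, 2, size=400)
spectra = np.where(labels_true[:, None] == 0, base_a, base_b)
spectra += rng.normal(scale=0.05, size=spectra.shape)

# PCA compresses each spectrum; k-means then groups pixels by profile,
# segmenting the image in an unsupervised manner.
scores = PCA(n_components=5).fit_transform(spectra)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# Cluster labels are arbitrary, so check agreement up to relabeling.
agreement = max(np.mean(clusters == labels_true),
                np.mean(clusters != labels_true))
print(agreement > 0.95)
```

On real spheroid images the cluster map plays the role of the "regions" whose spectral differences are then examined with discriminant analysis.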

  5. Two-dimensional analysis of shape-fabric using projections of digitized lines in a plane

    Science.gov (United States)

    Panozzo, Renée H.

    1983-06-01

    A method of two-dimensional shape-fabric analysis is presented which is based on the projection of lines or ellipses on the x-axis. If a set of lines or ellipses displays a preferred orientation, the average length of projection of the lines or ellipses on the x-axis changes with direction in the x-y plane. The proposed method involves use of Fortran programs which evaluate the average length of projection of lines (e.g., outlines of ooids, pressure-solution seams) while the fabric is rotated by small increments in a counterclockwise sense about the origin of the x-y plane. Data acquisition is by digitizing the lines on an enlarged photograph. The sets of x-y coordinates that are thus obtained serve as input for the programs. The basic equations of the method are explained and application to a sedimentary fabric is demonstrated. The proposed method is both fast and sensitive. It is especially useful if the particle outlines are nearly spherical or if the fabric is only weakly developed, yet the method is not restricted to weak fabrics. Also, if the fabric is due to deformation, this method yields the axes of the two-dimensional strain ellipse.
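
The core computation, the average projected length of digitized lines as the fabric is rotated in small increments, can be sketched like this (a Python stand-in for the record's Fortran programs; the synthetic "fabric" is an assumption for illustration):

```python
import numpy as np

def mean_projection(lines, theta):
    # Each digitized line is a polyline of (x, y) vertices; its projection on
    # the x-axis is the summed |dx| of its segments after rotating the fabric
    # counterclockwise by theta.
    c, s = np.cos(theta), np.sin(theta)
    total = 0.0
    for pts in lines:
        x = pts[:, 0] * c - pts[:, 1] * s   # rotated x-coordinates
        total += np.abs(np.diff(x)).sum()
    return total / len(lines)

# Synthetic fabric: 200 unit segments preferentially oriented ~30 degrees
# from the x-axis, with some angular scatter.
rng = np.random.default_rng(0)
angles = np.deg2rad(30) + rng.normal(scale=0.15, size=200)
lines = [np.array([[0.0, 0.0], [np.cos(a), np.sin(a)]]) for a in angles]

thetas = np.deg2rad(np.arange(0, 180, 1))
profile = [mean_projection(lines, t) for t in thetas]
# The projection is maximal at the rotation that brings the preferred
# orientation onto the x-axis.
best = np.rad2deg(thetas[int(np.argmax(profile))])
print(best)
```

For segments near 30 degrees the peak appears near theta = 150 degrees, the rotation that lays them along the x-axis; the shape of the whole profile carries the fabric strength.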

  6. Flows method in global analysis

    International Nuclear Information System (INIS)

    Duong Minh Duc.

    1994-12-01

    We study the gradient flows method for W^{r,p}(M,N), where M and N are Riemannian manifolds and r may be less than m/p. We localize some global analysis problems by constructing gradient flows which only change the value of any u in W^{r,p}(M,N) in a local chart of M. (author). 24 refs

  7. A novel three-dimensional mesh deformation method based on sphere relaxation

    International Nuclear Information System (INIS)

    Zhou, Xuan; Li, Shuixiang

    2015-01-01

    In our previous work (2013) [19], we developed a disk relaxation based mesh deformation method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative-volume elements and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates the deformed mesh with high quality, especially regarding complex boundaries and large deformations.

  8. A novel three-dimensional mesh deformation method based on sphere relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xuan [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China); Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China); Li, Shuixiang, E-mail: lsx@pku.edu.cn [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China)

    2015-10-01

    In our previous work (2013) [19], we developed a disk relaxation based mesh deformation method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative-volume elements and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates the deformed mesh with high quality, especially regarding complex boundaries and large deformations.
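
The smoothing stage mentioned in these two records can be illustrated with generic Laplacian smoothing (a simpler technique than the paper's sphere relaxation, used here only to show the idea of relaxing interior nodes while keeping deformed boundary nodes fixed; the tiny mesh is invented):

```python
import numpy as np

def laplacian_smooth(coords, neighbours, fixed, iters=50, relax=0.5):
    # Each interior node moves toward the average of its neighbours;
    # nodes in `fixed` (the deformed boundary) never move.
    coords = coords.copy()
    for _ in range(iters):
        new = coords.copy()
        for i, nbrs in neighbours.items():
            if i in fixed:
                continue
            new[i] = (1 - relax) * coords[i] + relax * coords[nbrs].mean(axis=0)
        coords = new
    return coords

# 1-D chain of 5 nodes with a badly distorted interior; the ends are fixed.
coords = np.array([[0.0], [0.5], [3.0], [2.9], [4.0]])
neighbours = {1: [0, 2], 2: [1, 3], 3: [2, 4]}
fixed = {0, 4}
smoothed = laplacian_smooth(coords, neighbours, fixed)
print(np.round(smoothed.ravel(), 2))
```

The interior relaxes toward the evenly spaced positions 1, 2, 3 between the fixed ends, the 1-D analogue of recovering element quality after a boundary deformation.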

  9. Three-dimensional reconstruction volume: a novel method for volume measurement in kidney cancer.

    Science.gov (United States)

    Durso, Timothy A; Carnell, Jonathan; Turk, Thomas T; Gupta, Gopal N

    2014-06-01

    The role of volumetric estimation is becoming increasingly important in the staging, management, and prognostication of benign and cancerous conditions of the kidney. We evaluated the use of three-dimensional reconstruction volume (3DV) in determining renal parenchymal volumes (RPV) and renal tumor volumes (RTV). We compared 3DV with the currently available methods of volume assessment and determined its interuser reliability. RPV and RTV were assessed in 28 patients who underwent robot-assisted laparoscopic partial nephrectomy for kidney cancer. Patients with a preoperative creatinine level of kidney pre- and postsurgery overestimated 3D reconstruction volumes by 15% to 102% and 12% to 101%, respectively. In addition, volumes obtained from 3DV displayed high interuser reliability regardless of experience. 3DV provides a highly reliable way of assessing kidney volumes. Given that 3DV takes into account visible anatomy, the differences observed using previously published methods can be attributed to the failure of geometry to accurately approximate kidney or tumor shape. 3DV provides a more accurate, reproducible, and clinically useful tool for urologists looking to improve patient care using analysis related to volume.

  10. Quantitative Multifactor Dimensionality Reduction Method for Detecting Gene-Gene Interaction

    Science.gov (United States)

    Gilbert-Diamond, Diane; Andrews, Peter; Moore, Jason H.; Gui, Jiang

    2013-01-01

    Background and Objective: The problem of identifying SNP-SNP interactions in case-control studies has been studied extensively and a number of new techniques have been developed. Little progress has been made, however, in the analysis of SNP-SNP interactions in relation to continuous data. Methods: We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (Quant-MDR) method handles continuous data by modifying MDR's constructive induction algorithm to use a t-test. Results: We then applied Quant-MDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. We identified BR2_58CT&ATR1AC as the top SNP-SNP interaction associated with tissue plasminogen activator (tPA) level for males, and ACEID&BRB2EX1 as the top interaction for female tPA expression. Discussion and Conclusions: Quant-MDR is capable of detecting interaction models with weak main effects. These epistatic models tend to be dropped by traditional linear regression approaches. With improved efficiency in handling genome-wide datasets, Quant-MDR will play an important role in a research strategy that embraces the complexity of the genotype-phenotype mapping relationship.
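
A minimal sketch of the t-test-based constructive induction described here, under stated assumptions (the pooling rule, the coding and the synthetic epistatic trait are illustrative; this is not the authors' implementation, which also cross-validates and searches over SNP pairs):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 600
snp1 = rng.integers(0, 3, size=n)    # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, size=n)
# Purely epistatic trait: the effect depends on the SNP pair,
# not on either SNP alone.
trait = (snp1 == snp2).astype(float) + rng.normal(scale=0.5, size=n)

def quant_mdr_score(g1, g2, y):
    """MDR-style constructive induction for a quantitative trait:
    label each 2-SNP genotype cell 'high' if its mean trait exceeds the
    overall mean, pool samples into high/low groups, and t-test them."""
    overall = y.mean()
    high = np.zeros(len(y), dtype=bool)
    for a in range(3):
        for b in range(3):
            cell = (g1 == a) & (g2 == b)
            if cell.any() and y[cell].mean() > overall:
                high[cell] = True
    t, _ = stats.ttest_ind(y[high], y[~high])
    return abs(t)

score = quant_mdr_score(snp1, snp2, trait)
print(score > 5)
```

Because the signal lives only in the joint genotype cells, a large |t| appears for the interacting pair even though each SNP's marginal effect is weak, which is exactly the kind of model a main-effects regression would drop.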

  11. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    Science.gov (United States)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with the advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI). Each of these parameters has a distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised nonlinear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we apply NLDR methods to high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that an embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
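
The manifold-embedding idea behind NLDR can be sketched with Isomap on a standard toy manifold (the swiss roll stands in for voxel-wise multiparametric MRI values lying on a curved low-dimensional surface; the specific NLDR variants and data of the record are not reproduced):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Points on a curved 2-D manifold embedded in 3-D; t is the position
# along the roll, the "hidden" low-dimensional coordinate.
X, t = make_swiss_roll(n_samples=800, random_state=0)

# Isomap preserves geodesic distances along the manifold while
# unrolling it into a 2-D embedded map.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (800, 2)

# The dominant embedded coordinate should track the roll parameter t.
corr = abs(np.corrcoef(embedding[:, 0], t)[0, 1])
print(round(corr, 2))
```

In the imaging application each voxel's parameter vector (T1WI, T2WI, DWI, PWI, ...) plays the role of a point in the high-dimensional space, and the embedded map is displayed as an image for tissue delineation.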

  12. Crater ejecta scaling laws: fundamental forms based on dimensional analysis

    International Nuclear Information System (INIS)

    Housen, K.R.; Schmidt, R.M.; Holsapple, K.A.

    1983-01-01

    A model of crater ejecta is constructed using dimensional analysis and a recently developed theory of energy and momentum coupling in cratering events. General relations are derived that provide a rationale for scaling laboratory measurements of ejecta to larger events. Specific expressions are presented for ejection velocities and ejecta blanket profiles in two limiting regimes of crater formation: the so-called gravity and strength regimes. In the gravity regime, ejecta velocities at geometrically similar launch points within craters vary as the square root of the product of crater radius and gravity. This relation implies geometric similarity of ejecta blankets. That is, the thickness of an ejecta blanket as a function of distance from the crater center is the same for all sizes of craters if the thickness and range are expressed in terms of crater radii. In the strength regime, ejecta velocities are independent of crater size. Consequently, ejecta blankets are not geometrically similar in this regime. For points away from the crater rim the expressions for ejecta velocities and thickness take the form of power laws. The exponents in these power laws are functions of an exponent, α, that appears in crater radius scaling relations. Thus experimental studies of the dependence of crater radius on impact conditions determine scaling relations for ejecta. Predicted ejection velocities and ejecta-blanket profiles, based on measured values of α, are compared to existing measurements of velocities and debris profiles.
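
The two limiting regimes stated in the abstract can be summarized symbolically (notation assumed here, not taken from the paper: v_e ejection velocity at a geometrically similar launch point, R crater radius, g gravitational acceleration):

```latex
v_e \;\propto\; \sqrt{gR} \quad \text{(gravity regime)},
\qquad
v_e \;\text{independent of}\; R \quad \text{(strength regime)}.
```

The first relation is what makes ejecta blankets geometrically similar in the gravity regime once thickness and range are expressed in crater radii; away from the rim, both velocity and blanket thickness follow power laws whose exponents are functions of the crater-radius scaling exponent α.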

  13. Three-dimensional cephalometric analysis in orthodontics: a systematic review.

    Science.gov (United States)

    Pittayapat, P; Limchaichana-Bolstad, N; Willems, G; Jacobs, R

    2014-05-01

    The scientific evidence for 3D cephalometry in orthodontics has not been well established. The aim of this systematic review was to evaluate the evidence for the diagnostic efficacy of 3D cephalometry in orthodontics, focusing on measurement accuracy and reproducibility of landmark identification. PubMed, EMBASE and the Cochrane Library (from inception to March 13, 2012) were searched. Search terms included: cone-beam computed tomography; tomography, spiral computed; imaging, three-dimensional; orthodontics. Two reviewers read the retrieved articles and selected relevant publications based on pre-established inclusion criteria. The selected publications had to elucidate the hierarchical model of the efficacy of diagnostic imaging systems by Fryback and Thornbury. The data were then extracted according to two protocols based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. Next, levels of evidence were categorized into three groups: low, moderate and high. Database search strategies yielded 571 publications, and hand searching identified 50 additional studies. A total of 35 publications were included in this review. Limited evidence for the diagnostic efficacy of 3D cephalometry was found; only 6 studies met the criteria for a moderate level of evidence. Accordingly, this systematic review reveals that there is still a need for methodologically standardized studies on 3D cephalometric analysis. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Dimensional analysis, similarity, analogy, and the simulation theory

    International Nuclear Information System (INIS)

    Davis, A.A.

    1978-01-01

    Dimensional analysis, similarity, analogy, and cybernetics are shown to be four consecutive steps in the application of simulation theory. This paper introduces the classes of phenomena which follow the same formal mathematical equations as models of the natural laws, and the sphere of restraints within which groups of phenomena admit simplified nondimensional mathematical equations. Simulation by similarity within a specific field of physics, by analogy across two or more different fields of physics, and by cybernetics across two or more fields of mathematics, physics, biology, economics, politics, sociology, etc., appears as a unified theory which permits one to transfer the results of experiments from models, conveniently selected to meet the conditions of research, construction, and measurement in the laboratory, to the originals which are the primary objectives of the research. Some interesting conclusions which cannot be avoided in the use of simplified nondimensional mathematical equations as models of natural laws are presented, and limitations on the use of simulation theory based on assumed simplifications are recognized. This paper argues that it is necessary, in scientific research, to write mathematical models of general laws which can be applied to nature in its entirety, and proposes extending the second law of thermodynamics, as the generalized law of entropy, to model life and its activities. It shows that the physical study and the philosophical interpretation of phenomena and natural laws cannot be separated in scientific work; they are interconnected, and neither can be put above the other.

  15. Three-dimensional analysis of anisotropic spatially reinforced structures

    Science.gov (United States)

    Bogdanovich, Alexander E.

    1993-01-01

    A material-adaptive three-dimensional analysis of inhomogeneous structures, based on the meso-volume concept and the application of deficient spline functions for displacement approximations, is proposed. The general methodology is demonstrated using the example of a brick-type mosaic parallelepiped arbitrarily composed of anisotropic meso-volumes. A partition of each meso-volume into sub-elements, the application of deficient spline functions for a local approximation of displacements and, finally, the use of the variational principle allow one to obtain displacements, strains, and stresses at any point within the structural part. All of the necessary external and internal boundary conditions (including the conditions of continuity of transverse stresses at interfaces between adjacent meso-volumes) can be satisfied with requisite accuracy by increasing the density of the sub-element mesh. The application of the methodology to textile composite materials is described. Several numerical examples for woven and braided rectangular composite plates and stiffened panels under transverse bending are considered, and some typical effects of stress concentration due to material inhomogeneities are demonstrated.

  16. Energy analysis of four dimensional extended hyperbolic Scarf I plus three dimensional separable trigonometric noncentral potentials using SUSY QM approach

    International Nuclear Information System (INIS)

    Suparmi, A.; Cari, C.; Deta, U. A.; Handhika, J.

    2016-01-01

    The non-relativistic energies and wave functions of the extended hyperbolic Scarf I potential plus a separable non-central shape-invariant potential in four dimensions are investigated using the Supersymmetric Quantum Mechanics (SUSY QM) approach. The three-dimensional separable non-central shape-invariant angular potential consists of trigonometric Scarf II, Manning-Rosen and Poschl-Teller potentials. The four-dimensional Schrodinger equation with the separable shape-invariant non-central potential is reduced into four one-dimensional Schrodinger equations through the variable separation method. Using SUSY QM, the non-relativistic energies and radial wave functions are obtained from the radial Schrodinger equation, while the orbital quantum numbers and angular wave functions are obtained from the angular Schrodinger equations. The extended potential contains perturbation terms, which cause a decrease in the energy spectra of the Scarf I potential. (paper)

  17. Hyperspectral Dimensionality Reduction by Tensor Sparse and Low-Rank Graph-Based Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Lei Pan

    2017-05-01

    Recently, sparse and low-rank graph-based discriminant analysis (SLGDA) has yielded satisfactory results in hyperspectral image (HSI) dimensionality reduction (DR), for which sparsity and low-rankness are simultaneously imposed to capture both the local and global structure of hyperspectral data. However, SLGDA fails to exploit the spatial information. To address this problem, a tensor sparse and low-rank graph-based discriminant analysis (TSLGDA) is proposed in this paper. By regarding the hyperspectral data cube as a third-order tensor, small local patches centered at the training samples are extracted for the TSLGDA framework to maintain the structural information, resulting in a more discriminative graph. Subsequently, dimensionality reduction is performed on the tensorial training and testing samples to reduce data redundancy. Experimental results on three real-world hyperspectral datasets demonstrate that the proposed TSLGDA algorithm greatly improves the classification performance in the low-dimensional space when compared to state-of-the-art DR methods.
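
The patch-extraction step, treating the hyperspectral cube as a third-order tensor and taking small spatial patches around training pixels, can be sketched as follows (cube size, patch size and centers are invented for illustration; the graph construction itself is not reproduced):

```python
import numpy as np

# Synthetic hyperspectral cube: 20 x 20 pixels, 50 spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((20, 20, 50))

def extract_patches(cube, centers, size=5):
    """Extract size x size x bands tensor patches centered on training
    pixels, keeping the local spatial structure around each sample."""
    half = size // 2
    patches = []
    for r, c in centers:
        patches.append(cube[r - half:r + half + 1, c - half:c + half + 1, :])
    return np.stack(patches)   # (n_samples, size, size, bands)

centers = [(5, 5), (10, 12), (14, 7)]
patches = extract_patches(cube, centers)
print(patches.shape)  # (3, 5, 5, 50)
```

Each third-order patch, rather than a single pixel spectrum, then becomes the sample on which the sparse and low-rank graph is built, which is how the method injects spatial context into the DR step.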

  18. Analysis of two-dimensional flow of epoxy fluids through woven glass fabric

    International Nuclear Information System (INIS)

    Schutz, J.B.; Smith, K.B.

    1997-01-01

    Fabrication of magnet coils for the International Thermonuclear Experimental Reactor will require vacuum pressure impregnation of epoxy resin into the glass fabric of the insulation system. Flow of a fluid through a packed bed of woven glass fabric is extremely complicated, and semiempirical methods must be used to analyze these flows. The previous one-dimensional model has been modified for analysis of two-dimensional isotropic flow of epoxy resins through woven glass fabric. Several two-dimensional flow experiments were performed to validate the analysis and to determine the permeabilities of several fabric weave types. The semiempirical permeability is shown to be a characteristic of the fabric weave and, once determined, may be used to analyze flow of fluids of differing viscosities. Plain weave has a lower permeability than satin weave fabric, possibly due to the increased tortuosity of the preferential flow paths along fiber tows. A flow radius of approximately 2 meters through satin weave fabric in 20 hours is predicted for a fluid viscosity of 0.10 Pa·s (100 cP), characteristic of VPI resins.
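
The semiempirical analysis rests on Darcy's law for flow in a porous medium (standard notation, assumed here: q flux, k the weave-specific permeability, μ fluid viscosity, p pressure):

```latex
q = -\frac{k}{\mu}\,\nabla p
```

Because the flux scales as 1/μ, a permeability k determined with one test fluid transfers directly to resins of other viscosities, which is why a single measured k per weave supports the 100 cP flow-radius prediction.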

  19. Quantitative analysis of facial palsy using a three-dimensional facial motion measurement system.

    Science.gov (United States)

    Katsumi, Sachiyo; Esaki, Shinichi; Hattori, Koosuke; Yamano, Koji; Umezaki, Taizo; Murakami, Shingo

    2015-08-01

    The prognosis for facial nerve palsy (FNP) depends on its severity. Currently, many clinicians use the Yanagihara, House-Brackmann, and/or Sunnybrook grading systems to assess FNP. Although these assessments are performed by experts, inter- and intra-observer disagreements have been demonstrated. Quantitative, objective analysis of the degree of FNP would be preferable for monitoring functional changes and for planning and evaluating therapeutic interventions in patients with FNP. Numerous two-dimensional (2-D) assessments have been proposed; however, the limitations of 2-D assessment have been reported. The purpose of this study was to introduce a three-dimensional (3-D) image generation system for the analysis of FNP and to show the correlation between the severity of FNP assessed by this method and two conventional systems. Five independent facial motions, rest, eyebrow raise, gentle eye closure, full smile with lips open and whistling, were recorded with our system and the images were then analyzed using our software. The regional and gross facial symmetries were analyzed. The predicted scores were calculated and compared to the Yanagihara and House-Brackmann grading scores. We analyzed 15 normal volunteers and 42 patients with FNP. The results showed that 3-D analysis could measure mouth movement in the anteroposterior direction, whereas two-dimensional analysis could not. The system results showed good correlation with the clinical results from the Yanagihara (r² = 0.86) and House-Brackmann (r² = 0.81) grading scales. This objective method can produce consistent results that align with the two conventional systems and is therefore ideally suited for use in a routine clinical setting. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
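
A facial-symmetry measure of the kind such systems compute can be sketched as follows; the landmark displacement vectors and the exact index are invented for illustration, not taken from the record:

```python
import numpy as np

# Hypothetical 3-D displacement vectors of mirrored left/right landmark
# pairs during one facial motion (values invented).
left = np.array([[1.2, 0.4, 0.3], [0.8, 0.1, 0.2]])    # affected side
right = np.array([[2.4, 0.8, 0.6], [1.6, 0.2, 0.4]])   # healthy side

mag_l = np.linalg.norm(left, axis=1)
mag_r = np.linalg.norm(right, axis=1)
# Ratio near 1 means symmetric motion; near 0 means complete one-sided palsy.
symmetry = (np.minimum(mag_l, mag_r) / np.maximum(mag_l, mag_r)).mean()
print(round(symmetry, 2))  # 0.5: the affected side moves half as far
```

Note that the displacements are full 3-D vectors, so anteroposterior motion (e.g. of the mouth) contributes to the index, which is exactly what a 2-D assessment cannot capture.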

  20. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision-making. It has many practical applications, including statistical tests and information theory. Due to the statistical and computational challenges of high dimensionality, little work has been done in the literature on estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrices. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparisons among the methods. We also provide practical guidelines, based on the sample size, the dimension, and the correlation of the data set, for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of covariance matrix estimation.
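
The comparison the record describes can be sketched with two of the many possible estimators, the sample covariance and Ledoit-Wolf shrinkage (the paper's eight methods are not reproduced; the simulation setup below is an assumption), working with the log-determinant for numerical stability:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
p, n = 50, 200                      # dimension is a sizeable fraction of n
true_cov = np.diag(np.linspace(1.0, 2.0, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

# slogdet avoids overflow/underflow in the product of eigenvalues.
true_logdet = np.linalg.slogdet(true_cov)[1]
sample_logdet = np.linalg.slogdet(np.cov(X, rowvar=False))[1]
lw_logdet = np.linalg.slogdet(LedoitWolf().fit(X).covariance_)[1]

# The sample covariance systematically underestimates the log-determinant
# when p/n is non-negligible; shrinkage pulls the eigenvalues together
# and reduces that bias.
print(abs(lw_logdet - true_logdet) < abs(sample_logdet - true_logdet))
```

Repeating this over many draws, dimensions and correlation structures, and over more estimators, is exactly the kind of simulation comparison the record summarizes.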