WorldWideScience

Sample records for isotropic computational analysis

  1. Interactively variable isotropic resolution in computed tomography

    International Nuclear Information System (INIS)

    Lapp, Robert M; Kyriakou, Yiannis; Kachelriess, Marc; Wilharm, Sylvia; Kalender, Willi A

    2008-01-01

    An individual balancing between spatial resolution and image noise is necessary to fulfil the diagnostic requirements in medical CT imaging. In order to change influencing parameters, such as reconstruction kernel or effective slice thickness, additional raw-data-dependent image reconstructions have to be performed. Therefore, the noise versus resolution trade-off is time consuming and not interactively applicable. Furthermore, isotropic resolution, expressed by an equivalent point spread function (PSF) in every spatial direction, is important for the undistorted visualization and quantitative evaluation of small structures independent of the viewing plane. Theoretically, isotropic resolution can be obtained by matching the in-plane and through-plane resolution with the aforementioned parameters. Practically, however, the user is not assisted in doing so by current reconstruction systems and therefore isotropic resolution is not commonly achieved, in particular not at the desired resolution level. In this paper, an integrated approach is presented for equalizing the in-plane and through-plane spatial resolution by image filtering. The required filter kernels are calculated from previously measured PSFs in x/y- and z-direction. The concepts derived are combined with a variable resolution filtering technique. Both approaches are independent of CT raw data and operate only on reconstructed images which allows for their application in real time. Thereby, the aim of interactively variable, isotropic resolution is achieved. Results were evaluated quantitatively by measuring PSFs and image noise, and qualitatively by comparing the images to direct reconstructions regarded as the gold standard. Filtered images matched direct reconstructions with arbitrary reconstruction kernels with standard deviations in difference images of typically between 1 and 17 HU. Isotropic resolution was achieved within 5% of the selected resolution level. Processing times of 20-100 ms per frame
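
    As a rough illustration of the resolution-matching idea described above (a minimal sketch assuming Gaussian PSFs; the function and parameter names are illustrative, not the authors' implementation), the in-plane and through-plane PSFs can be equalized by convolving the sharper direction with a Gaussian whose variance is the difference of the squared PSF widths:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def match_resolution(volume, sigma_xy, sigma_z, voxel_xy, voxel_z):
        """Filter a reconstructed CT volume (z, y, x) so the effective PSF is
        approximately isotropic at the level of the broader direction.
        sigma_xy, sigma_z: measured Gaussian PSF widths [mm]; voxel_*: spacings [mm]."""
        sigma_target = max(sigma_xy, sigma_z)            # the coarser of the two resolutions
        out = volume.astype(np.float64)
        if sigma_xy < sigma_target:                      # widen the in-plane PSF
            s = np.sqrt(sigma_target**2 - sigma_xy**2) / voxel_xy
            out = gaussian_filter1d(gaussian_filter1d(out, s, axis=2), s, axis=1)
        if sigma_z < sigma_target:                       # widen the through-plane PSF
            s = np.sqrt(sigma_target**2 - sigma_z**2) / voxel_z
            out = gaussian_filter1d(out, s, axis=0)
        return out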

  2. Computations of Quasiconvex Hulls of Isotropic Sets

    Czech Academy of Sciences Publication Activity Database

    Heinz, S.; Kružík, Martin

    2017-01-01

    Roč. 24, č. 2 (2017), s. 477-492 ISSN 0944-6532 R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985556 Keywords : quasiconvexity * isotropic compact sets * matrices Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.496, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/kruzik-0474874.pdf

  3. Visualization and computer graphics on isotropically emissive volumetric displays.

    Science.gov (United States)

    Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S

    2009-01-01

    The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.
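
    The contrast between what such a display shows and conventional 2D volume rendering can be made concrete with a toy one-dimensional example (hypothetical voxel values; this is not the authors' lumi-volume algorithm): an isotropically emissive display simply accumulates emission along a ray, whereas front-to-back compositing on a 2D screen attenuates later samples by the accumulated opacity.

    import numpy as np

    emission = np.array([0.0, 0.2, 0.9, 0.1, 0.0])   # voxel emission along one ray (made up)
    opacity  = np.array([0.0, 0.1, 0.8, 0.1, 0.0])   # opacity used only by the 2D renderer

    # Isotropically emissive display: pure accumulation (X-ray-like), no occlusion.
    xray_like = emission.sum()

    # Conventional front-to-back emission-absorption compositing on a 2D screen.
    color, alpha = 0.0, 0.0
    for e, a in zip(emission, opacity):
        color += (1.0 - alpha) * e * a
        alpha += (1.0 - alpha) * a

    # A lumi-volume is built so that pure accumulation of its (modified) voxel
    # values approximates `color`, i.e. the shaded 2D-style result.
    print(xray_like, color)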

  4. Direct method of analysis of an isotropic rectangular plate

    African Journals Online (AJOL)


    This work evaluates the static analysis of an isotropic rectangular plate with various ... The energy method according to Ritz is used to obtain the total potential energy of the plate by employing ... for rectangular plate analysis, as the behavior of the ... results obtained by previous research work.

  5. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    Science.gov (United States)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
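
    A hedged sketch of the reconstruction step described above (not the authors' code): for each elevational line, the rotational scans over 0-180 degrees form a sinogram, which can be inverted with a standard inverse Radon transform (filtered back-projection), here using the current scikit-image API.

    import numpy as np
    from skimage.transform import iradon

    def reconstruct_elevational_slice(sinogram, angles_deg):
        """sinogram: 2-D array (detector position x rotation angle) for one slice;
        angles_deg: projection angles covering 0-180 degrees in small steps."""
        return iradon(sinogram, theta=angles_deg, filter_name='ramp', circle=False)

    # usage with synthetic data:
    # angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    # img = reconstruct_elevational_slice(sinogram, angles)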

  6. Computer simulation of model cohesive powders: Plastic consolidation, structural changes and elasticity under isotropic loads

    OpenAIRE

    Gilabert, Francisco; Roux, Jean-Noël; Castellanos, Antonio

    2008-01-01

    The quasistatic behavior of a simple 2D model of a cohesive powder under isotropic loads is investigated by Discrete Element simulations. The loose packing states, as studied in a previous paper, undergo important structural changes under growing confining pressure P, while solid fraction Φ irreversibly increases by large amounts. The system state goes through three stages, with different forms of the plastic consolidation curve Φ(P*), under growing reduced press...

  7. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    Science.gov (United States)

    Gao, Zhiwen; Zhou, Youhe

    2015-04-01

    Real fundamental solution for fracture problem of transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model where the critical current density is independent of the flux density. Fracture analysis is performed by the methods of singular integral equations which are solved numerically by the Gauss-Lobatto-Chebyshev (GLC) collocation method. To guarantee a satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results of fracture parameters are obtained and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIF) are discussed.

  8. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
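
    As an illustration of the box-filtered subgrid-stress evaluation mentioned above (a minimal numpy sketch on a locally stored periodic velocity field; the names are illustrative and this is not the database's client API), the SGS stress is the filtered product of velocities minus the product of filtered velocities:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sgs_stress(u, v, w, filter_width_pts):
        """Six independent components of tau_ij = <u_i u_j> - <u_i><u_j> for a
        periodic velocity field on a uniform grid, using a box (top-hat) filter."""
        box = lambda f: uniform_filter(f, size=filter_width_pts, mode='wrap')
        tau = {}
        for name, a, b in [('11', u, u), ('22', v, v), ('33', w, w),
                           ('12', u, v), ('13', u, w), ('23', v, w)]:
            tau[name] = box(a * b) - box(a) * box(b)
        return tau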

  9. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    International Nuclear Information System (INIS)

    Gao, Zhiwen; Zhou, Youhe

    2015-01-01

    Highlights: • We studied the fracture problem in HTS based on real fundamental solutions. • When the thickness of the HTS strip increases, the SIF decreases. • A higher applied field leads to a larger stress intensity factor. • The greater the critical current density, the smaller the values of the SIF. - Abstract: Real fundamental solution for fracture problem of transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E–J constitutive law is characterized by the Bean model where the critical current density is independent of the flux density. Fracture analysis is performed by the methods of singular integral equations which are solved numerically by the Gauss–Lobatto–Chebyshev (GLC) collocation method. To guarantee a satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results of fracture parameters are obtained and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIF) are discussed

  10. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    Science.gov (United States)

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of ECM or shear stress distribution on the cells, but less is known about the prediction of shear stress on the individual fibers or fiber networks despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for different structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. For comparison of the permeability, the present computational models were matched well with previous models, which justify our computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability was almost the same. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present
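
    The Darcy's-law step mentioned above reduces to a one-line formula; the sketch below (illustrative symbols and units, not the author's code) estimates permeability from the computed flow rate and pressure drop across the fibrous domain.

    def darcy_permeability(q_flow, mu, length, area, delta_p):
        """k = Q * mu * L / (A * dP); with SI inputs the permeability k is in m^2.
        q_flow: volumetric flow rate, mu: dynamic viscosity, length: domain length
        along the flow, area: cross-sectional area, delta_p: pressure drop."""
        return q_flow * mu * length / (area * delta_p)

    # example with made-up values:
    # k = darcy_permeability(q_flow=1e-9, mu=1e-3, length=1e-4, area=1e-8, delta_p=10.0)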

  11. A computer-aided system for automatic extraction of femur neck trabecular bone architecture using isotropic volume construction from clinical hip computed tomography images.

    Science.gov (United States)

    Vivekanandhan, Sapthagirivasan; Subramaniam, Janarthanam; Mariamichael, Anburajan

    2016-10-01

    Hip fractures due to osteoporosis are increasing progressively across the globe. It is also difficult for those fractured patients to undergo dual-energy X-ray absorptiometry scans due to its complicated protocol and its associated cost. The utilisation of computed tomography for the fracture treatment has become common in clinical practice. It would be helpful for orthopaedic clinicians if they could get some additional information related to bone strength for better treatment planning. The aim of our study was to develop an automated system to segment the femoral neck region, extract the cortical and trabecular bone parameters, and assess the bone strength using an isotropic volume construction from clinical computed tomography images. The right hip computed tomography and right femur dual-energy X-ray absorptiometry measurements were taken from 50 south-Indian females aged 30-80 years. Each computed tomography image volume was re-constructed to form isotropic volumes. An automated system incorporating active contour models was used to segment the neck region. A minimum distance boundary method was applied to isolate the cortical and trabecular bone components. The trabecular bone was enhanced and segmented using a trabecular enrichment approach. The cortical and trabecular bone features were extracted and statistically compared with dual-energy X-ray absorptiometry measured femur neck bone mineral density. The extracted bone measures demonstrated a significant correlation with neck bone mineral density (r > 0.7). The use of computed tomography images scanned with low dose could eventually be helpful in osteoporosis diagnosis and its treatment planning. © IMechE 2016.

  12. Conditional analysis near strong shear layers in DNS of isotropic turbulence at high Reynolds number

    Energy Technology Data Exchange (ETDEWEB)

    Ishihara, Takashi; Kaneda, Yukio [Graduate School of Engineering, Nagoya University (Japan); Hunt, Julian C R, E-mail: ishihara@cse.nagoya-u.ac.jp [University College of London (United Kingdom)

    2011-12-22

    Data analysis of high resolution DNS of isotropic turbulence with the Taylor scale Reynolds number Rλ = 1131 shows that there are thin shear layers consisting of a cluster of strong vortex tubes with typical diameter of order 10η, where η is the Kolmogorov length scale. The widths of the layers are of the order of the Taylor micro length scale. According to the analysis of one of the layers, the coarse-grained vorticity in the layer is aligned approximately in the plane of the layer so that there is a net mean shear across the layer with a mean velocity jump of the order of the root-mean-square of the fluctuating velocity, and the energy dissipation averaged over the layer is larger than ten times the average over the whole flow. The mean and the standard deviation of the energy transfer T(x, κ) from scales larger than 1/κ to scales smaller than 1/κ at position x are largest within the layers (where the most intense vortices and dissipation occur), but are also large just outside the layers (where viscous stresses are weak), by comparison with the average values of T over the whole region. The DNS data are consistent with exterior fluctuations being damped/filtered at the interface of the layer and then selectively amplified within the layer.

  13. Multiscale analysis of the invariants of the velocity gradient tensor in isotropic turbulence

    Science.gov (United States)

    Danish, Mohammad; Meneveau, Charles

    2018-04-01

    Knowledge of local flow-topology, the patterns of streamlines around a moving fluid element as described by the velocity-gradient tensor, is useful for developing insights into turbulence processes, such as energy cascade, material element deformation, or scalar mixing. Much has been learned in the recent past about flow topology at the smallest (viscous) scales of turbulence. However, less is known at larger scales, for instance, at the inertial scales of turbulence. In this work, we present a detailed study on the scale dependence of various quantities of interest, such as the population fraction of different types of flow-topologies, the joint probability distribution of the second and third invariants of the velocity gradient tensor, and the geometrical alignment of vorticity with strain-rate eigenvectors. We perform the analysis on a simulation dataset of isotropic turbulence at Reλ = 433. While quantities appear close to scale invariant in the inertial range, we observe a "bump" in several quantities at length scales between the inertial and viscous ranges. For instance, the population fraction of the unstable node-saddle-saddle flow topology shows an increase when reducing the scale from the inertial range into the viscous range. A similar bump is observed for the vorticity-strain-rate alignment. In order to document possible dynamical causes for the different trends in the viscous and inertial ranges, we examine the probability fluxes appearing in the Fokker-Planck equation governing the velocity gradient invariants. Specifically, we aim to understand whether the differences observed between the viscous and inertial range statistics are due to effects caused by pressure, subgrid-scale, or viscous stresses or various combinations of these terms. To decompose the flow into small and large scales, we mainly use a spectrally compact non-negative filter with good spatial localization properties (Eyink-Aluie filter). The analysis shows that when going from the inertial
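
    For reference, the second and third invariants discussed above and the standard topology classification for incompressible flow can be computed as in the following minimal sketch (not the authors' analysis code):

    import numpy as np

    def qr_invariants(A):
        """Q and R invariants of a 3x3 velocity-gradient tensor A (assumed trace-free)."""
        Q = -0.5 * np.trace(A @ A)
        R = -np.linalg.det(A)
        return Q, R

    def topology(Q, R):
        """Classify the local flow topology via the discriminant 27/4 R^2 + Q^3."""
        disc = 27.0 / 4.0 * R**2 + Q**3
        if disc > 0:   # complex eigenvalues: focal topologies
            return 'stable focus/stretching' if R < 0 else 'unstable focus/compressing'
        return 'stable node/saddle/saddle' if R < 0 else 'unstable node/saddle/saddle'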

  14. Coupled thermal stress analysis of a hollow circular cylinder with transversely isotropic properties

    International Nuclear Information System (INIS)

    Tanigawa, Y.; Ootao, Y.

    1987-01-01

    When thermal stress problems are analyzed exactly in a transient state in continuum media, with both the coupling and inertia effects taken into account, it has been shown that the thermomechanical coupling term plays a more significant role than the inertia term for the common commercial alloys. In the present paper, we have considered a continuum medium with a transversely isotropic material property, which has an isotropic property in the r-θ plane, and analyzed the transient thermal stress problem of an infinitely long hollow circular cylinder due to axisymmetric partial heating. In order to get the thermal and thermoelastic fundamental differential equations separated in each field, we have introduced a perturbation technique. We have then carried out numerical calculations for several values of the thermal and thermoelastic orthotropic parameters. (orig./GL)

  15. First-arrival traveltime computation for quasi-P waves in 2D transversely isotropic media using Fermat’s principle-based fast marching

    Science.gov (United States)

    Hu, Jiangtao; Cao, Junxing; Wang, Huazhong; Wang, Xingjian; Jiang, Xudong

    2017-12-01

    First-arrival traveltime computation for quasi-P waves in transversely isotropic (TI) media is the key component of tomography and depth migration. It is appealing to use the fast marching method in isotropic media as it efficiently computes traveltime along an expanding wavefront. It uses the finite difference method to solve the eikonal equation. However, applying the fast marching method in anisotropic media faces challenges because the anisotropy introduces additional nonlinearity in the eikonal equation and solving this nonlinear eikonal equation with the finite difference method is challenging. To address this problem, we present a Fermat’s principle-based fast marching method to compute traveltime in two-dimensional TI media. This method is applicable in both vertical and tilted TI (VTI and TTI) media. It computes traveltime along an expanding wavefront using Fermat’s principle instead of the eikonal equation. Thus, it does not suffer from the nonlinearity of the eikonal equation in TI media. To compute traveltime using Fermat’s principle, the explicit expression of group velocity in TI media is required to describe the ray propagation. The moveout approximation is adopted to obtain the explicit expression of group velocity. Numerical examples on both VTI and TTI models show that the traveltime contour obtained by the proposed method matches well with the wavefront from the wave equation. This shows that the proposed method could be used in depth migration and tomography.
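
    A much simplified sketch of a Fermat's-principle traveltime update of the kind described above (not the published algorithm; the group-velocity function is assumed to come from a moveout approximation): the traveltime at a new node is the minimum, over already accepted points, of their traveltime plus the segment length divided by the TI group velocity at the propagation angle of that segment.

    import numpy as np

    def fermat_update(x_new, accepted_points, accepted_times, group_velocity):
        """x_new: (2,) coordinates of the node to update; accepted_points: (N, 2)
        nodes with known traveltimes; group_velocity(angle): TI group velocity as a
        function of the angle from the (vertical) symmetry axis."""
        best = np.inf
        for xs, ts in zip(accepted_points, accepted_times):
            d = x_new - xs
            r = np.hypot(d[0], d[1])
            if r == 0.0:
                continue
            angle = np.arctan2(abs(d[0]), abs(d[1]))   # angle measured from the symmetry axis
            best = min(best, ts + r / group_velocity(angle))
        return best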

  16. Numerical analysis of strain localization for transversely isotropic model with non-coaxial flow rule

    Science.gov (United States)

    Wei, Ding; Cong-cong, Yu; Chen-hui, Wu; Zheng-yi, Shu

    2018-03-01

    To analyse the strain localization behavior of geomaterials, the forward Euler schemes and the tangent modulus matrix are formulated based on the transversely isotropic yield criterion with the non-coaxial flow rule developed by Lade; the program code is implemented as a user subroutine (UMAT) of ABAQUS. The influence of the material principal direction on the strain localization and the bearing capacity of the structure is investigated and analyzed. Numerical results show the validity and performance of the proposed model in simulating the strain localization behavior of geostructures.

  17. Multi-component pre-stack time-imaging and migration-based velocity analysis in transversely isotropic media; Imagerie sismique multicomposante et analyse de vitesse de migration en milieu transverse isotrope

    Energy Technology Data Exchange (ETDEWEB)

    Gerea, C.V.

    2001-06-01

    Complementary to the recording of compressional (P-) waves, the observation of P-S converted waves has recently been receiving specific attention. This is mainly due to their tremendous potential as a tool for fracture and lithology characterization, imaging sediments in gas saturated rocks, and imaging shallow sediments with higher resolution than conventional P-P data. In a conventional marine seismic survey, we cannot record P-to-S converted-wave energy since the fluids cannot support shear-wave strain. Thus, to capture the converted-wave energy, we need to record it at the water bottom using an ocean-bottom cable (OBC). The S-waves recorded at the seabed are mainly converted from P to S (i.e., PS-waves or C-waves) at the subsurface reflectors. The most accurate way to image seismic data is pre-stack depth migration. In this thesis, I develop a numerically efficient 2.5-D true-amplitude elastic Kirchhoff pre-stack migration algorithm designed to handle OBC data gathered along a single line. All the kinematic and dynamic elastic Green's functions required in the computation of the true-amplitude weight term of the Kirchhoff summation are based on the non-hyperbolic explicit approximations of P- and SV-wave travel-times in layered transversely isotropic (VTI) media. Hence, this elastic imaging algorithm is very well-suited for migration-based velocity analysis techniques, for which fast, robust and iterative pre-stack migration is desired. In this thesis, I also approach the topic of anisotropic velocity model building for elastic pre-stack time-imaging, and propose an original methodology for joint PP-PS migration-based velocity analysis (MVA) in layered VTI anisotropic media. Tests on elastic synthetic and real OBC seismic data ascertain the validity of the pre-stack migration algorithm and velocity analysis methodology. (author)

  18. Computational movement analysis

    CERN Document Server

    Laube, Patrick

    2014-01-01

    This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and using decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fields.

  19. Migration velocity analysis using a transversely isotropic medium with tilt normal to the reflector dip

    KAUST Repository

    Alkhalifah, T.

    2010-06-13

    A transversely isotropic model in which the tilt is constrained to be normal to the dip (DTI model) allows for simplifications in the imaging and velocity model building efforts as compared to a general TTI model. Though this model, in some cases, cannot be represented physically, as in the case of conflicting dips, it handles all dips with the assumption of a symmetry axis normal to the dip. It provides a process in which areas that meet this feature are handled properly. We use efficient downward continuation algorithms that utilize the reflection features of such a model. Phase shift migration can be easily extended to approximately handle lateral inhomogeneity, because unlike the general TTI case the DTI model reduces to VTI for zero dip. We also equip these continuation algorithms with tools that expose inaccuracies in the velocity. We test this model on synthetic data of general TTI nature and show its resilience even when coping with complex models like the recently released anisotropic BP model.

  20. Numerical Analysis of Stress Concentration in Isotropic and Laminated Plates with Inclined Elliptical Holes

    Science.gov (United States)

    Khechai, Abdelhak; Tati, Abdelouahab; Belarbi, Mohamed Ouejdi; Guettala, Abdelhamid

    2018-03-01

    The design of high-performance composite structures frequently includes discontinuities to reduce the weight and fastener holes for joining. Understanding the behavior of perforated laminates is necessary for structural design. In the current work, stress concentrations taking place in laminated and isotropic plates subjected to tensile load are investigated. The stress concentrations are obtained using a recent quadrilateral finite element of four nodes with 32 DOFs. The present finite element (PE) is a combination of two finite elements. The first finite element is a linear isoparametric membrane element and the second is a high precision Hermitian element. One of the essential objectives of the current investigation is to confirm the capability and efficiency of the PE for stress determination in perforated laminates. Different geometric parameters, such as the cutout form, sizes and cutout orientations, which have a considerable effect on the stress values, are studied. Using the present finite element formulation, the obtained results are found to be in good agreement with the analytical findings, which validates the capability and the efficiency of the proposed formulation. Finally, to understand the material parameters effect such as the orientation of fibers and degree of orthotropy ratio on the stress values, many figures are presented using different ellipse major to minor axis ratio. The stress concentration values are considerably affected by increasing the orientation angle of the fibers and degree of orthotropy.
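
    A quick analytical reference point for the trends reported above (not the paper's finite element model): for an elliptical hole in an infinite isotropic plate under remote tension perpendicular to the major axis 2a, the classical Inglis result gives the stress concentration factor directly.

    def inglis_kt(a, b):
        """Stress concentration factor 1 + 2a/b at the ends of the major axis of an
        elliptical hole (semi-axes a, b) in an infinite isotropic plate under remote
        tension normal to the major axis."""
        return 1.0 + 2.0 * a / b

    # a/b = 1 (circular hole) gives K_t = 3; a/b = 3 already gives K_t = 7, showing
    # how strongly slender elliptical cutouts amplify the local stress.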

  1. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-01

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described

  2. Computing dispersion curves of elastic/viscoelastic transversely-isotropic bone plates coupled with soft tissue and marrow using semi-analytical finite element (SAFE) method.

    Science.gov (United States)

    Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H

    2017-08-01

    We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Comparison between 3D isotropic and 2D conventional MR arthrography for diagnosing rotator cuff tear and labral lesions: A meta-analysis.

    Science.gov (United States)

    Lee, Sun Hwa; Yun, Seong Jong; Jin, Wook; Park, So Young; Park, Ji Seon; Ryu, Kyung Nam

    2018-03-30

    Although 3D-isotropic MR arthrography has been characterized as a substitute imaging tool for rotator cuff tear (RCT) and labral lesions, it has not been commonly used in clinical practice because of controversy related to image blurring and indistinctness of structural edges. To perform a comparison of the diagnostic performance of 3D-isotropic MR arthrography and 2D-conventional MR arthrography for diagnosis of RCT (solely RCT, full/partial-thickness supraspinatus [SST]-infraspinatus [IST] tear, or subscapularis [SSc] tear) and labral lesions. Meta-analysis. Patients with shoulder pain. 3D-isotropic and 2D-conventional MR arthrography at 3.0T or 1.5T. PubMed and EMBASE were searched following the PRISMA guidelines. Bivariate modeling and hierarchical summary receiver operating characteristic modeling were performed to compare the overall diagnostic performance of 3D-isotropic and 2D-conventional MR arthrography. Multiple-subgroup analyses were performed for diagnosing RCT, full/partial-thickness SST-IST tear, SSc tear, and labral lesions. Meta-regression analyses were performed according to subject, study, and MR arthrography characteristics including 3D-isotropic sequences (turbo spin echo [TSE] vs. gradient echo [GRE]). Eleven studies (825 patients) were included. Overall, 3D-isotropic MR arthrography had similar pooled sensitivity (0.90 [95% CI, 0.87-0.93]) (P = 0.95) and specificity (0.92 [95% CI, 0.87-0.95]) (P = 0.99), relative to 2D-conventional MR arthrography (sensitivity, 0.91 [95% CI, 0.86-0.94]; specificity, 0.92 [95% CI, 0.87-0.95]). Multiple-subgroup analyses showed that sensitivities (P = 0.13-0.91) and specificities (P = 0.26-0.99) on 3D-isotropic MR arthrography for diagnosing RCT, full/partial-thickness SST-IST tear, SSc tear, and labral lesions were not significantly different from 2D-conventional MR arthrography. On meta-regression analysis, the 3D-TSE sequence demonstrated higher sensitivity than 3D-GRE for RCT and labral lesions.

  4. The isotropic radio background revisited

    Energy Technology Data Exchange (ETDEWEB)

    Fornengo, Nicolao; Regis, Marco [Dipartimento di Fisica Teorica, Università di Torino, via P. Giuria 1, I–10125 Torino (Italy); Lineros, Roberto A. [Instituto de Física Corpuscular – CSIC/U. Valencia, Parc Científic, calle Catedrático José Beltrán, 2, E-46980 Paterna (Spain); Taoso, Marco, E-mail: fornengo@to.infn.it, E-mail: rlineros@ific.uv.es, E-mail: regis@to.infn.it, E-mail: taoso@cea.fr [Institut de Physique Théorique, CEA/Saclay, F-91191 Gif-sur-Yvette Cédex (France)

    2014-04-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.

  5. The isotropic radio background revisited

    International Nuclear Information System (INIS)

    Fornengo, Nicolao; Regis, Marco; Lineros, Roberto A.; Taoso, Marco

    2014-01-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky

  6. Computational Music Analysis

    DEFF Research Database (Denmark)

    This book provides an in-depth introduction and overview of current research in computational music analysis. Its seventeen chapters, written by leading researchers, collectively represent the diversity as well as the technical and philosophical sophistication of the work being done today ... on well-established theories in music theory and analysis, such as Forte's pitch-class set theory, Schenkerian analysis, the methods of semiotic analysis developed by Ruwet and Nattiez, and Lerdahl and Jackendoff's Generative Theory of Tonal Music. The book is divided into six parts, covering ... music analysis, the book provides an invaluable resource for researchers, teachers and students in music theory and analysis, computer science, music information retrieval and related disciplines. It also provides a state-of-the-art reference for practitioners in the music technology industry.

  7. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence

    Science.gov (United States)

    Linkmann, Moritz; McComb, W. David; Yoffe, Samuel; Berera, Arjun

    2014-11-01

    The pseudospectral method, in conjunction with a new technique for obtaining scaling exponents ζn from the structure functions Sn(r), is presented as an alternative to the extended self-similarity (ESS) method and the use of generalized structure functions. We propose plotting the ratio |Sn(r)/S3(r)| against the separation r in accordance with a standard technique for analysing experimental data. This method differs from the ESS technique, which plots the generalized structure functions Gn(r) against G3(r), where G3(r) ~ r. Using our method for the particular case of S2(r) we obtain the new result that the exponent ζ2 decreases as the Taylor-Reynolds number increases, with ζ2 → 0.679 ± 0.013 as Rλ → ∞. This supports the idea of finite-viscosity corrections to the K41 prediction for S2, and is the opposite of the result obtained by ESS. The pseudospectral method permits the forcing to be taken into account exactly through the calculation of the energy input in real space from the work spectrum of the stirring forces. The combination of the viscous and the forcing corrections as calculated by the pseudospectral method is shown to account for the deviation of S3 from Kolmogorov's "four-fifths" law at all scales. This work has made use of the resources provided by the UK supercomputing service HECToR, made available through the Edinburgh Compute and Data Facility (ECDF). A. B. is supported by STFC, S. R. Y. and M. F. L. are funded by EPSRC.
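
    The plotting quantity proposed above can be illustrated with a short sketch on a one-dimensional periodic velocity signal (illustrative only; the authors work with full DNS fields):

    import numpy as np

    def structure_function(u, n, r):
        """n-th order longitudinal structure function S_n(r) of a periodic 1-D
        velocity signal u on a uniform grid, for a separation of r grid points."""
        du = np.roll(u, -r) - u
        return np.mean(du**n)

    def ratio_sn_s3(u, n, separations):
        """|S_n(r)/S_3(r)| for a list of separations, the quantity plotted against r."""
        return [abs(structure_function(u, n, r) / structure_function(u, 3, r))
                for r in separations]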

  8. Mapping of moveout in tilted transversely isotropic media

    KAUST Repository

    Stovas, A.; Alkhalifah, Tariq Ali

    2013-01-01

    The computation of traveltimes in a transverse isotropic medium with a tilted symmetry axis (tilted transversely isotropic) is very important both for modelling and inversion. We develop a simple analytical procedure to map the traveltime function from a transverse isotropic medium with a vertical symmetry axis (vertical transversely isotropic) to a tilted transversely isotropic medium by applying point-by-point mapping of the traveltime function. This approach can be used for kinematic modelling and inversion in layered tilted transversely isotropic media. © 2013 European Association of Geoscientists & Engineers.

  9. Mapping of moveout in tilted transversely isotropic media

    KAUST Repository

    Stovas, A.

    2013-09-09

    The computation of traveltimes in a transverse isotropic medium with a tilted symmetry axis (tilted transversely isotropic) is very important both for modelling and inversion. We develop a simple analytical procedure to map the traveltime function from a transverse isotropic medium with a vertical symmetry axis (vertical transversely isotropic) to a tilted transversely isotropic medium by applying point-by-point mapping of the traveltime function. This approach can be used for kinematic modelling and inversion in layered tilted transversely isotropic media. © 2013 European Association of Geoscientists & Engineers.

  10. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm3 and 3.6 g/cm3 are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometry, comes in handy for the shield design optimization of peripheral shields. QAD-CGGP requires a database of buildup factor data, and it contains only ordinary concrete of density 2.3 g/cm3. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV with the thickness ranging from 1 mean free path (mfp) to 40 mfp. A Capo's formula fit of the buildup factor data compatible with QAD-CGGP has been attempted
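
    The point-kernel step that QAD-CGGP performs with such buildup data can be sketched as follows (the numbers and the buildup fit are placeholders, not the generated PFBR data):

    import numpy as np

    def point_kernel_flux(source_strength, mu, t, buildup):
        """Build-up-corrected flux at distance t [cm] from an isotropic point source
        behind a shield with linear attenuation coefficient mu [1/cm].
        buildup(mfp): dose buildup factor as a function of optical thickness mu*t."""
        mfp = mu * t
        phi_uncollided = source_strength * np.exp(-mfp) / (4.0 * np.pi * t**2)
        return buildup(mfp) * phi_uncollided   # multiply by a flux-to-dose factor for dose rate

    # crude placeholder fit, for illustration only:
    # dose_flux = point_kernel_flux(1.0e10, mu=0.15, t=60.0, buildup=lambda x: 1.0 + x)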

  11. Analysis of the traveltime sensitivity kernels for an acoustic transversely isotropic medium with a vertical axis of symmetry

    KAUST Repository

    Djebbi, Ramzi

    2016-02-05

    In anisotropic media, several parameters govern the propagation of the compressional waves. To correctly invert surface recorded seismic data in anisotropic media, a multi-parameter inversion is required. However, a tradeoff between parameters exists because several models can explain the same dataset. To understand these tradeoffs, diffraction/reflection and transmission-type sensitivity-kernel analyses are carried out. Such analyses can help us to choose the appropriate parameterization for inversion. In tomography, the sensitivity kernels represent the effect of a parameter along the wave path between a source and a receiver. At a given illumination angle, similarities between sensitivity kernels highlight the tradeoff between the parameters. To discuss the parameterization choice in the context of finite-frequency tomography, we compute the sensitivity kernels of the instantaneous traveltimes derived from the seismic data traces. We consider the transmission case with no encounter of an interface between a source and a receiver; with surface seismic data, this corresponds to a diving wave path. We also consider the diffraction/reflection case when the wave path is formed by two parts: one from the source to a sub-surface point and the other from the sub-surface point to the receiver. We illustrate the different parameter sensitivities for an acoustic transversely isotropic medium with a vertical axis of symmetry. The sensitivity kernels depend on the parameterization choice. By comparing different parameterizations, we explain why the parameterization with the normal moveout velocity, the anelliptic parameter η, and the δ parameter is attractive when we invert diving and reflected events recorded in an active surface seismic experiment. © 2016 European Association of Geoscientists & Engineers.

  12. Computer aided safety analysis

    International Nuclear Information System (INIS)

    1988-05-01

    The document reproduces 20 selected papers from the 38 papers presented at the Technical Committee/Workshop on Computer Aided Safety Analysis organized by the IAEA in co-operation with the Institute of Atomic Energy in Otwock-Swierk, Poland on 25-29 May 1987. A separate abstract was prepared for each of these 20 technical papers. Refs, figs and tabs

  13. Gene co-expression analysis identifies gene clusters associated with isotropic and polarized growth in Aspergillus fumigatus conidia.

    Science.gov (United States)

    Baltussen, Tim J H; Coolen, Jordy P M; Zoll, Jan; Verweij, Paul E; Melchers, Willem J G

    2018-04-26

    Aspergillus fumigatus is a saprophytic fungus that extensively produces conidia. These microscopic asexually reproductive structures are small enough to reach the lungs. Germination of conidia followed by hyphal growth inside human lungs is a key step in the establishment of infection in immunocompromised patients. RNA-Seq was used to analyze the transcriptome of dormant and germinating A. fumigatus conidia. Construction of a gene co-expression network revealed four gene clusters (modules) correlated with a growth phase (dormant, isotropic growth, polarized growth). Transcripts levels of genes encoding for secondary metabolites were high in dormant conidia. During isotropic growth, transcript levels of genes involved in cell wall modifications increased. Two modules encoding for growth and cell cycle/DNA processing were associated with polarized growth. In addition, the co-expression network was used to identify highly connected intermodular hub genes. These genes may have a pivotal role in the respective module and could therefore be compelling therapeutic targets. Generally, cell wall remodeling is an important process during isotropic and polarized growth, characterized by an increase of transcripts coding for hyphal growth and cell cycle/DNA processing when polarized growth is initiated. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Integrated dual-tomography for refractive index analysis of free-floating single living cell with isotropic superresolution.

    Science.gov (United States)

    B, Vinoth; Lai, Xin-Ji; Lin, Yu-Chih; Tu, Han-Yen; Cheng, Chau-Jern

    2018-04-13

    Digital holographic microtomography is a promising technique for three-dimensional (3D) measurement of the refractive index (RI) profiles of biological specimens. Measurement of the RI distribution of a free-floating single living cell with an isotropic superresolution had not previously been accomplished. To the best of our knowledge, this is the first study focusing on the development of an integrated dual-tomographic (IDT) imaging system for RI measurement of an unlabelled free-floating single living cell with an isotropic superresolution by combining the spatial frequencies of full-angle specimen rotation with those of beam rotation. A novel 'UFO' (unidentified flying object) like shaped coherent transfer function is obtained. The IDT imaging system does not require any complex image-processing algorithm for 3D reconstruction. The working principle was successfully demonstrated and a 3D RI profile of a single living cell, Candida rugosa, was obtained with an isotropic superresolution. This technology is expected to set a benchmark for free-floating single live sample measurements without labeling or any special sample preparations for the experiments.

  15. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  16. Nanocompositional Electron Microscopic Analysis and Role of Grain Boundary Phase of Isotropically Oriented Nd-Fe-B Magnets

    Directory of Open Access Journals (Sweden)

    Gregor A. Zickler

    2017-01-01

    Nanoanalytical TEM characterization in combination with finite element micromagnetic modelling clarifies the impact of the grain misalignment and grain boundary nanocomposition on the coercive field and gives guidelines on how to improve coercivity in Nd-Fe-B based magnets. The nanoprobe electron energy loss spectroscopy measurements obtained an asymmetric composition profile of the Fe-content across the grain boundary phase in isotropically oriented melt-spun magnets and showed an enrichment of iron up to 60 at% in the Nd-containing grain boundaries close to Nd2Fe14B grain surfaces parallel to the c-axis and a reduced iron content up to 35% close to grain surfaces perpendicular to the c-axis. The numerical micromagnetic simulations on isotropically oriented magnets using realistic model structures from the TEM results reveal a complex magnetization reversal starting at the grain boundary phase and show that the coercive field increases compared to directly coupled grains with no grain boundary phase, independently of the grain boundary thickness. This behaviour is contrary to the one in aligned anisotropic magnets, where the coercive field decreases compared to directly coupled grains with an increasing grain boundary thickness if the Js value is > 0.2 T, and the magnetization reversal and expansion of reversed magnetic domains primarily start as a Bloch domain wall at grain boundaries at the prismatic planes parallel to the c-axis and secondly as a Néel domain wall at the basal planes perpendicular to the c-axis. In summary, our study shows an increase of the coercive field in isotropically oriented Nd-Fe-B magnets for a grain boundary (GB) layer thickness > 5 nm and an average Js value of the GB layer < 0.8 T, compared to the magnet with perfectly aligned grains.

  17. Analysis of non-axisymmetric wave propagation in a homogeneous piezoelectric solid circular cylinder of transversely isotropic material

    CSIR Research Space (South Africa)

    Shatalov, MY

    2009-01-01

    Full Text Available ). The main disadvantage of this approach is that the roots of characteristic arguments ( ( )0, 1, , 4k kξ = = … ) are also displayed on the surface plots as obvious artefacts. An elaborate discussion of these artefacts is given in Yenwong-Fai (2008...-matrix interface by guided waves: Axisymmetric case. J. Acoust. Soc. Am 89 (6), 2573-2583. Yenwong-Fai, A., 2008. Wave propagation in a piezoelectric solid cylinder of transversely isotropic material. Master’s thesis, University of Witwatersrand, Johannesburg...

  18. Isotropic damage model and serial/parallel mix theory applied to nonlinear analysis of ferrocement thin walls. Experimental and numerical analysis

    Directory of Open Access Journals (Sweden)

    Jairo A. Paredes

    2016-01-01

    Ferrocement thin walls are the structural elements that comprise the earthquake resistant system of dwellings built with this material. This article presents the results drawn from an experimental campaign carried out on full-scale precast ferrocement thin walls that were assessed under lateral static loading conditions. The tests allowed the identification of structural parameters and the evaluation of the performance of the walls under static loading conditions. Additionally, an isotropic damage model was applied for modelling the mortar, as well as the classic elasto-plastic theory for modelling the meshes and reinforcing bars. The ferrocement is considered as a composite material, thus the serial/parallel mix theory is used for modelling its mechanical behavior. In this work, a methodology is proposed for the numerical analysis that allows modelling the nonlinear behavior exhibited by ferrocement walls under static loading conditions, as well as their potential use in earthquake resistant design.
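
    The scalar isotropic damage idea used for the mortar can be illustrated with a minimal one-dimensional sketch (an exponential softening law is assumed here for illustration; it is not necessarily the specific law used by the authors):

    import numpy as np

    def isotropic_damage_stress(eps, E, eps0, eps_f):
        """1-D secant stress sigma = (1 - d) * E * eps with a history variable and
        exponential damage evolution. eps: monotonically applied strain history,
        eps0: damage-initiation strain, eps_f: controls the softening slope."""
        kappa = np.maximum.accumulate(np.abs(eps))      # irreversible history variable
        d = np.where(kappa <= eps0, 0.0,
                     1.0 - (eps0 / np.maximum(kappa, eps0)) * np.exp(-(kappa - eps0) / eps_f))
        return (1.0 - d) * E * eps, d

    # strain = np.linspace(0.0, 0.01, 200)
    # stress, damage = isotropic_damage_stress(strain, E=20e9, eps0=1e-4, eps_f=2e-3)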

  19. Analysis of computer programming languages

    International Nuclear Information System (INIS)

    Risset, Claude Alain

    1967-01-01

    This research thesis aims at trying to identify some methods of syntax analysis which can be used for computer programming languages while putting aside computer devices which influence the choice of the programming language and methods of analysis and compilation. In a first part, the author proposes attempts of formalization of Chomsky grammar languages. In a second part, he studies analytical grammars, and then studies a compiler or analytic grammar for the Fortran language

  20. Isotropic oscillator: spheroidal wave functions

    International Nuclear Information System (INIS)

    Mardoyan, L.G.; Pogosyan, G.S.; Ter-Antonyan, V.M.; Sisakyan, A.N.

    1985-01-01

    Solutions of the Schroedinger equation are found for an isotropic oscillator (IO) in prolate and oblate spheroidal coordinates. It is shown that the obtained solutions turn into the spherical and cylindrical bases of the isotropic oscillator at R → 0 and R → ∞ (R is the dimensional parameter entering into the definition of prolate and oblate spheroidal coordinates). The explicit form is given for both the prolate and oblate bases of the isotropic oscillator for the lowest quantum states

  1. The isotropic Universe

    International Nuclear Information System (INIS)

    Raine, D.J.

    1981-01-01

    This introduction to contemporary ideas in cosmology differs from other books on the 'expanding Universe' in its emphasis on physical cosmology and on the physical basis of the general theory of relativity. It is considered that the remarkable degree of isotropy, rather than the expansion, can be regarded as the central observational feature of the Universe. The various theories and ideas in 'big-bang' cosmology are discussed, providing an insight into current problems. Chapter headings are: quality of matter; expanding Universe; quality of radiation; quantity of matter; general theory of relativity; cosmological models; cosmological tests; matter and radiation; limits of isotropy; why is the Universe isotropic; singularities; evolution of structure. (U.K.)

  2. Analysis of computer networks

    CERN Document Server

    Gebali, Fayez

    2015-01-01

    This textbook presents the mathematical theory and techniques necessary for analyzing and modeling high-performance global networks, such as the Internet. The three main building blocks of high-performance networks are links, switching equipment connecting the links together, and software employed at the end nodes and intermediate switches. This book provides the basic techniques for modeling and analyzing these last two components. Topics covered include, but are not limited to: Markov chains and queuing analysis, traffic modeling, interconnection networks and switch architectures and buffering strategies. • Provides techniques for modeling and analysis of network software and switching equipment; • Discusses design options used to build efficient switching equipment; • Includes many worked examples of the application of discrete-time Markov chains to communication systems; • Covers the mathematical theory and techniques necessary for ...

  3. Affective Computing and Sentiment Analysis

    CERN Document Server

    Ahmad, Khurshid

    2011-01-01

    This volume maps the watershed areas between two 'holy grails' of computer science: the identification and interpretation of affect -- including sentiment and mood. The expression of sentiment and mood involves the use of metaphors, especially in emotive situations. Affect computing is rooted in hermeneutics, philosophy, political science and sociology, and is now a key area of research in computer science. The 24/7 news sites and blogs facilitate the expression and shaping of opinion locally and globally. Sentiment analysis, based on text and data mining, is being used in the analysis of news

  4. Analysis of the multi-component pseudo-pure-mode qP-wave inversion in vertical transverse isotropic (VTI) media

    KAUST Repository

    Djebbi, Ramzi

    2014-08-05

    Multi-parameter inversion in anisotropic media suffers from the inherent trade-off between the anisotropic parameters, even under the acoustic assumption. Multi-component data, now often acquired in ocean-bottom and land surveys, provide additional information capable of resolving anisotropic parameters under the acoustic approximation. Based on the Born scattering approximation, we develop formulas that characterize the radiation patterns for the acoustic pseudo-pure-mode P-waves. Though commonly reserved for elastic fields, we use displacement fields to constrain the acoustic vertical transverse isotropic (VTI) representation of the medium. Using the asymptotic Green's functions and a horizontal reflector, we derive the radiation patterns for perturbations in the anisotropic medium. The radiation pattern for the anellipticity parameter η is identically zero for the horizontal displacement. This allows us to dedicate this component to inverting for velocity and δ. Computing the traveltime sensitivity kernels based on the unwrapped phase confirms the radiation-pattern observations and provides the model wavenumber behavior of the update.

  5. Progress in the analysis of non-axisymmetric wave propagation in a homogeneous solid circular cylinder of a piezoelectric transversely isotropic material

    CSIR Research Space (South Africa)

    Every, AG

    2010-01-01

    Full Text Available Non-axisymmetric waves in a free homogeneous piezoelectric cylinder of transversely isotropic material with axial polarization are investigated on the basis of the linear theory of elasticity and linear electromechanical coupling. The solution...

  6. Using Fourier and Taylor series expansion in semi-analytical deformation analysis of thick-walled isotropic and wound composite structures

    Directory of Open Access Journals (Sweden)

    Jiran L.

    2016-06-01

    Full Text Available Thick-walled tubes made from isotropic and anisotropic materials are subjected to an internal pressure while a semi-analytical method is employed to investigate their elastic deformations. The contribution and novelty of this method is that it works universally for different loads, different boundary conditions, and different geometries of the analyzed structures. Moreover, even when composite material is considered, the method requires no simplifying assumptions. The method uses curvilinear tensor calculus and works with the analytical expression of the total potential energy, while the unknown displacement functions are approximated by an appropriate series expansion. Fourier and Taylor series expansions are introduced into the analysis, tested, and compared. The main potential of the proposed method is in analyses of wound composite structures, where a simple description of the geometry is made in a curvilinear coordinate system while material properties are described in their inherent Cartesian coordinate system. The semi-analytical method is validated by comparing its results with those obtained from three-dimensional finite element analysis (FEA). Calculations with the Fourier series expansion show noticeable disagreement with the finite element results because the Fourier series expansion is not able to capture the course of the radial deformation; it can therefore be used only for rough estimations of the deformed shape. On the other hand, the semi-analytical method with the Taylor series expansion works very well for both types of material. Its predictions of deformations are reliable and widely exploitable.
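
    As a point of reference for such validations, the closed-form Lamé solution for an isotropic thick-walled cylinder under internal pressure can be evaluated directly. The sketch below shows only this classical isotropic benchmark, not the semi-analytical series method of the paper; the radii and pressure are illustrative values.

```python
import numpy as np

def lame_stresses(r, a, b, p):
    """Radial and hoop stresses (Pa) in an isotropic thick-walled cylinder of
    inner radius a, outer radius b, loaded by internal pressure p (Lame solution)."""
    A = p * a**2 / (b**2 - a**2)
    B = p * a**2 * b**2 / (b**2 - a**2)
    sigma_r = A - B / r**2   # radial stress: equals -p at r = a and 0 at r = b
    sigma_t = A + B / r**2   # hoop (tangential) stress: maximal at the bore
    return sigma_r, sigma_t

r = np.linspace(0.05, 0.10, 6)                    # radii from bore to outer surface [m]
print(lame_stresses(r, a=0.05, b=0.10, p=50e6))
```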

  7. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample given, followed by a discussion of the present status and future development plans

  8. Application of group analysis to the spatially homogeneous and isotropic Boltzmann equation with source using its Fourier image

    International Nuclear Information System (INIS)

    Grigoriev, Yurii N; Meleshko, Sergey V; Suriyawichitseranee, Amornrat

    2015-01-01

    Group analysis of the spatially homogeneous and molecular-energy-dependent Boltzmann equation with a source term is carried out. The Fourier transform of the Boltzmann equation with respect to the molecular velocity variable is considered. The corresponding determining equation of the admitted Lie group is reduced to a partial differential equation for the admitted source. The latter equation is analyzed by an algebraic method. A complete group classification of the Fourier transform of the Boltzmann equation with respect to the source function is given. The representations of invariant solutions and the corresponding reduced equations for all obtained source functions are also presented. (paper)

  9. Empirical isotropic chemical shift surfaces

    International Nuclear Information System (INIS)

    Czinki, Eszter; Csaszar, Attila G.

    2007-01-01

    A list of proteins is given for which spatial structures, with a resolution better than 2.5 Å, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)_n-NH_2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Out of the many empirical surfaces determined, it is the 13Cα ICS(φ,ψ) surface which seems to be most promising for identifying major secondary structure types: α-helix, β-strand, left-handed helix (α_D), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein and thus they should provide useful constraints for structure determinations of proteins
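
    A hedged sketch of how such a tabulated ICS(φ,ψ) surface could be queried in practice: the grid values below are placeholders, not the empirical surfaces of the paper, and bilinear interpolation on a regular (φ,ψ) grid is only an assumption about how the surface might be stored.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder 13C-alpha ICS(phi, psi) grid in ppm; a real empirical surface would be
# tabulated from the PDB/RefDB data described above, typically on a finer grid.
phi = np.linspace(-180.0, 180.0, 13)
psi = np.linspace(-180.0, 180.0, 13)
ics_grid = 55.0 + 3.0 * np.cos(np.radians(phi))[:, None] * np.sin(np.radians(psi))[None, :]

surface = RegularGridInterpolator((phi, psi), ics_grid)
print(surface([[-60.0, -45.0]]))   # predicted shift for a helical (phi, psi) pair
```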

  10. Systems analysis and the computer

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, A S

    1983-08-01

    The words 'systems analysis' are used in at least two senses. Whilst the general nature of the topic is well understood in the OR community, the nature of the term as used by computer scientists is less familiar. In this paper, the nature of systems analysis as it relates to computer-based systems is examined from the point of view that the computer system is an automaton embedded in a human system, and some facets of this are explored. It is concluded that OR analysts and computer analysts have things to learn from each other and that this ought to be reflected in their education. The important role played by change in the design of systems is also highlighted, and it is concluded that, whilst the application of techniques developed in the artificial intelligence field has considerable relevance to constructing automata able to adapt to change in the environment, study of the human factors affecting the overall systems within which the automata are embedded has an even more important role. 19 references.

  11. Computer aided safety analysis 1989

    International Nuclear Information System (INIS)

    1990-04-01

    The meeting was conducted in a workshop style, to encourage involvement of all participants during the discussions. Forty-five (45) experts from 19 countries, plus 22 experts from the GDR participated in the meeting. A list of participants can be found at the end of this volume. Forty-two (42) papers were presented and discussed during the meeting. Additionally an open discussion was held on the possible directions of the IAEA programme on Computer Aided Safety Analysis. A summary of the conclusions of these discussions is presented in the publication. The remainder of this proceedings volume comprises the transcript of selected technical papers (22) presented in the meeting. It is the intention of the IAEA that the publication of these proceedings will extend the benefits of the discussions held during the meeting to a larger audience throughout the world. The Technical Committee/Workshop on Computer Aided Safety Analysis was organized by the IAEA in cooperation with the National Board for Safety and Radiological Protection (SAAS) of the German Democratic Republic in Berlin. The purpose of the meeting was to provide an opportunity for discussions on experiences in the use of computer codes used for safety analysis of nuclear power plants. In particular it was intended to provide a forum for exchange of information among experts using computer codes for safety analysis under the Technical Cooperation Programme on Safety of WWER Type Reactors (RER/9/004) and other experts throughout the world. A separate abstract was prepared for each of the 22 selected papers. Refs, figs tabs and pictures

  12. Isotropic stars in general relativity

    International Nuclear Information System (INIS)

    Mak, M.K.; Harko, T.

    2013-01-01

    We present a general solution of the Einstein gravitational field equations for the static spherically symmetric gravitational interior space-time of an isotropic fluid sphere. The solution is obtained by transforming the pressure isotropy condition, a second order ordinary differential equation, into a Riccati type first order differential equation, and using a general integrability condition for the Riccati equation. This allows us to obtain an exact non-singular solution of the interior field equations for a fluid sphere, expressed in the form of infinite power series. The physical features of the solution are studied in detail numerically by truncating the infinite series expansions and restricting the numerical analysis to n=21 terms in the power series representations of the relevant astrophysical parameters. In the present model all physical quantities (density, pressure, speed of sound, etc.) are finite at the center of the sphere. The physical behavior of the solution essentially depends on the equation of state of the dense matter at the center of the star. The stability properties of the model are also analyzed in detail for a number of central equations of state, and it is shown that the model is stable with respect to radial adiabatic perturbations. The astrophysical analysis indicates that this solution can be used as a realistic model for static general relativistic high density objects, like neutron stars. (orig.)

  13. Computational system for geostatistical analysis

    Directory of Open Access Journals (Sweden)

    Vendrusculo Laurimar Gonçalves

    2004-01-01

    Full Text Available Geostatistics identifies the spatial structure of variables representing several phenomena and its use is becoming more intense in agricultural activities. This paper describes a computer program, based on Windows interfaces (Borland Delphi), which performs spatial analyses of datasets through geostatistical tools: classical statistical calculations, averages, cross- and directional semivariograms, simple kriging estimates, and jackknifing calculations. A published dataset of soil carbon and nitrogen was used to validate the system. The system proved useful for the geostatistical analysis process, as compared with the manipulation of computational routines in an MS-DOS environment. The Windows development approach allowed the user to model the semivariogram graphically with a greater degree of interaction, functionality rarely available in similar programs. Given its capacity for quick prototyping and its simplicity when incorporating correlated routines, the Delphi environment presents the main advantage of permitting the evolution of this system.
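
    For illustration, the core of an omnidirectional experimental semivariogram can be written in a few lines. This is a generic sketch of the classical Matheron estimator on synthetic data, not code from the Delphi system described above.

```python
import numpy as np

def empirical_semivariogram(x, y, z, lags, tol):
    """Classical estimator: gamma(h) = sum (z_i - z_j)^2 / (2 N(h)) over pairs
    whose separation distance falls within +/- tol of each lag h."""
    d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    dz2 = (z[:, None] - z[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gamma.append(dz2[mask].mean() / 2.0 if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
z = np.sin(x / 20.0) + 0.1 * rng.standard_normal(200)     # stand-in for soil C or N
print(empirical_semivariogram(x, y, z, lags=np.arange(5, 50, 5), tol=2.5))
```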

  14. A computational clonal analysis of the developing mouse limb bud.

    Directory of Open Access Journals (Sweden)

    Luciano Marcon

    Full Text Available A comprehensive spatio-temporal description of the tissue movements underlying organogenesis would be an extremely useful resource for developmental biology. Clonal analysis and fate mappings are popular experiments to study tissue movement during morphogenesis. Such experiments allow cell populations to be labeled at an early stage of development and their spatial evolution to be followed over time. However, disentangling the cumulative effects of the multiple events responsible for the expansion of the labeled cell population is not always straightforward. To overcome this problem, we develop a novel computational method that combines accurate quantification of 2D limb bud morphologies and growth modeling to analyze mouse clonal data of early limb development. Firstly, we explore various tissue movements that match experimental limb bud shape changes. Secondly, by comparing computational clones with newly generated mouse clonal data we are able to choose and characterize the tissue movement map that best matches the experimental data. Our computational analysis produces, for the first time, a two-dimensional model of limb growth based on experimental data that can be used to better characterize limb tissue movement in space and time. The model shows that the distribution and shapes of clones can be described as a combination of anisotropic growth with isotropic cell mixing, without the need for lineage compartmentalization along the AP and PD axes. Lastly, we show that this comprehensive description can be used to reassess spatio-temporal gene regulation taking tissue movement into account and to investigate PD patterning hypotheses.

  15. Computational analysis of cerebral cortex

    Energy Technology Data Exchange (ETDEWEB)

    Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan)

    2010-08-15

    Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)

  16. Computational analysis of cerebral cortex

    International Nuclear Information System (INIS)

    Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni

    2010-01-01

    Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)

  17. Three-dimensional analysis of cellular microstructures by computer simulation

    International Nuclear Information System (INIS)

    Hanson, K.; Morris, J.W. Jr.

    1977-06-01

    For microstructures of the "cellular" type (isotropic growth from a distribution of nuclei which form simultaneously), it is possible to construct an efficient code which will completely analyze the microstructure in three dimensions. Such a computer code for creating and storing the connected graph was constructed.
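
    When all nuclei appear simultaneously and grow isotropically at the same rate, every point of the volume ends up in the cell of its nearest nucleus, i.e., the microstructure is a Voronoi tessellation. The following is a minimal voxel-based sketch of that geometric fact, not the connected-graph code of the report; the grid size and nucleus count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
nuclei = rng.uniform(0.0, 1.0, size=(30, 3))      # nuclei that form simultaneously
n = 32                                            # voxels per edge of the unit cube
axes = [np.linspace(0.0, 1.0, n)] * 3
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)

# Isotropic growth at a common rate: each voxel joins the grain of its nearest nucleus.
d2 = ((grid[..., None, :] - nuclei) ** 2).sum(axis=-1)
labels = d2.argmin(axis=-1)                       # 3D array of grain indices

print(np.bincount(labels.ravel(), minlength=len(nuclei)))   # voxels per grain
```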

  18. Computer aided analysis of disturbances

    International Nuclear Information System (INIS)

    Baldeweg, F.; Lindner, A.

    1986-01-01

    Computer aided analysis of disturbances and the prevention of failures (diagnosis and therapy control) in technological plants belong to the most important tasks of process control. Research in this field is very intensive, due to increasing requirements for the security and economy of process control and to the remarkable increase in the efficiency of digital electronics. This publication concerns the analysis of disturbances in complex technological plants, especially in so-called high-risk processes. The presentation emphasizes the theoretical concepts of diagnosis and therapy control, the modelling of the disturbance behaviour of the technological process, and man-machine communication integrating artificial intelligence methods, e.g., the expert system approach. An application is given for nuclear power plants. (author)

  19. Isotropic Broadband E-Field Probe

    Directory of Open Access Journals (Sweden)

    Béla Szentpáli

    2008-01-01

    Full Text Available An E-field probe has been developed for EMC immunity tests performed in closed space. The leads are flexible resistive transmission lines. Their influence on the field distribution is negligible. The probe has an isotropic reception from 100 MHz to 18 GHz; the sensitivity is in the 3 V/m–10 V/m range. The device is an accessory of the EMC test chamber. The readout of the field magnitude is carried out by a personal computer, which also performs the required corrections of the raw data.

  20. Crack Tip Creep Deformation Behavior in Transversely Isotropic Materials

    International Nuclear Information System (INIS)

    Ma, Young Wha; Yoon, Kee Bong

    2009-01-01

    A theoretical mechanics analysis and finite element simulation were performed to investigate creep deformation behavior at the crack tip of transversely isotropic materials under small-scale creep (SSC) conditions. The mechanical behavior of the material was assumed to be elastic-secondary (2nd stage) creep, in which the elastic modulus (E), Poisson's ratio (ν), and creep stress exponent (n) were isotropic and only the creep coefficient was transversely isotropic. Based on the mechanics analysis of the material behavior, a constitutive equation for transversely isotropic creep behavior was formulated and an equivalent creep coefficient was proposed under plane strain conditions. Creep deformation behavior at the crack tip was investigated through finite element analysis. The results of the finite element analysis showed that creep deformation in transversely isotropic materials is dominant at the rear of the crack tip. This result was more obvious when the load was applied along a principal axis of anisotropy. Based on the results of the mechanics analysis and the finite element simulation, a corrected estimation scheme for the creep zone size was proposed in order to evaluate the creep deformation behavior at the crack tip of transversely isotropic creeping materials

  1. Ellipsoidal basis for isotropic oscillator

    International Nuclear Information System (INIS)

    Kallies, W.; Lukac, I.; Pogosyan, G.S.; Sisakyan, A.N.

    1994-01-01

    The solutions of the Schroedinger equation are derived for the isotropic oscillator potential in the ellipsoidal coordinate system. The explicit expression is obtained for the ellipsoidal integrals of motion through the components of the orbital moment and Demkov's tensor. The explicit form of the ellipsoidal basis is given for the lowest quantum numbers. 10 refs.; 1 tab. (author)

  2. Circular random motion in diatom gliding under isotropic conditions

    International Nuclear Information System (INIS)

    Gutiérrez-Medina, Braulio; Maldonado, Ana Iris Peña; Guerra, Andrés Jiménez; Rubio, Yadiralia Covarrubias; Meza, Jessica Viridiana García

    2014-01-01

    How cells migrate has been investigated primarily for the case of trajectories composed of joined straight segments. In contrast, little is known when cellular motion follows intrinsically curved paths. Here, we use time-lapse optical microscopy and automated trajectory tracking to investigate how individual cells of the diatom Nitzschia communis glide across surfaces under isotropic environmental conditions. We find a distinct kind of random motion, where trajectories are formed by circular arcs traveled at constant speed, alternated with random stoppages, direction reversals and changes in the orientation of the arcs. Analysis of experimental and computer-simulated trajectories shows that the circular random motion of diatom gliding is not optimized for long-distance travel but rather for recurrent coverage of a limited surface area. These results suggest that one main biological role for this type of diatom motility is to efficiently build the foundation of algal biofilms. (paper)
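
    A toy simulation of the motion described above, constant-speed travel along circular arcs interrupted by random stoppages, reversals, and orientation switches, can be written directly; all rates and geometric parameters below are illustrative assumptions, not values fitted to the diatom data.

```python
import numpy as np

rng = np.random.default_rng(2)

def circular_arc_walk(n_steps, dt, speed, radius):
    """Trajectory built from circular arcs traveled at constant speed, with random
    stoppages, direction reversals, and changes of arc orientation."""
    x, y, heading, sense = 0.0, 0.0, 0.0, 1      # sense = +1/-1: orientation of the arc
    xs, ys = [x], [y]
    for _ in range(n_steps):
        if rng.random() < 0.02:                  # random stoppage
            xs.append(x); ys.append(y); continue
        if rng.random() < 0.01:                  # direction reversal
            heading += np.pi
        if rng.random() < 0.01:                  # change of arc orientation
            sense = -sense
        heading += sense * speed * dt / radius   # constant curvature 1/radius
        x += speed * dt * np.cos(heading)
        y += speed * dt * np.sin(heading)
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = circular_arc_walk(5000, dt=0.1, speed=5.0, radius=40.0)
```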

  3. Personal Computer Transport Analysis Program

    Science.gov (United States)

    DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter

    2012-01-01

    The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.
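
    Ordering components "in the order of their inlet dependency on other components" amounts to a topological ordering of the component graph. A hedged sketch of that ordering step follows; the component names and dependency map are hypothetical, and PCTAP itself is C++, not Python.

```python
from graphlib import TopologicalSorter

# Hypothetical thermal-fluid network: each component lists the components feeding its inlet.
inlet_deps = {
    "pump":       [],
    "cold_plate": ["pump"],
    "heat_exch":  ["cold_plate"],
    "scrubber":   ["heat_exch"],
    "tank":       ["scrubber"],
}

solution_vector = list(TopologicalSorter(inlet_deps).static_order())
print(solution_vector)   # components would be updated in this order at every time step
```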

  4. Piping stress analysis with personal computers

    International Nuclear Information System (INIS)

    Revesz, Z.

    1987-01-01

    The growing market of personal computers is providing an increasing number of professionals with unprecedented and surprisingly inexpensive computing capacity which, if used with powerful software, can immensely enhance an engineer's capabilities. This paper focuses on the possibilities opened up in piping stress analysis by the widespread distribution of personal computers, on the necessary changes in the software, and on the limitations of using personal computers for engineering design and analysis. Reliability and quality assurance aspects of using personal computers for nuclear applications are also mentioned. The paper concludes with the personal views of the author and experience gained during the development of interactive graphic piping software for personal computers. (orig./GL)

  5. Computer-Based Linguistic Analysis.

    Science.gov (United States)

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…

  6. Active isotropic slabs: conditions for amplified reflection

    Science.gov (United States)

    Perez, Liliana I.; Matteo, Claudia L.; Etcheverry, Javier; Duplaá, María Celeste

    2012-12-01

    We analyse in detail the necessary conditions to obtain amplified reflection (AR) in isotropic interfaces when a plane wave propagates from a transparent medium towards an active one. First, we demonstrate analytically that AR is not possible if a single interface is involved. Then, we study the conditions for AR in a very simple configuration: normal incidence on an active slab immersed in transparent media. Finally, we develop an analysis in the complex plane in order to establish a geometrical method that not only describes the behaviour of active slabs but also helps to simplify the calculus.

  7. Active isotropic slabs: conditions for amplified reflection

    International Nuclear Information System (INIS)

    Perez, Liliana I; Duplaá, María Celeste; Matteo, Claudia L; Etcheverry, Javier

    2012-01-01

    We analyse in detail the necessary conditions to obtain amplified reflection (AR) in isotropic interfaces when a plane wave propagates from a transparent medium towards an active one. First, we demonstrate analytically that AR is not possible if a single interface is involved. Then, we study the conditions for AR in a very simple configuration: normal incidence on an active slab immersed in transparent media. Finally, we develop an analysis in the complex plane in order to establish a geometrical method that not only describes the behaviour of active slabs but also helps to simplify the calculus. (paper)
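
    The slab configuration analysed in these papers can be explored numerically with the standard Airy (thin-film) reflection formula, letting the slab index be complex. The sketch below assumes normal incidence and an exp(-iωt) time convention in which a negative imaginary index represents gain; the indices, thickness, and wavelength are illustrative, not values from the papers.

```python
import numpy as np

def slab_reflectance(n1, n2, n3, d, wavelength):
    """Normal-incidence intensity reflectance of a slab (index n2, thickness d)
    between semi-infinite media n1 and n3, from the Airy summation."""
    r12 = (n1 - n2) / (n1 + n2)
    r23 = (n2 - n3) / (n2 + n3)
    beta = 2.0 * np.pi * n2 * d / wavelength                   # one-way phase thickness
    r = (r12 + r23 * np.exp(2j * beta)) / (1.0 + r12 * r23 * np.exp(2j * beta))
    return abs(r) ** 2

# Passive slab vs. active slab immersed in a transparent medium (illustrative numbers)
print(slab_reflectance(1.5, 1.6 + 0.00j, 1.5, d=10e-6, wavelength=633e-9))
print(slab_reflectance(1.5, 1.6 - 0.02j, 1.5, d=10e-6, wavelength=633e-9))
```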

  8. Investigating source processes of isotropic events

    Science.gov (United States)

    Chiang, Andrea

    explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling; however, in regions of low seismicity, velocity models derived from body or surface wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity model (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996). The correct earthquake mechanisms and source depths are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff, but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion. Path calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources, and improve
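
    The quantity at the heart of this discrimination problem is the isotropic part of the moment tensor. As a minimal, generic illustration (not the inversion or NSS machinery of the thesis), a moment tensor can be split into isotropic and deviatoric parts as follows; the tensor values are arbitrary placeholders.

```python
import numpy as np

def iso_deviatoric_split(M):
    """Split a 3x3 seismic moment tensor into isotropic and deviatoric parts;
    the isotropic part (trace/3 on the diagonal) carries the explosion signature."""
    M = np.asarray(M, dtype=float)
    iso = np.trace(M) / 3.0 * np.eye(3)
    return iso, M - iso

M = np.array([[1.2, 0.1, 0.0],      # hypothetical moment tensor, arbitrary units
              [0.1, 1.0, 0.2],
              [0.0, 0.2, 0.9]])
iso, dev = iso_deviatoric_split(M)
print(np.trace(M) / 3.0, np.linalg.norm(dev))   # isotropic moment and deviatoric magnitude
```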

  9. Steady- and transient-state analysis of fully ceramic microencapsulated fuel with randomly dispersed tristructural isotropic particles via two-temperature homogenized model-I: Theory and method

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin

    2016-01-01

    As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure are obtained by (1) matching steady-state analytic solutions of the model with the results of particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained

  10. Transportation Research & Analysis Computing Center

    Data.gov (United States)

    Federal Laboratory Consortium — The technical objectives of the TRACC project included the establishment of a high performance computing center for use by USDOT research teams, including those from...

  11. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.

  12. Batch Computed Tomography Analysis of Projectiles

    Science.gov (United States)

    2016-05-01

    ARL-TR-7681, May 2016, US Army Research Laboratory. Batch Computed Tomography Analysis of Projectiles, by Michael C Golt and Matthew S Bratcher, Weapons and Materials Research... values to account for projectile variability in the ballistic evaluation of armor. Subject terms: computed tomography, CT, BS41, projectiles

  13. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available The basic statistical methods used in the genetic analysis of human traits are considered. Segregation analysis, linkage analysis, and allelic association studies are examined. Software supporting the implementation of these methods was developed.

  14. Induced piezoelectricity in isotropic biomaterial.

    Science.gov (United States)

    Zimmerman, R L

    1976-01-01

    Isotropic material can be made to exhibit piezoelectric effects by the application of a constant electric field. For insulators, the piezoelectric strain constant is proportional to the applied electric field and for semiconductors, an additional out-of-phase component of piezoelectricity is proportional to the electric current density in the sample. The two induced coefficients are proportional to the strain-dependent dielectric constant (dε/dS + ε) and resistivity (dρ/dS - ρ), respectively. The latter is more important at frequencies such that ρεω < 1, often the case in biopolymers. Signals from induced piezoelectricity in nature may be larger than those from true piezoelectricity. PMID:990389

  15. How Isotropic is the Universe?

    Science.gov (United States)

    Saadeh, Daniela; Feeney, Stephen M; Pontzen, Andrew; Peiris, Hiranya V; McEwen, Jason D

    2016-09-23

    A fundamental assumption in the standard model of cosmology is that the Universe is isotropic on large scales. Breaking this assumption leads to a set of solutions to Einstein's field equations, known as Bianchi cosmologies, only a subset of which have ever been tested against data. For the first time, we consider all degrees of freedom in these solutions to conduct a general test of isotropy using cosmic microwave background temperature and polarization data from Planck. For the vector mode (associated with vorticity), we obtain a tight limit on the anisotropic expansion (σ_V/H)_0. Anisotropic expansion of the Universe is strongly disfavored, with odds of 121 000:1 against.

  16. Steady- and transient-state analysis of fully ceramic microencapsulated fuel with randomly dispersed tristructural isotropic particles via two-temperature homogenized model-II: Applications by coupling with COREDAX

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin

    2016-01-01

    In Part I of this paper, the two-temperature homogenized model for the fully ceramic microencapsulated fuel, in which tristructural isotropic particles are randomly dispersed in a fine lattice stochastic structure, was discussed. In this model, the fuel-kernel and silicon carbide matrix temperatures are distinguished. Moreover, the obtained temperature profiles are more realistic than those obtained using other models. Using the temperature-dependent thermal conductivities of uranium nitride and the silicon carbide matrix, temperature-dependent homogenized parameters were obtained. In Part II of the paper, coupled with the COREDAX code, a reactor core loaded by fully ceramic microencapsulated fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure is analyzed via a two-temperature homogenized model at steady and transient states. The results are compared with those from harmonic- and volumetric-average thermal conductivity models; i.e., we compare keff eigenvalues, power distributions, and temperature profiles in the hottest single channel at a steady state. At transient states, we compare total power, average energy deposition, and maximum temperatures in the hottest single channel obtained by the different thermal analysis models. The different thermal analysis models and the availability of fuel-kernel temperatures in the two-temperature homogenized model for Doppler temperature feedback lead to significant differences

  17. Isotropic Negative Thermal Expansion Metamaterials.

    Science.gov (United States)

    Wu, Lingling; Li, Bo; Zhou, Ji

    2016-07-13

    Negative thermal expansion materials are important and desirable in science and engineering applications. However, natural materials with isotropic negative thermal expansion are rare and usually unsatisfied in performance. Here, we propose a novel method to achieve two- and three-dimensional negative thermal expansion metamaterials via antichiral structures. The two-dimensional metamaterial is constructed with unit cells that combine bimaterial strips and antichiral structures, while the three-dimensional metamaterial is fabricated by a multimaterial 3D printing process. Both experimental and simulation results display isotropic negative thermal expansion property of the samples. The effective coefficient of negative thermal expansion of the proposed models is demonstrated to be dependent on the difference between the thermal expansion coefficient of the component materials, as well as on the circular node radius and the ligament length in the antichiral structures. The measured value of the linear negative thermal expansion coefficient of the three-dimensional sample is among the largest achieved in experiments to date. Our findings provide an easy and practical approach to obtaining materials with tunable negative thermal expansion on any scale.

  18. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  19. IUE Data Analysis Software for Personal Computers

    Science.gov (United States)

    Thompson, R.; Caplinger, J.; Taylor, L.; Lawton , P.

    1996-01-01

    This report summarizes the work performed for the program titled, "IUE Data Analysis Software for Personal Computers" awarded under Astrophysics Data Program NRA 92-OSSA-15. The work performed was completed over a 2-year period starting in April 1994. As a result of the project, 450 IDL routines and eight database tables are now available for distribution for Power Macintosh computers and Personal Computers running Windows 3.1.

  20. Thermalization vs. isotropization and azimuthal fluctuations

    International Nuclear Information System (INIS)

    Mrowczynski, Stanislaw

    2005-01-01

    Hydrodynamic description requires a local thermodynamic equilibrium of the system under study but an approximate hydrodynamic behaviour is already manifested when a momentum distribution of liquid components is not of equilibrium form but merely isotropic. While the process of equilibration is relatively slow, the parton system becomes isotropic rather fast due to the plasma instabilities. Azimuthal fluctuations observed in relativistic heavy-ion collisions are argued to distinguish between a fully equilibrated and only isotropic parton system produced in the collision early stage

  1. Computational methods for corpus annotation and analysis

    CERN Document Server

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  2. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  3. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  4. A computational description of simple mediation analysis

    Directory of Open Access Journals (Sweden)

    Caron, Pier-Olivier

    2018-04-01

    Full Text Available Simple mediation analysis is an increasingly popular statistical analysis in psychology and in other social sciences. However, there are very few detailed accounts of the computations within the model. Articles more often focus on explaining mediation analysis conceptually rather than mathematically. Thus, the purpose of the current paper is to introduce the computational modelling within simple mediation analysis, accompanied by examples in R. Firstly, mediation analysis is described. Then, the method to simulate data in R (with standardized coefficients) is presented. Finally, the bootstrap method, the Sobel test and the Baron and Kenny test, all used to evaluate mediation (i.e., the indirect effect), are developed. The R code to implement the computations presented is offered, as well as a script to carry out a power analysis and a complete example.
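
    The same workflow can be sketched outside R; the following hedged Python analogue simulates data with standardized path coefficients, computes the Sobel z statistic, and bootstraps the indirect effect a·b. The coefficient values and sample size are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
a_true, b_true, cp_true = 0.5, 0.4, 0.2          # illustrative standardized path coefficients

x = rng.standard_normal(n)
m = a_true * x + rng.standard_normal(n) * np.sqrt(1 - a_true**2)
y = b_true * m + cp_true * x + rng.standard_normal(n)

def ols_slope_se(X, y):
    """Slope and standard error of the last predictor in an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(y), *np.atleast_2d(X)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = ((y - X @ beta) ** 2).sum() / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[-1], np.sqrt(cov[-1, -1])

a, se_a = ols_slope_se([x], m)                    # path a: X -> M
b, se_b = ols_slope_se([x, m], y)                 # path b: M -> Y, controlling for X
sobel_z = a * b / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)

# Nonparametric bootstrap of the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    ab, _ = ols_slope_se([x[idx]], m[idx])
    bb, _ = ols_slope_se([x[idx], m[idx]], y[idx])
    boot.append(ab * bb)
print(sobel_z, np.percentile(boot, [2.5, 97.5]))
```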

  5. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations

  6. Computer assisted functional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, H A.E.; Roesler, H

    1982-01-01

    The latest developments in computer-assisted functional analysis (CFA) in nuclear medicine are presented in about 250 papers of the 19th international annual meeting of the Society of Nuclear Medicine (Bern, September 1981). Apart from the mathematical and instrumental aspects of CFA, computerized emission tomography is given particular attention. Advances in nuclear medical diagnosis in the fields of radiopharmaceuticals, cardiology, angiology, neurology, ophthalmology, pulmonology, gastroenterology, nephrology, endocrinology, oncology and osteology are discussed.

  7. Isotropic quantum walks on lattices and the Weyl equation

    Science.gov (United States)

    D'Ariano, Giacomo Mauro; Erba, Marco; Perinotti, Paolo

    2017-12-01

    We present a thorough classification of the isotropic quantum walks on lattices of dimension d = 1, 2, 3 with a coin system of dimension s = 2. For d = 3 there exist two isotropic walks, namely, the Weyl quantum walks presented in the work of D'Ariano and Perinotti [G. M. D'Ariano and P. Perinotti, Phys. Rev. A 90, 062106 (2014), 10.1103/PhysRevA.90.062106], resulting in the derivation of the Weyl equation from informational principles. The present analysis, via a crucial use of isotropy, is significantly shorter and avoids a superfluous technical assumption, making the result completely general.

  8. DFT computational analysis of piracetam

    Science.gov (United States)

    Rajesh, P.; Gunasekaran, S.; Seshadri, S.; Gnanasambandan, T.

    2014-11-01

    Density functional theory calculations with B3LYP using the 6-31G(d,p) and 6-31++G(d,p) basis sets have been used to determine ground-state molecular geometries. The first-order hyperpolarizability (β0) and related properties (β, α0 and Δα) of piracetam are calculated using the B3LYP/6-31G(d,p) method within the finite-field approach. The stability of the molecule has been analyzed by NBO/NLMO analysis. The calculation of the first hyperpolarizability shows that the molecule is attractive for future applications in non-linear optics. The molecular electrostatic potential (MEP) at a point in the space around a molecule gives an indication of the net electrostatic effect produced at that point by the total charge distribution of the molecule. The calculated HOMO and LUMO energies show that charge transfer occurs within these molecules. A Mulliken population analysis of the atomic charges is also carried out. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-Vis spectra and electronic absorption properties are explained and illustrated from the frontier molecular orbitals.

  9. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
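
    The essential idea, propagating derivatives alongside the original computation, can be illustrated with forward-mode automatic differentiation via dual numbers. This is only a conceptual Python sketch of the principle, not the GRESS precompiler, which operates on FORTRAN source and also supports adjoint calculations.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Value/derivative pair propagated through arithmetic (forward-mode AD)."""
    val: float
    der: float = 0.0
    def __add__(self, o): o = _lift(o); return Dual(self.val + o.val, self.der + o.der)
    def __mul__(self, o): o = _lift(o); return Dual(self.val * o.val,
                                                    self.der * o.val + self.val * o.der)
    __radd__ = __add__
    __rmul__ = __mul__

def _lift(x):
    return x if isinstance(x, Dual) else Dual(float(x))

def dsin(x: Dual) -> Dual:
    """Sine with its derivative propagated via the chain rule."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Sensitivity of a toy model response R(k) = k*sin(k) + 2k with respect to parameter k
k = Dual(1.3, 1.0)            # seed derivative dk/dk = 1
R = k * dsin(k) + 2 * k
print(R.val, R.der)           # response value and its sensitivity dR/dk
```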

  10. Turbo Pascal Computer Code for PIXE Analysis

    International Nuclear Information System (INIS)

    Darsono

    2002-01-01

    To make optimal use of the 150 kV ion accelerator facilities and to master the analysis techniques based on the ion accelerator, research and development of low-energy PIXE technology has been carried out. The R and D on the hardware of the low-energy PIXE installation at P3TM has been carried out since the year 2000. To support the R and D of the PIXE accelerator facilities in harmony with the R and D of the PIXE hardware, the development of PIXE analysis software is also needed. The development of the database part of the PIXE analysis software, written in Turbo Pascal, is reported in this paper. This computer code computes the ionization cross-section, the fluorescence yield, and the stopping power of elements; it also computes the attenuation coefficient as a function of X-ray energy. The computer code is named PIXEDASIS and it is part of a larger computer code planned for PIXE analysis that will be constructed in the near future. PIXEDASIS is designed to be communicative with the user: input is from the keyboard, and the output is shown on the PC monitor and can also be printed. The performance test of PIXEDASIS shows that it operates well and provides data in agreement with data from other literature. (author)
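
    One of the listed ingredients, the attenuation of X-rays in an absorber, follows the Beer-Lambert law and is simple to compute. The sketch below is a generic illustration with placeholder coefficients, not output of PIXEDASIS.

```python
import numpy as np

def transmitted_fraction(mu_rho, rho, thickness_cm):
    """Beer-Lambert attenuation I/I0 = exp(-(mu/rho) * rho * x) for a single absorber,
    where mu_rho is the mass attenuation coefficient (cm^2/g) at the X-ray energy."""
    return np.exp(-mu_rho * rho * thickness_cm)

# Transmission of a characteristic X-ray through a thin absorber (illustrative numbers)
print(transmitted_fraction(mu_rho=2.5, rho=1.39, thickness_cm=6e-4))
```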

  11. Macroscopic simulation of isotropic permanent magnets

    International Nuclear Information System (INIS)

    Bruckner, Florian; Abert, Claas; Vogler, Christoph; Heinrichs, Frank; Satz, Armin; Ausserlechner, Udo; Binder, Gernot; Koeck, Helmut; Suess, Dieter

    2016-01-01

    Accurate simulations of isotropic permanent magnets require to take the magnetization process into account and consider the anisotropic, nonlinear, and hysteretic material behaviour near the saturation configuration. An efficient method for the solution of the magnetostatic Maxwell equations including the description of isotropic permanent magnets is presented. The algorithm can easily be implemented on top of existing finite element methods and does not require a full characterization of the hysteresis of the magnetic material. Strayfield measurements of an isotropic permanent magnet and simulation results are in good agreement and highlight the importance of a proper description of the isotropic material. - Highlights: • Simulations of isotropic permanent magnets. • Accurate calculation of remanence magnetization and strayfield. • Comparison with strayfield measurements and anisotropic magnet simulations. • Efficient 3D FEM–BEM coupling for solution of Maxwell equations.

  12. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  13. Computer graphics in reactor safety analysis

    International Nuclear Information System (INIS)

    Fiala, C.; Kulak, R.F.

    1989-01-01

    This paper describes a family of three computer graphics codes designed to assist the analyst in three areas: the modelling of complex three-dimensional finite element models of reactor structures; the interpretation of computational results; and the reporting of the results of numerical simulations. The purpose and key features of each code are presented. The graphics output used in actual safety analysis are used to illustrate the capabilities of each code. 5 refs., 10 figs

  14. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method places the fewest restrictions on perturbations but requires the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing the administrative margin of subcriticality makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
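
    A minimal sketch of the sampling-based approach mentioned above: input parameters are drawn from assumed uncertainty distributions, a criticality calculation is run for each draw, and the spread of the resulting k_eff values is reported. Here the transport calculation is replaced by a smooth toy response, since a real study would call a Monte Carlo code for every sample.

```python
import numpy as np

rng = np.random.default_rng(3)

def keff_model(enrichment, density):
    """Stand-in for a Monte Carlo criticality run; the small noise term mimics
    the stochastic uncertainty of an individual k_eff estimate."""
    return 0.90 + 0.04 * enrichment + 0.02 * density + rng.normal(0.0, 2e-4)

# Sampling-based propagation of input uncertainties to k_eff
samples = [keff_model(rng.normal(1.0, 0.02), rng.normal(1.0, 0.01)) for _ in range(200)]
print(np.mean(samples), np.std(samples, ddof=1))
```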

  15. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  16. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis
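
    The single-program multiple-data split described above maps naturally onto a pool of worker processes, each handling a subset of frames. The sketch below shows only that division of labour with a placeholder per-frame computation, not the actual fringe-analysis algorithm of the paper.

```python
import numpy as np
from multiprocessing import Pool

def process_frame(frame):
    """Placeholder per-frame analysis (here, a crude phase estimate from one FFT bin)."""
    return np.angle(np.fft.fft2(frame)[1, 1])

if __name__ == "__main__":
    frames = [np.random.rand(256, 256) for _ in range(64)]   # temporal stack of fringe images
    with Pool(processes=4) as pool:                          # one worker per (virtual) processor
        phases = pool.map(process_frame, frames)             # frames are processed in parallel
    print(len(phases))
```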

  17. A computer program for activation analysis

    International Nuclear Information System (INIS)

    Rantanen, J.; Rosenberg, R.J.

    1983-01-01

    A computer program for calculating the results of activation analysis is described. The program comprises two gamma spectrum analysis programs, STOAV and SAMPO and one program for calculating elemental concentrations, KVANT. STOAV is based on a simple summation of channels and SAMPO is based on fitting of mathematical functions. The programs are tested by analyzing the IAEA G-1 test spectra. In the determination of peak location SAMPO is somewhat better than STOAV and in the determination of peak area SAMPO is more than twice as accurate as STOAV. On the other hand, SAMPO is three times as expensive as STOAV with the use of a Cyber 170 computer. (author)

  18. Safety analysis of control rod drive computers

    International Nuclear Information System (INIS)

    Ehrenberger, W.; Rauch, G.; Schmeil, U.; Maertz, J.; Mainka, E.U.; Nordland, O.; Gloee, G.

    1985-01-01

    The analysis of the most significant user programmes revealed no errors in these programmes. The evaluation of approximately 82 cumulated years of operation demonstrated that the operating system of the control rod positioning processor has a reliability that is sufficiently good for the tasks this computer has to fulfil. Computers can be used for safety relevant tasks. The experience gained with the control rod positioning processor confirms that computers are not less reliable than conventional instrumentation and control system for comparable tasks. The examination and evaluation of computers for safety relevant tasks can be done with programme analysis or statistical evaluation of the operating experience. Programme analysis is recommended for seldom used and well structured programmes. For programmes with a long, cumulated operating time a statistical evaluation is more advisable. The effort for examination and evaluation is not greater than the corresponding effort for conventional instrumentation and control systems. This project has also revealed that, where it is technologically sensible, process controlling computers or microprocessors can be qualified for safety relevant tasks without undue effort. (orig./HP) [de

  19. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted the Direct Numerical Simulation (DNS) of a forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely the statistics are computed along the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) have then been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part from the compressing part of the flow, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlation between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No.2009CB724101.

  20. Surface computing and collaborative analysis work

    CERN Document Server

    Brown, Judith; Gossage, Stevenson; Hack, Chris

    2013-01-01

    Large surface computing devices (wall-mounted or tabletop) with touch interfaces and their application to collaborative data analysis, an increasingly important and prevalent activity, is the primary topic of this book. Our goals are to outline the fundamentals of surface computing (a still maturing technology), review relevant work on collaborative data analysis, describe frameworks for understanding collaborative processes, and provide a better understanding of the opportunities for research and development. We describe surfaces as display technologies with which people can interact directly, and emphasize how interaction design changes when designing for large surfaces. We review efforts to use large displays, surfaces or mixed display environments to enable collaborative analytic activity. Collaborative analysis is important in many domains, but to provide concrete examples and a specific focus, we frequently consider analysis work in the security domain, and in particular the challenges security personne...

  1. Computer-assisted qualitative data analysis software.

    Science.gov (United States)

    Cope, Diane G

    2014-05-01

    Advances in technology have provided new approaches for data collection methods and analysis for researchers. Data collection is no longer limited to paper-and-pencil format, and numerous methods are now available through Internet and electronic resources. With these techniques, researchers are not burdened with entering data manually and data analysis is facilitated by software programs. Quantitative research is supported by the use of computer software and provides ease in the management of large data sets and rapid analysis of numeric statistical methods. New technologies are emerging to support qualitative research with the availability of computer-assisted qualitative data analysis software (CAQDAS). CAQDAS will be presented with a discussion of advantages, limitations, controversial issues, and recommendations for this type of software use.

  2. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  3. Computation for the analysis of designed experiments

    CERN Document Server

    Heiberger, Richard

    2015-01-01

    Addresses the statistical, mathematical, and computational aspects of the construction of packages and analysis of variance (ANOVA) programs. Includes a disk at the back of the book that contains all program codes in four languages, APL, BASIC, C, and FORTRAN. Presents illustrations of the dual space geometry for all designs, including confounded designs.

  4. Computational analysis of a multistage axial compressor

    Science.gov (United States)

    Mamidoju, Chaithanya

    Turbomachines are used extensively in Aerospace, Power Generation, and Oil & Gas Industries. The efficiency of these machines is often an important factor and has led to the continuous effort to improve the design to achieve better efficiency. The axial flow compressor is a major component in a gas turbine, with the turbine's overall performance depending strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade passage analysis, Quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3-D Navier-Stokes CFD solver in a parallel computing environment. A methodology is described for steady state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, tip clearance and numerical issues such as turbulence model choice, advection model choice, and parallel processing performance are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations are given for the flow features observed in the computational study. The total pressure rise versus mass flow rate was computed.

  5. Computation system for nuclear reactor core analysis

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.

    1977-04-01

    This report documents a system which contains computer codes as modules developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, as are aspects common to the computational modules; the modules themselves are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals

  6. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  7. Plasma geometric optics analysis and computation

    International Nuclear Information System (INIS)

    Smith, T.M.

    1983-01-01

    Important practical applications in the generation, manipulation, and diagnosis of laboratory thermonuclear plasmas have created a need for elaborate computational capabilities in the study of high frequency wave propagation in plasmas. A reduced description of such waves suitable for digital computation is provided by the theory of plasma geometric optics. The existing theory is beset by a variety of special cases in which the straightforward analytical approach fails, and has been formulated with little attention to problems of numerical implementation of that analysis. The standard field equations are derived for the first time from kinetic theory. A discussion of certain terms previously, and erroneously, omitted from the expansion of the plasma constitutive relation is given. A powerful but little known computational prescription for determining the geometric optics field in the neighborhood of caustic singularities is rigorously developed, and a boundary layer analysis for the asymptotic matching of the plasma geometric optics field across caustic singularities is performed for the first time with considerable generality. A proper treatment of birefringence is detailed, wherein a breakdown of the fundamental perturbation theory is identified and circumvented. A general ray tracing computer code suitable for applications to radiation heating and diagnostic problems is presented and described
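
    The ray-tracing idea can be sketched for the simplest case of an isotropic, unmagnetized plasma (far less general than the code described above): with the dispersion function D = c^2 k^2 + omega_pe^2(x) - omega^2, the geometric-optics ray equations dx/dtau = dD/dk and dk/dtau = -dD/dx are integrated numerically; the linear density profile and launch angle below are hypothetical.

      import numpy as np
      from scipy.integrate import solve_ivp

      c, omega = 1.0, 1.0                      # normalized units

      def wpe2(x):
          # hypothetical squared plasma frequency: linear ramp along x[0]
          return 0.5 + 0.4 * x[0]

      def ray_rhs(tau, y):
          x, k = y[:2], y[2:]
          dxdtau = 2.0 * c**2 * k              #  dD/dk
          dkdtau = -np.array([0.4, 0.0])       # -dD/dx for the ramp above
          return np.concatenate([dxdtau, dkdtau])

      # launch a ray satisfying the dispersion relation D = 0 at the start point
      x0 = np.array([0.0, 0.0])
      kmag = np.sqrt(omega**2 - wpe2(x0)) / c
      k0 = kmag * np.array([np.cos(0.3), np.sin(0.3)])
      sol = solve_ivp(ray_rhs, (0.0, 2.0), np.concatenate([x0, k0]), max_step=0.01)
      print(sol.y[:2, -1])                     # ray end point; it bends away from the denser plasma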

  8. Analysis of electronic circuits using digital computers

    International Nuclear Information System (INIS)

    Tapu, C.

    1968-01-01

    Various programmes have been proposed for studying electronic circuits with the help of computers. It is shown here how it is possible to use the programme ECAP, developed by I.B.M., for studying the behaviour of an operational amplifier from different points of view: direct current, alternating current and transient state analysis, optimisation of the gain in open loop, study of the reliability. (author) [fr

  9. Geometrical considerations in analyzing isotropic or anisotropic surface reflections.

    Science.gov (United States)

    Simonot, Lionel; Obein, Gael

    2007-05-10

    The bidirectional reflectance distribution function (BRDF) represents the evolution of the reflectance with the directions of incidence and observation. Today BRDF measurements are increasingly applied and have become important to the study of the appearance of surfaces. The representation and the analysis of BRDF data are discussed, and the distortions caused by the traditional representation of the BRDF in a Fourier plane are pointed out and illustrated for two theoretical cases: an isotropic surface and a brushed surface. These considerations will help characterize either the specular peak width of an isotropic rough surface or the main directions of the light scattered by an anisotropic rough surface without misinterpretations. Finally, what is believed to be a new space is suggested for the representation of the BRDF, which avoids the geometrical deformations and in numerous cases is more convenient for BRDF analysis.

  10. Quantitative time domain analysis of lifetime-based Förster resonant energy transfer measurements with fluorescent proteins: Static random isotropic fluorophore orientation distributions

    DEFF Research Database (Denmark)

    Alexandrov, Yuriy; Nikolic, Dino Solar; Dunsby, Christopher

    2018-01-01

    Förster resonant energy transfer (FRET) measurements are widely used to obtain information about molecular interactions and conformations through the dependence of FRET efficiency on the proximity of donor and acceptor fluorophores. Fluorescence lifetime measurements can provide quantitative...... into new software for fitting donor emission decay profiles. Calculated FRET parameters, including molar population fractions, are compared for the analysis of simulated and experimental FRET data under the assumption of static and dynamic fluorophores and the intermediate regimes between fully dynamic...... analysis of FRET efficiency and interacting population fraction. Many FRET experiments exploit the highly specific labelling of genetically expressed fluorescent proteins, applicable in live cells and organisms. Unfortunately, the typical assumption of fast randomization of fluorophore orientations...

  11. Computational Chemical Synthesis Analysis and Pathway Design

    Directory of Open Access Journals (Sweden)

    Fan Feng

    2018-06-01

    With the idea of retrosynthetic analysis, which was raised in the 1960s, chemical synthesis analysis and pathway design have been transformed from a complex problem to a regular process of structural simplification. This review aims to summarize the developments of computer-assisted synthetic analysis and design in recent years, and how machine-learning algorithms contributed to them. The LHASA system started the pioneering work of designing semi-empirical reaction modes in computers, with its following rule-based and network-searching work not only expanding the databases, but also building new approaches to indicating reaction rules. Programs like ARChem Route Designer replaced hand-coded reaction modes with automatically-extracted rules, and programs like Chematica changed traditional designing into network searching. Afterward, with the help of machine learning, two-step models which combine reaction rules and statistical methods became the mainstream. Recently, fully data-driven learning methods using deep neural networks, which do not even require any prior knowledge, were applied in this field. Up to now, however, these methods still cannot replace experienced human organic chemists due to their relatively low accuracies. Future new algorithms with the aid of powerful computational hardware will make this topic promising, with good prospects.

  12. Radiation statistics in homogeneous isotropic turbulence

    International Nuclear Information System (INIS)

    Da Silva, C B; Coelho, P J; Malico, I

    2009-01-01

    An analysis of turbulence-radiation interaction (TRI) in statistically stationary (forced) homogeneous and isotropic turbulence is presented. A direct numerical simulation code was used to generate instantaneous turbulent scalar fields, and the radiative transfer equation (RTE) was solved to provide statistical data relevant in TRI. The radiation intensity is non-Gaussian and is not spatially correlated with any of the other turbulence or radiation quantities. Its power spectrum exhibits a power-law region with a slope steeper than the classical -5/3 law. The moments of the radiation intensity, Planck-mean and incident-mean absorption coefficients, and emission and absorption TRI correlations are calculated. The influence of the optical thickness of the medium, mean and variance of the temperature and variance of the molar fraction of the absorbing species is studied. Predictions obtained from the time-averaged RTE are also included. It was found that while turbulence yields an increase of the mean blackbody radiation intensity, it causes a decrease of the time-averaged Planck-mean absorption coefficient. The absorption coefficient self-correlation is small in comparison with the temperature self-correlation, and the role of TRI in radiative emission is more important than in radiative absorption. The absorption coefficient-radiation intensity correlation is small, which supports the optically thin fluctuation approximation, and justifies the good predictions often achieved using the time-averaged RTE.

  13. Radiation statistics in homogeneous isotropic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Da Silva, C B; Coelho, P J [Mechanical Engineering Department, IDMEC/LAETA, Instituto Superior Tecnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Malico, I [Physics Department, University of Evora, Rua Romao Ramalho, 59, 7000-671 Evora (Portugal)], E-mail: carlos.silva@ist.utl.pt, E-mail: imbm@uevora.pt, E-mail: pedro.coelho@ist.utl.pt

    2009-09-15

    An analysis of turbulence-radiation interaction (TRI) in statistically stationary (forced) homogeneous and isotropic turbulence is presented. A direct numerical simulation code was used to generate instantaneous turbulent scalar fields, and the radiative transfer equation (RTE) was solved to provide statistical data relevant in TRI. The radiation intensity is non-Gaussian and is not spatially correlated with any of the other turbulence or radiation quantities. Its power spectrum exhibits a power-law region with a slope steeper than the classical -5/3 law. The moments of the radiation intensity, Planck-mean and incident-mean absorption coefficients, and emission and absorption TRI correlations are calculated. The influence of the optical thickness of the medium, mean and variance of the temperature and variance of the molar fraction of the absorbing species is studied. Predictions obtained from the time-averaged RTE are also included. It was found that while turbulence yields an increase of the mean blackbody radiation intensity, it causes a decrease of the time-averaged Planck-mean absorption coefficient. The absorption coefficient self-correlation is small in comparison with the temperature self-correlation, and the role of TRI in radiative emission is more important than in radiative absorption. The absorption coefficient-radiation intensity correlation is small, which supports the optically thin fluctuation approximation, and justifies the good predictions often achieved using the time-averaged RTE.

  14. Dynamic analysis of isotropic nanoplates subjected to moving load using state-space method based on nonlocal second order plate theory

    Energy Technology Data Exchange (ETDEWEB)

    Nami, Mohammad Rahim [Shiraz University, Shiraz, Iran (Iran, Islamic Republic of); Janghorban, Maziar [Islamic Azad University, Marvdash (Iran, Islamic Republic of)

    2015-06-15

    In this work, dynamic analysis of rectangular nanoplates subjected to a moving load is presented. In order to derive the governing equations of motion, second-order plate theory is used. To capture the small-scale effects, the nonlocal elasticity theory is adopted. It is assumed that the nanoplate is subjected to a moving concentrated load with a constant velocity V in the x direction. To solve the governing equations, the state-space method is used to find the deflections of the rectangular nanoplate under the moving load. The results obtained here reveal that the nonlocality has a significant effect on the deflection of a rectangular nanoplate subjected to a moving load.

  15. CMS Computing Software and Analysis Challenge 2006

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [Dipartimento interateneo di Fisica M. Merlin and INFN Bari, Via Amendola 173, 70126 Bari (Italy)

    2007-10-15

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, like the prompt reconstruction, the data streaming, calibration and alignment iterative executions, the data distribution to regional sites, up to the end-user analysis. Grid tools provided by the LCG project are also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and the analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  16. CMS Computing Software and Analysis Challenge 2006

    International Nuclear Information System (INIS)

    De Filippis, N.

    2007-01-01

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, like the prompt reconstruction, the data streaming, calibration and alignment iterative executions, the data distribution to regional sites, up to the end-user analysis. Grid tools provided by the LCG project are also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and the analysis jobs. An overview of the status and results of the CSA06 is presented in this work

  17. Geometrically nonlinear dynamic analysis of doubly curved isotropic shells resting on elastic foundation by a combination of harmonic differential quadrature-finite difference methods

    International Nuclear Information System (INIS)

    Civalek, Oemer

    2005-01-01

    The nonlinear dynamic response of doubly curved shallow shells resting on Winkler-Pasternak elastic foundation has been studied for step and sinusoidal loadings. Dynamic analogues of Von Karman-Donnel type shell equations are used. Clamped immovable and simply supported immovable boundary conditions are considered. The governing nonlinear partial differential equations of the shell are discretized in space and time domains using the harmonic differential quadrature (HDQ) and finite differences (FD) methods, respectively. The accuracy of the proposed HDQ-FD coupled methodology is demonstrated by numerical examples. The shear parameter G of the Pasternak foundation and the stiffness parameter K of the Winkler foundation have been found to have a significant influence on the dynamic response of the shell. It is concluded from the present study that the HDQ-FD methodology is a simple, efficient, and accurate method for the nonlinear analysis of doubly curved shallow shells resting on two-parameter elastic foundation

  18. Isotropic Growth of Graphene toward Smoothing Stitching.

    Science.gov (United States)

    Zeng, Mengqi; Tan, Lifang; Wang, Lingxiang; Mendes, Rafael G; Qin, Zhihui; Huang, Yaxin; Zhang, Tao; Fang, Liwen; Zhang, Yanfeng; Yue, Shuanglin; Rümmeli, Mark H; Peng, Lianmao; Liu, Zhongfan; Chen, Shengli; Fu, Lei

    2016-07-26

    The quality of graphene grown via chemical vapor deposition still falls far short of its theoretical properties due to the inevitable formation of grain boundaries. The design of a single-crystal substrate with an anisotropic twofold symmetry for the unidirectional alignment of graphene seeds would be a promising way to eliminate the grain boundaries at the wafer scale. However, such a delicate process is easily terminated by the obstruction of defects or impurities. Here we investigated the isotropic growth behavior of graphene single crystals by melting the growth substrate to obtain an amorphous isotropic surface, which does not offer any specific grain orientation induction or preponderant growth rate toward a certain direction in the graphene growth process. The as-obtained graphene grains are isotropically round with mixed edges that exhibit high activity. The orientation of adjacent grains can easily self-adjust to smoothly match each other over a liquid catalyst with facile atom delocalization, owing to the low rotational steric hindrance of the isotropic grains, thus achieving smooth stitching of the adjacent graphene. Therefore, the adverse effects of grain boundaries are eliminated and the excellent transport performance of graphene is better guaranteed. What is more, such an isotropic growth mode can be extended to other types of layered nanomaterials, such as hexagonal boron nitride and transition metal chalcogenides, for obtaining large-size intrinsic films with few defects.

  19. In-Situ Characterization of Isotropic and Transversely Isotropic Elastic Properties Using Ultrasonic Wave Velocities

    NARCIS (Netherlands)

    Pant, S; Laliberte, J; Martinez, M.J.; Rocha, B.

    2016-01-01

    In this paper, a one-sided, in situ method based on the time of flight measurement of ultrasonic waves was described. The primary application of this technique was to non-destructively measure the stiffness properties of isotropic and transversely isotropic materials. The method consists of

  20. Introduction to scientific computing and data analysis

    CERN Document Server

    Holmes, Mark H

    2016-01-01

    This textbook provides an introduction to numerical computing and its applications in science and engineering. The topics covered include those usually found in an introductory course, as well as those that arise in data analysis. This includes optimization and regression based methods using a singular value decomposition. The emphasis is on problem solving, and there are numerous exercises throughout the text concerning applications in engineering and science. The essential role of the mathematical theory underlying the methods is also considered, both for understanding how the method works, as well as how the error in the computation depends on the method being used. The MATLAB codes used to produce most of the figures and data tables in the text are available on the author’s website and SpringerLink.

  1. Aerodynamic analysis of Pegasus - Computations vs reality

    Science.gov (United States)

    Mendenhall, Michael R.; Lesieutre, Daniel J.; Whittaker, C. H.; Curry, Robert E.; Moulton, Bryan

    1993-01-01

    Pegasus, a three-stage, air-launched, winged space booster was developed to provide fast and efficient commercial launch services for small satellites. The aerodynamic design and analysis of Pegasus was conducted without benefit of wind tunnel tests using only computational aerodynamic and fluid dynamic methods. Flight test data from the first two operational flights of Pegasus are now available, and they provide an opportunity to validate the accuracy of the predicted pre-flight aerodynamic characteristics. Comparisons of measured and predicted flight characteristics are presented and discussed. Results show that the computational methods provide reasonable aerodynamic design information with acceptable margins. Post-flight analyses illustrate certain areas in which improvements are desired.

  2. Isotropic-nematic transition in a mixture of hard spheres and hard spherocylinders: scaled particle theory description

    Directory of Open Access Journals (Sweden)

    M.F. Holovko

    2017-12-01

    The scaled particle theory is developed for the description of thermodynamical properties of a mixture of hard spheres and hard spherocylinders. Analytical expressions for free energy, pressure and chemical potentials are derived. From the minimization of free energy, a nonlinear integral equation for the orientational singlet distribution function is formulated. An isotropic-nematic phase transition in this mixture is investigated from the bifurcation analysis of this equation. It is shown that with an increase of concentration of hard spheres, the total packing fraction of a mixture on phase boundaries slightly increases. The obtained results are compared with computer simulation data.

  3. Texture of low temperature isotropic pyrocarbons

    International Nuclear Information System (INIS)

    Pelissier, Joseph; Lombard, Louis.

    1976-01-01

    Isotropic pyrocarbon deposited on fuel particles was studied by transmission electron microscopy in order to determine its texture. The material consists of an agglomerate of spherical growth features similar to those of carbon black. The spherical growth features are formed from the crystallites of turbostratic carbon, and their distribution gives an isotropic structure. Neutron irradiation modifies the morphology of the pyrocarbon. The spherical growth features are deformed and the coating becomes strongly anisotropic. The transformation leads to the rupture of the coating under strong irradiation doses [fr

  4. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, but using neutrons as the penetrating particles, there is in practice a nondestructive technique named neutron radiology. When the registration of information is done on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β,γ) that creates a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must be subsequently analyzed to obtain qualitative and quantitative information about the structural integrity of that object. It is possible to do a computed analysis of a film using a facility with the following main components: an illuminator for the film, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, the irradiation activity in the case of the nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions, optical densities. The illuminator has been built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is a comparator of the Abbe Carl Zeiss Jena type, which has been adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, has been developed at INR Pitesti, which, in addition to the program SMTV II of the special acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of some nuclear fuel pins alongside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)
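
    A toy sketch of the dimensional measurement mentioned above (the internal workings of SIMAG-NG are not reproduced here): take an intensity profile along the pellet axis of a digitized radiograph, threshold it to locate the pellet edges, and convert the pixel span into a length using a scale that would be calibrated from the dimensional standard in the image; all numbers are hypothetical.

      import numpy as np

      def pellet_length(profile, threshold, mm_per_pixel):
          # span between the first and last pixel above the threshold
          above = np.flatnonzero(profile > threshold)
          if above.size == 0:
              return 0.0
          return (above[-1] - above[0] + 1) * mm_per_pixel

      # hypothetical 1-D optical-density profile along a fuel pin axis
      profile = np.zeros(500)
      profile[120:380] = 1.0                    # pellet region denser than the background
      profile += 0.05 * np.random.default_rng(0).random(500)
      print(pellet_length(profile, threshold=0.5, mm_per_pixel=0.05), "mm")  # about 13 mm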

  5. Social sciences via network analysis and computation

    CERN Document Server

    Kanduc, Tadej

    2015-01-01

    In recent years information and communication technologies have gained significant importance in the social sciences. Because there is such rapid growth of knowledge, methods and computer infrastructure, research can now seamlessly connect interdisciplinary fields such as business process management, data processing and mathematics. This study presents some of the latest results, practices and state-of-the-art approaches in network analysis, machine learning, data mining, data clustering and classification in the context of the social sciences. It also covers various real-life examples such as t

  6. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. The summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  7. Symbolic Computing in Probabilistic and Stochastic Analysis

    Directory of Open Access Journals (Sweden)

    Kamiński Marcin

    2015-12-01

    The main aim is to present recent developments in applications of symbolic computing in probabilistic and stochastic analysis, and this is done using the example of the well-known MAPLE system. The key theoretical methods discussed are (i) analytical derivations, (ii) the classical Monte-Carlo simulation approach, (iii) the stochastic perturbation technique, as well as (iv) some semi-analytical approaches. It is demonstrated in particular how to engage the basic symbolic tools implemented in any system to derive the basic equations for the stochastic perturbation technique and how to make an efficient implementation of the semi-analytical methods using the automatic differentiation and integration provided by the computer algebra program itself. The second important illustration is a probabilistic extension of the finite element and finite difference methods coded in MAPLE, showing how to solve boundary value problems with random parameters in the environment of symbolic computing. The response function method belongs to the third group, where the interfacing of classical deterministic software with the non-linear fitting numerical techniques available in various symbolic environments is displayed. We recover in this context the probabilistic structural response in engineering systems and show how to solve partial differential equations including Gaussian randomness in their coefficients.
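
    A small illustration of the stochastic perturbation step mentioned above, done here with the open-source SymPy system rather than MAPLE: for a response u(b) of a single random parameter b with mean b0 and variance sigma2, a second-order expansion gives E[u] ≈ u(b0) + (1/2) u''(b0) sigma2 and Var[u] ≈ (u'(b0))^2 sigma2; the response function 1/b is a hypothetical example (for instance a deflection inversely proportional to a random stiffness).

      import sympy as sp

      b, b0, s2 = sp.symbols('b b0 sigma2', positive=True)

      u = 1 / b                       # hypothetical response function of the random parameter b

      du = sp.diff(u, b)
      ddu = sp.diff(u, b, 2)

      # second-order stochastic perturbation estimates about the mean value b0
      mean_u = (u + sp.Rational(1, 2) * ddu * s2).subs(b, b0)
      var_u = (du**2 * s2).subs(b, b0)

      print(sp.simplify(mean_u))      # 1/b0 + sigma2/b0**3
      print(sp.simplify(var_u))       # sigma2/b0**4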

  8. A transversely isotropic medium with a tilted symmetry axis normal to the reflector

    KAUST Repository

    Alkhalifah, Tariq Ali

    2010-05-01

    The computational tools for imaging in transversely isotropic media with tilted axes of symmetry (TTI) are complex and in most cases do not have an explicit closed-form representation. Developing such tools for a TTI medium with tilt constrained to be normal to the reflector dip (DTI) reduces their complexity and allows for closed-form representations. The homogeneous-case zero-offset migration in such a medium can be performed using an isotropic operator scaled by the velocity of the medium in the tilt direction. For the nonzero-offset case, the reflection angle is always equal to the incidence angle, and thus, the velocities for the source and receiver waves at the reflection point are equal and explicitly dependent on the reflection angle. This fact allows for the development of explicit representations for angle decomposition as well as moveout formulas for analysis of extended images obtained by wave-equation migration. Although setting the tilt normal to the reflector dip may not be valid everywhere (i.e., on salt flanks), it can be used in the process of velocity model building, in which such constrains are useful and typically are used. © 2010 Society of Exploration Geophysicists.

  9. Apparent splitting of S waves propagating through an isotropic lowermost mantle

    KAUST Repository

    Parisi, Laura

    2018-03-24

    Observations of shear‐wave anisotropy are key for understanding the mineralogical structure and flow in the mantle. Several researchers have reported the presence of seismic anisotropy in the lowermost 150–250 km of the mantle (i.e., D” layer), based on differences in the arrival times of vertically (SV) and horizontally (SH) polarized shear waves. By computing waveforms at period > 6 s for a wide range of 1‐D and 3‐D Earth structures we illustrate that a time shift (i.e., apparent splitting) between SV and SH may appear in purely isotropic simulations. This may be misinterpreted as shear wave anisotropy. For near‐surface earthquakes, apparent shear wave splitting can result from the interference of S with the surface reflection sS. For deep earthquakes, apparent splitting can be due to the S‐wave triplication in D”, reflections off discontinuities in the upper mantle and 3‐D heterogeneity. The wave effects due to anomalous isotropic structure may not be easily distinguished from purely anisotropic effects if the analysis does not involve full waveform simulations.

  10. A transversely isotropic medium with a tilted symmetry axis normal to the reflector

    KAUST Repository

    Alkhalifah, Tariq Ali; Sava, Paul C.

    2010-01-01

    The computational tools for imaging in transversely isotropic media with tilted axes of symmetry (TTI) are complex and in most cases do not have an explicit closed-form representation. Developing such tools for a TTI medium with tilt constrained to be normal to the reflector dip (DTI) reduces their complexity and allows for closed-form representations. The homogeneous-case zero-offset migration in such a medium can be performed using an isotropic operator scaled by the velocity of the medium in the tilt direction. For the nonzero-offset case, the reflection angle is always equal to the incidence angle, and thus, the velocities for the source and receiver waves at the reflection point are equal and explicitly dependent on the reflection angle. This fact allows for the development of explicit representations for angle decomposition as well as moveout formulas for analysis of extended images obtained by wave-equation migration. Although setting the tilt normal to the reflector dip may not be valid everywhere (i.e., on salt flanks), it can be used in the process of velocity model building, in which such constrains are useful and typically are used. © 2010 Society of Exploration Geophysicists.

  11. Apparent splitting of S waves propagating through an isotropic lowermost mantle

    KAUST Repository

    Parisi, Laura; Ferreira, Ana M. G.; Ritsema, Jeroen

    2018-01-01

    Observations of shear‐wave anisotropy are key for understanding the mineralogical structure and flow in the mantle. Several researchers have reported the presence of seismic anisotropy in the lowermost 150–250 km of the mantle (i.e., D” layer), based on differences in the arrival times of vertically (SV) and horizontally (SH) polarized shear waves. By computing waveforms at period > 6 s for a wide range of 1‐D and 3‐D Earth structures we illustrate that a time shift (i.e., apparent splitting) between SV and SH may appear in purely isotropic simulations. This may be misinterpreted as shear wave anisotropy. For near‐surface earthquakes, apparent shear wave splitting can result from the interference of S with the surface reflection sS. For deep earthquakes, apparent splitting can be due to the S‐wave triplication in D”, reflections off discontinuities in the upper mantle and 3‐D heterogeneity. The wave effects due to anomalous isotropic structure may not be easily distinguished from purely anisotropic effects if the analysis does not involve full waveform simulations.

  12. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. The computer virus model is established. Through the analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis to control the spread of computer viruses.
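
    The abstract does not give the model equations, so the sketch below assumes a simple SIS-type model with recruitment of new computers at rate Lambda, removal at rate mu, infection rate beta and cure rate gamma; for this particular model the basic reproduction number is R0 = beta*Lambda/(mu*(gamma+mu)), and the simulation settles at the endemic equilibrium when R0 > 1. All rates are hypothetical.

      import numpy as np
      from scipy.integrate import odeint

      Lam, mu, beta, gamma = 5.0, 0.05, 0.002, 0.1      # hypothetical rates

      def virus_model(y, t):
          S, I = y
          dS = Lam - beta * S * I - mu * S + gamma * I  # susceptible computers
          dI = beta * S * I - (gamma + mu) * I          # infected computers
          return [dS, dI]

      R0 = beta * Lam / (mu * (gamma + mu))
      print("R0 =", R0)                                 # > 1 here, so the virus persists

      t = np.linspace(0.0, 400.0, 2000)
      sol = odeint(virus_model, [Lam / mu - 1.0, 1.0], t)
      print("endemic number of infected computers:", sol[-1, 1])   # approaches 25 for these rates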

  13. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at the IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, and the KENO-IV code shows conservative results when the generalized geometry option is not used. (author)

  14. Computational advances in transition phase analysis

    International Nuclear Information System (INIS)

    Morita, K.; Kondo, S.; Tobita, Y.; Shirakawa, N.; Brear, D.J.; Fischer, E.A.

    1994-01-01

    In this paper, a historical perspective and recent advances in computational technologies for evaluating the transition phase of core disruptive accidents in liquid-metal fast reactors are reviewed. An analysis of the transition phase requires treatment of multi-phase multi-component thermohydraulics coupled with space- and energy-dependent neutron kinetics. Such a comprehensive modeling effort was initiated when the program of SIMMER-series computer code development was started in the late 1970s in the USA. Successful applications of the latest SIMMER-II in the USA, western Europe and Japan have proved its effectiveness, but, at the same time, several areas that require further research have been identified. Based on the experience and lessons learned during the SIMMER-II application through the 1980s, a new project of SIMMER-III development is underway at the Power Reactor and Nuclear Fuel Development Corporation (PNC), Japan. The models and methods of SIMMER-III are briefly described with emphasis on recent advances in multi-phase multi-component fluid dynamics technologies and their expected implications for a future reliable transition phase analysis. (author)

  15. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  16. Wave propagation in isotropic- or composite-material piping conveying swirling liquid

    International Nuclear Information System (INIS)

    Chen, T.L.C.; Bert, C.W.

    1977-01-01

    An analysis is presented for the propagation of free harmonic waves in a thin-walled, circular cylindrical shell of orthotropic or isotropic material conveying a swirling flow. The shell motion is modeled by using the dynamic orthotropic version of the Sanders improved first-approximation linear shell theory and the fluid forces are described by using inviscid incompressible flow theory. Frequency spectra are presented for pipes made of isotropic material and composite materials of current engineering interest. (Auth.)

  17. Computational Analysis of Human Blood Flow

    Science.gov (United States)

    Panta, Yogendra; Marie, Hazel; Harvey, Mark

    2009-11-01

    Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed assuming the blood flow to be laminar with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite-volume software package, coupled with Solidworks, a modeling software package, was employed for the preprocessing, simulation and postprocessing of all the models. The analysis mainly consists of a fluid-dynamics analysis, including a calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other models (e.g., T-branches and angled geometries) were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. The three-dimensional, time-dependent analysis of blood flow accounting for the effect of body forces with a compliant boundary was also performed.

  18. Geometry of isotropic convex bodies

    CERN Document Server

    Brazitikos, Silouanos; Valettas, Petros; Vritsiou, Beatrice-Helen

    2014-01-01

    The study of high-dimensional convex bodies from a geometric and analytic point of view, with an emphasis on the dependence of various parameters on the dimension stands at the intersection of classical convex geometry and the local theory of Banach spaces. It is also closely linked to many other fields, such as probability theory, partial differential equations, Riemannian geometry, harmonic analysis and combinatorics. It is now understood that the convexity assumption forces most of the volume of a high-dimensional convex body to be concentrated in some canonical way and the main question is whether, under some natural normalization, the answer to many fundamental questions should be independent of the dimension. The aim of this book is to introduce a number of well-known questions regarding the distribution of volume in high-dimensional convex bodies, which are exactly of this nature: among them are the slicing problem, the thin shell conjecture and the Kannan-Lovász-Simonovits conjecture. This book prov...

  19. Software development for specific geometry and safe design of isotropic material multicell beams

    International Nuclear Information System (INIS)

    Tariq, M.M.; Ahmed, M.A.

    2011-01-01

    Comparison of analytical results with finite element results for the analysis of isotropic-material multicell beams subjected to free torsion is the main idea of this paper. Progress in the fundamentals and applications of advanced materials and their processing technologies involves costly experiments and prototype testing for reliability. Software development for the design analysis of structures with advanced materials is low-cost but challenging research. Multicell beams have important industrial applications in the aerospace and automotive sectors. This paper describes software developed to test different materials in the design of a multicell beam. The objective of this paper is to compute the torsional loading of multicell beams of isotropic materials for safe design in both symmetrical and asymmetrical geometries. The software has been developed in Microsoft Visual Basic. The distributions of Saint Venant shear flows, shear stresses, factors of safety, volume, mass, weight, twist, polar moment of inertia and aspect ratio for free torsion in a multicell beam are calculated with this software. The software works on four algorithms: a specific geometry algorithm, a material selection algorithm, a factor of safety algorithm and a global algorithm. The user can specify new materials analytically, or choose a pre-defined material from a list that includes plain carbon steels, low alloy steels, stainless steels, cast irons, aluminum alloys, copper alloys, magnesium alloys, titanium alloys, precious metals and refractory metals. Although this software is restricted to multicell beams comprising three cells, future versions can be extended to more complicated shapes and cases of multicell beams. The software also presents the nomenclature and mathematical formulas applied, to help the user understand the theoretical background. The user can specify the geometry of a multicell beam with three rectangular cells. The software computes shear flows, shear stresses, safety factors
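
    The free-torsion calculation that such a program performs can be sketched for a three-cell thin-walled section (this is not the Visual Basic code itself): with cell areas A_i and wall flexibilities a_ij given by the integral of ds/t (the full cell perimeter for i = j, minus the shared-wall contribution for i != j), equal twist rate in every cell plus the torque balance T = sum of 2 A_i q_i gives a linear system for the shear flows q_i and G*theta'; the section data below are hypothetical.

      import numpy as np

      A = np.array([2.0e3, 3.0e3, 2.5e3])      # enclosed cell areas, mm^2 (hypothetical)
      a = np.array([[120.0, -30.0,   0.0],     # a[i][i]: loop integral of ds/t around cell i
                    [-30.0, 150.0, -40.0],     # a[i][j]: minus ds/t of the wall shared by i and j
                    [  0.0, -40.0, 130.0]])
      T = 5.0e6                                # applied torque, N*mm (hypothetical)

      n = len(A)
      M = np.zeros((n + 1, n + 1))
      rhs = np.zeros(n + 1)
      M[:n, :n] = a                            # compatibility: sum_j a_ij q_j = 2 A_i G theta'
      M[:n, n] = -2.0 * A
      M[n, :n] = 2.0 * A                       # torque carried by all cells
      rhs[n] = T

      sol = np.linalg.solve(M, rhs)
      q, G_theta_rate = sol[:n], sol[n]
      print("shear flows q_i [N/mm]:", q)      # shear stress in an outer wall of thickness t is q_i/t
      print("G * rate of twist:", G_theta_rate)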

  20. Computer-aided Fault Tree Analysis

    International Nuclear Information System (INIS)

    Willie, R.R.

    1978-08-01

    A computer-oriented methodology for deriving minimal cut and path set families associated with arbitrary fault trees is discussed first. Then the use of the Fault Tree Analysis Program (FTAP), an extensive FORTRAN computer package that implements the methodology is described. An input fault tree to FTAP may specify the system state as any logical function of subsystem or component state variables or complements of these variables. When fault tree logical relations involve complements of state variables, the analyst may instruct FTAP to produce a family of prime implicants, a generalization of the minimal cut set concept. FTAP can also identify certain subsystems associated with the tree as system modules and provide a collection of minimal cut set families that essentially expresses the state of the system as a function of these module state variables. Another FTAP feature allows a subfamily to be obtained when the family of minimal cut sets or prime implicants is too large to be found in its entirety; this subfamily consists only of sets that are interesting to the analyst in a special sense
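
    A toy illustration of the minimal-cut-set computation that FTAP performs (FTAP itself is a FORTRAN package with far more capability, including prime implicants and module detection): a top-down expansion of an AND/OR tree followed by removal of non-minimal sets; the example tree is hypothetical.

      # fault tree as nested tuples; leaves are basic-event names
      TREE = ("OR", [
          ("AND", ["pump_A_fails", "pump_B_fails"]),
          ("AND", ["power_loss", ("OR", ["valve_stuck", "pump_B_fails"])]),
      ])

      def cut_sets(node):
          # return a list of cut sets (frozensets of basic events) for a gate or event
          if isinstance(node, str):
              return [frozenset([node])]
          gate, children = node
          child_sets = [cut_sets(c) for c in children]
          if gate == "OR":                     # OR gate: union of the children's cut sets
              return [s for sets in child_sets for s in sets]
          result = [frozenset()]               # AND gate: cross product of the children's cut sets
          for sets in child_sets:
              result = [a | b for a in result for b in sets]
          return result

      def minimize(sets):
          # drop any cut set that strictly contains another one
          return [s for s in sets if not any(other < s for other in sets)]

      for cs in minimize(cut_sets(TREE)):
          print(sorted(cs))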

  1. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    International Nuclear Information System (INIS)

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-01-01

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binary Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically
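
    A minimal sketch of the fitting step, using the statsmodels library rather than the authors' own implementation: fit an ARMA(p, p-1) model to a per-cycle time series (here a synthetic AR(1) series whose pole plays the role of the dominance ratio) and read the estimate off the largest autoregressive pole; the series, the choice p = 1 and the use of statsmodels are all assumptions for illustration.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(1)

      # synthetic stand-in for a binned fission-source tally over 5000 cycles:
      # an AR(1) process whose coefficient mimics a dominance ratio of 0.9
      rho_true, n_cycles = 0.9, 5000
      x = np.zeros(n_cycles)
      for i in range(1, n_cycles):
          x[i] = rho_true * x[i - 1] + rng.normal()

      p = 1
      res = ARIMA(x, order=(p, 0, p - 1)).fit()     # ARMA(p, p-1), as suggested by the theory
      poles = 1.0 / np.abs(res.arroots)             # arroots are the roots of the AR polynomial
      print("estimated dominance ratio:", poles.max())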

  2. Computational Analysis of Material Flow During Friction Stir Welding of AA5059 Aluminum Alloys

    Science.gov (United States)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Ochterbeck, J. M.; Yen, C.-F.; Cheeseman, B. A.; Reynolds, A. P.; Sutton, M. A.

    2012-09-01

    Workpiece material flow and stirring/mixing during the friction stir welding (FSW) process are investigated computationally. Within the numerical model of the FSW process, the FSW tool is treated as a Lagrangian component while the workpiece material is treated as an Eulerian component. The employed coupled Eulerian/Lagrangian computational analysis of the welding process was of a two-way thermo-mechanical character (i.e., frictional-sliding/plastic-work dissipation is taken to act as a heat source in the thermal-energy balance equation) while temperature is allowed to affect mechanical aspects of the model through temperature-dependent material properties. The workpiece material (AA5059, a solid-solution strengthened and strain-hardened aluminum alloy) is represented using a modified version of the classical Johnson-Cook model (within which the strain-hardening term is augmented to take into account the effect of dynamic recrystallization) while the FSW tool material (AISI H13 tool steel) is modeled as an isotropic linear-elastic material. Within the analysis, the effects of some of the FSW key process parameters are investigated (e.g., weld pitch, tool tilt-angle, and the tool pin-size). The results pertaining to the material flow during FSW are compared with their experimental counterparts. It is found that, for the most part, experimentally observed material-flow characteristics are reproduced within the current FSW-process model.

  3. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  4. Analysis on the security of cloud computing

    Science.gov (United States)

    He, Zhonglin; He, Yuhua

    2011-02-01

    Cloud computing is a new technology arising from the fusion of computer technology and Internet development. It is expected to lead a revolution in IT and the information field. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy, resulting in security problems that make it difficult to improve the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspectives of cloud computing users and service providers.

  5. REDUCED ISOTROPIC CRYSTAL MODEL WITH RESPECT TO THE FOURTH-ORDER ELASTIC MODULI

    Directory of Open Access Journals (Sweden)

    O. Burlayenko

    2018-04-01

    Using a reduced isotropic crystal model, the relationship between the fourth-order elastic moduli of an isotropic medium and the independent components of the fourth-order elastic moduli tensor of real crystals of various crystal systems is found. To calculate the coefficients of these relations, the computer algebra systems Redberry and Mathematica, which allow work with high-order tensors in symbolic and explicit form, were used because of the complexity of the computation. In an isotropic medium, there are four independent fourth-order elastic moduli. This is due to the presence of four invariants for an eighth-rank tensor in three-dimensional space that has symmetries over pairs of indices. As an example, the moduli of elasticity of an isotropic medium corresponding to certain crystals of the cubic system are given (LiF, NaCl, MgO, CaF2). From the obtained results it can be seen that the reduced isotropic crystal model can be most effectively applied to high-symmetry crystal systems.

  6. Incremental ALARA cost/benefit computer analysis

    International Nuclear Information System (INIS)

    Hamby, P.

    1987-01-01

    Commonwealth Edison Company has developed and is testing an enhanced Fortran computer program to be used for cost/benefit analysis of radiation reduction projects at its six nuclear power facilities and corporate technical support groups. This paper describes a macro-driven IBM mainframe program comprising two types of analyses: an abbreviated program with fixed costs and base values, and an extended engineering version for a detailed, more thorough, and time-consuming approach. The extended engineering version breaks radiation exposure costs down into two components: health-related costs and replacement labor costs. According to user input, the program automatically adjusts these two cost components and applies the results to company economic analyses such as replacement power costs, carrying charges, debt interest, and capital investment cost. The results from one or more program runs using different parameters may be compared in order to determine the most appropriate ALARA dose reduction technique. Benefits of this particular cost/benefit analysis technique include the flexibility to accommodate a wide range of user data and pre-job preparation, as well as the use of proven and standardized company economic equations
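
    A minimal sketch of the kind of benefit/cost comparison such a program performs is given below; the function name, the dollar-per-person-rem values, and the candidate projects are all illustrative assumptions, not Commonwealth Edison's actual figures or equations.

```python
# Minimal sketch of an ALARA cost/benefit comparison of dose-reduction projects.
# All parameter names and dollar values are illustrative assumptions, not the
# figures or equations used in the program described above.

def alara_benefit_cost(dose_averted_person_rem,
                       project_cost,
                       health_cost_per_person_rem=2000.0,        # assumed value
                       replacement_labor_per_person_rem=1000.0):  # assumed value
    """Return the benefit/cost ratio of a radiation-reduction project."""
    avoided_cost = dose_averted_person_rem * (
        health_cost_per_person_rem + replacement_labor_per_person_rem)
    return avoided_cost / project_cost

# Compare several hypothetical dose-reduction techniques for the same task.
candidates = {
    "extra shielding": {"dose_averted": 12.0, "cost": 25000.0},
    "remote tooling":  {"dose_averted": 20.0, "cost": 80000.0},
    "pre-job mock-up": {"dose_averted": 6.0,  "cost": 8000.0},
}

for name, c in candidates.items():
    ratio = alara_benefit_cost(c["dose_averted"], c["cost"])
    print(f"{name:>16s}: benefit/cost = {ratio:.2f}")
```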

  7. Computing in Qualitative Analysis: A Healthy Development?

    Science.gov (United States)

    Richards, Lyn; Richards, Tom

    1991-01-01

    Discusses the potential impact of computers in qualitative health research. Describes the original goals, design, and implementation of NUDIST, a qualitative computing software package. Argues for evaluation of the impact of computer techniques and for opening a debate among program developers and users to address the purposes and power of computing…

  8. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is at a disadvantage compared to traditional hosting when it comes to providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
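
    The flavor of such a discrete-event simulation can be conveyed with a minimal sketch: requests with exponentially distributed inter-arrival and service times are dispatched to a small pool of virtual servers, and the mean waiting time is reported. The rates, server count, and FIFO dispatch rule below are assumptions for illustration, not the model described in the abstract.

```python
# Minimal discrete-event sketch of service requests arriving at a pool of
# virtual servers. Arrival/service rates and server count are illustrative.
import heapq
import random

def simulate(n_servers=4, arrival_rate=3.0, service_rate=1.0,
             n_requests=10_000, seed=1):
    random.seed(seed)
    free_at = [0.0] * n_servers          # time at which each server becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)       # next request arrives
        server_free = heapq.heappop(free_at)        # earliest-available server
        start = max(t, server_free)                 # wait if all servers busy
        waits.append(start - t)
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    return sum(waits) / len(waits)

print(f"mean wait: {simulate():.3f} time units")
```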

  9. Depression of nonlinearity in decaying isotropic turbulence

    International Nuclear Information System (INIS)

    Kraichnan, R.H.; Panda, R.

    1988-01-01

    Simulations of decaying isotropic Navier--Stokes turbulence exhibit depression of the normalized mean-square nonlinear term to 57% of the value for a Gaussianly distributed velocity field with the same instantaneous velocity spectrum. Similar depression is found for dynamical models with random coupling coefficients (modified Betchov models). This suggests that the depression is dynamically generic rather than specifically driven by alignment of velocity and vorticity

  10. Can cloud computing benefit health services? - a SWOT analysis.

    Science.gov (United States)

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  11. New criteria for isotropic and textured metals

    Science.gov (United States)

    Cazacu, Oana

    2018-05-01

    In this paper an isotropic yield criterion expressed in terms of both invariants of the stress deviator, J2 and J3, is proposed. This criterion involves a single parameter, α, which depends only on the ratio between the yield stresses in uniaxial tension and pure shear. If α is zero, the von Mises yield criterion is recovered; if α is positive, the yield surface is interior to the von Mises yield surface, whereas when α is negative, the new yield surface is exterior to it. Comparison with polycrystalline calculations using the Taylor-Bishop-Hill model [1] for randomly oriented face-centered cubic (FCC) polycrystalline metallic materials shows that this new criterion captures the numerical yield points well. Furthermore, the criterion reproduces yielding under combined tension-shear loadings well for a variety of isotropic materials. An extension of this isotropic yield criterion to account for orthotropy in yielding is developed using the generalized invariants approach of Cazacu and Barlat [2]. This new orthotropic criterion is general and applicable to three-dimensional stress states. The procedure for identifying the material parameters is outlined. The predictive capabilities of the new orthotropic criterion are demonstrated through comparison between the model predictions and data on aluminum sheet samples.
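
    As an illustration of how a criterion of this type can be evaluated, the sketch below computes the deviatoric invariants J2 and J3 from a Cauchy stress tensor and evaluates a yield function of the generic form J2^(3/2) - c*J3; this specific form and the parameter c are assumptions for illustration only, not the paper's exact expression involving α.

```python
# Sketch of evaluating an isotropic yield criterion written in terms of the
# stress-deviator invariants J2 and J3. The form f = J2**1.5 - c*J3 is an
# illustrative assumption; c = 0 recovers the von Mises (J2-only) shape.
import numpy as np

def deviator_invariants(sigma):
    """Return (J2, J3) of a 3x3 Cauchy stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # stress deviator
    J2 = 0.5 * np.tensordot(s, s)                   # (1/2) s_ij s_ij
    J3 = np.linalg.det(s)                           # det of the deviator
    return J2, J3

def yield_function(sigma, c=0.0):
    J2, J3 = deviator_invariants(sigma)
    return J2**1.5 - c * J3

# Evaluate the function for uniaxial tension and for pure shear states.
tension = np.diag([100.0, 0.0, 0.0])
shear = np.array([[0.0, 60.0, 0.0], [60.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
for name, s in [("tension", tension), ("shear", shear)]:
    print(name, yield_function(s, c=1.0))
```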

  12. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, ISPRA, on the use of computer codes for complex systems analysis. The computer codes dealt with are the CAFTS-SALP software package, FRANTIC, FTAP, the RALLY computer code package, and the BOUNDS codes. Two reference study cases were executed with each code. The results obtained from the logic/probabilistic analyses, as well as the computation times, are compared

  13. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  14. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  15. Computational intelligence for big data analysis frontier advances and applications

    CERN Document Server

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements in big data analysis, cloud computing, and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and their applications in handling real-life problems. The applications are mostly drawn from real-life situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques and some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the area of big data analysis and cloud computing.

  16. Numerical implementation of a transverse-isotropic inelastic, work-hardening constitutive model

    International Nuclear Information System (INIS)

    Baladi, G.Y.

    1977-01-01

    During the past few decades the dramatic growth of computer technology has been paralleled by an increasing degree of complexity in material constitutive modeling. This paper documents the numerical implementation of one of these models, specifically a transverse-isotropic, inelastic, work-hardening constitutive model which is developed elsewhere by the author. (Auth.)

  17. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil

    2016-10-12

    The horizontal transversely isotropic model, with arbitrary symmetry axis orientation, is the simplest effective representative that explains the azimuthal behaviour of seismic data. Estimating the anisotropy parameters of this model is important in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous elliptic background model (which might be obtained from velocity analysis and well velocities). This is accomplished through a Taylor's series expansion of the travel-time solution (of the eikonal equation) as a function of parameter η and azimuth angle ϕ. The accuracy of the travel time expansion is enhanced by the use of the Shanks transform. This results in an accurate approximation of the solution of the non-linear eikonal equation and provides a mechanism to scan simultaneously for the best fitting effective parameters η and ϕ, without the need for repetitive modelling of travel times. The analysis of the travel time sensitivity to parameters η and ϕ reveals that travel times are more sensitive to η than to the symmetry axis azimuth ϕ. Thus, η is better constrained from travel times than the azimuth. Moreover, the two-parameter scan in the homogeneous case shows that errors in the background model affect the estimation of η and ϕ differently. While a gradual increase in errors in the background model leads to increasing errors in η, inaccuracies in ϕ, on the other hand, depend on the background model errors. We also propose a layer-stripping method valid for a stack of arbitrarily oriented symmetry axis horizontal transversely isotropic layers to convert the effective parameters to the interval layer values.
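
    The Shanks transform mentioned above is a standard series-acceleration device; the sketch below applies it to the partial sums of a slowly converging alternating series as a stand-in for the truncated travel-time expansion in η.

```python
# Sketch of the Shanks transform used to accelerate convergence of a truncated
# series. The series below (partial sums of ln 2) is only a stand-in for the
# travel-time expansion discussed in the abstract.
def shanks(a_prev, a_curr, a_next):
    """Shanks transform of three consecutive partial sums."""
    denom = a_next - 2.0 * a_curr + a_prev
    return a_next - (a_next - a_curr) ** 2 / denom

# Partial sums of the alternating series ln 2 = 1 - 1/2 + 1/3 - ...
partial, s = [], 0.0
for n in range(1, 8):
    s += (-1) ** (n + 1) / n
    partial.append(s)

print("plain 7-term sum  :", partial[-1])
print("Shanks accelerated:", shanks(partial[-3], partial[-2], partial[-1]))
# The transformed value is much closer to ln 2 ≈ 0.693147 than the plain sum.
```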

  18. Computational systems analysis of dopamine metabolism.

    Directory of Open Access Journals (Sweden)

    Zhen Qi

    2008-06-01

    Full Text Available A prominent feature of Parkinson's disease (PD is the loss of dopamine in the striatum, and many therapeutic interventions for the disease are aimed at restoring dopamine signaling. Dopamine signaling includes the synthesis, storage, release, and recycling of dopamine in the presynaptic terminal and activation of pre- and post-synaptic receptors and various downstream signaling cascades. As an aid that might facilitate our understanding of dopamine dynamics in the pathogenesis and treatment in PD, we have begun to merge currently available information and expert knowledge regarding presynaptic dopamine homeostasis into a computational model, following the guidelines of biochemical systems theory. After subjecting our model to mathematical diagnosis and analysis, we made direct comparisons between model predictions and experimental observations and found that the model exhibited a high degree of predictive capacity with respect to genetic and pharmacological changes in gene expression or function. Our results suggest potential approaches to restoring the dopamine imbalance and the associated generation of oxidative stress. While the proposed model of dopamine metabolism is preliminary, future extensions and refinements may eventually serve as an in silico platform for prescreening potential therapeutics, identifying immediate side effects, screening for biomarkers, and assessing the impact of risk factors of the disease.

  19. Computational Analysis of Pharmacokinetic Behavior of Ampicillin

    Directory of Open Access Journals (Sweden)

    Mária Ďurišová

    2016-07-01

    Full Text Available The objective of this study was to perform a computational analysis of the pharmacokinetic behavior of ampicillin, using data from the literature. A method based on the theory of dynamic systems was used for modeling purposes. The method was introduced to pharmacokinetics with the aim of contributing to its knowledge base by providing a modeling approach that enables researchers to develop mathematical models of various pharmacokinetic processes in an identical way, using identical model structures. A few examples of the successful use of the modeling method considered here can be found in full-text articles available free of charge at the author's website, and in the example given in this study. The modeling method employed in this study can be used to develop a mathematical model of the pharmacokinetic behavior of any drug, under the condition that the pharmacokinetic behavior of the drug under study can be at least partially approximated using linear models.

  20. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  1. Numerical implementation of a transverse-isotropic inelastic, work-hardening constitutive model

    International Nuclear Information System (INIS)

    Baladi, G.Y.

    1978-01-01

    The numerical implementation of a transverse-isotropic inelastic, work-hardening plastic constitutive model is documented. A brief review of the model is presented first to facilitate the understanding of its numerical implementation. This model is formulated in terms of 'pseudo' stress invariants, so that the incremental stress-strain relationship can be readily incorporated into existing finite-difference or finite-element computer codes. The anisotropic model reduces to its isotropic counterpart without any changes in the mathematical formulation or in the numerical implementation (algorithm) of the model. A typical example of the model and its behavior in uniaxial strain and triaxial compression is presented. (Auth.)

  2. Codesign Analysis of a Computer Graphics Application

    DEFF Research Database (Denmark)

    Madsen, Jan; Brage, Jens P.

    1996-01-01

    This paper describes a codesign case study where a computer graphics application is examined with the intention to speed up its execution. The application is specified as a C program, and is characterized by the lack of a simple compute-intensive kernel. The hardware/software partitioning is based...

  3. Architectural analysis for wirelessly powered computing platforms

    NARCIS (Netherlands)

    Kapoor, A.; Pineda de Gyvez, J.

    2013-01-01

    We present a design framework for wirelessly powered generic computing platforms that takes into account various system parameters in response to a time-varying energy source. These parameters are the charging profile of the energy source, computing speed (fclk), digital supply voltage (VDD), energy

  4. Computational Intelligence in Intelligent Data Analysis

    CERN Document Server

    Nürnberger, Andreas

    2013-01-01

    Complex systems and their phenomena are ubiquitous as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to mathematically or analytically solve them completely at low cost. On the other hand, nature already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. As only a few researchers in that field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On occasion of his 60th birthday, a collection of original papers of leading researchers in the field of computational intell...

  5. Computer vision syndrome (CVS) - Thermographic Analysis

    Science.gov (United States)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in recent decades; the possibility of carrying out many tasks for both professional and leisure purposes has contributed to their wide acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great strain, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are blurred vision, visual fatigue, and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variations of healthy ocular surfaces.

  6. Analysis of Computer Network Information Based on "Big Data"

    Science.gov (United States)

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives; people use computers for convenience in their daily lives, but at the same time there are many network information security problems that require attention. This paper analyzes the information security of computer networks based on "big data" analysis, and puts forward some solutions.

  7. Calculated isotropic Raman spectra from interacting H2-rare-gas pairs

    International Nuclear Information System (INIS)

    Gustafsson, M; Głaz, W; Bancewicz, T; Godet, J-L; Maroulis, G; Haskapoulos, A

    2014-01-01

    We report on a theoretical study of the H2-He and H2-Ar pair trace-polarizability and the corresponding isotropic Raman spectra. The conventional quantum mechanical approach for calculations of interaction-induced spectra, which is based on an isotropic interaction potential, is employed. This is compared with a close-coupling approach, which allows for inclusion of the full, anisotropic potential. It is established that the anisotropy of the potential plays a minor role for these spectra. The computed isotropic collision-induced Raman intensity, which is due to dissimilar pairs in H2-He and H2-Ar gas mixtures, is comparable to the intensities due to similar pairs (H2-H2, He-He, and Ar-Ar), which have been studied previously

  8. An Analysis of 27 Years of Research into Computer Education Published in Australian Educational Computing

    Science.gov (United States)

    Zagami, Jason

    2015-01-01

    Analysis of three decades of publications in Australian Educational Computing (AEC) provides insight into the historical trends in Australian educational computing, highlighting an emphasis on pedagogy, comparatively few articles on educational technologies, and strong research topic alignment with similar international journals. Analysis confirms…

  9. Manipulation of surface plasmon polariton propagation on isotropic and anisotropic two-dimensional materials coupled to boron nitride heterostructures

    Energy Technology Data Exchange (ETDEWEB)

    Inampudi, Sandeep; Nazari, Mina; Forouzmand, Ali; Mosallaei, Hossein, E-mail: hosseinm@coe.neu.edu [Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave., Boston, Massachusetts 02115 (United States)

    2016-01-14

    We present a comprehensive analysis of surface plasmon polariton dispersion characteristics associated with isotropic and anisotropic two-dimensional atomically thin layered materials (2D sheets) coupled to h-BN heterostructures. A scattering-matrix-based approach is presented to compute the electromagnetic fields and related dispersion characteristics of stacked layered systems composed of anisotropic 2D sheets and uniaxial bulk materials. We analyze specifically the surface plasmon polariton (SPP) dispersion characteristics in the case of isolated and coupled two-dimensional layers with isotropic and anisotropic conductivities. An analysis based on the residue theorem is utilized to identify optimum optical parameters (surface conductivity) and geometrical parameters (separation between layers) to maximize the SPP field at a given position. The effect of the type and degree of anisotropy on the shapes of iso-frequency curves and propagation characteristics is discussed in detail. The analysis presented in this paper gives insight into identifying the optimum setup to enhance the SPP field at a given position and in a given direction on the surface of two-dimensional materials.

  10. Interbasis expansions for isotropic harmonic oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Shi-Hai, E-mail: dongsh2@yahoo.com [Departamento de Física, Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, Edificio 9, Unidad Profesional Adolfo López Mateos, Mexico D.F. 07738 (Mexico)

    2012-03-12

    The exact solutions of the isotropic harmonic oscillator are reviewed in Cartesian, cylindrical polar, and spherical coordinates. The problem of interbasis expansions of the eigenfunctions is solved completely. The explicit expansion coefficients of the basis for a given coordinate system in terms of the other two are presented for the lower excited states. Such a property occurs only for the degenerate states with a given principal quantum number n. -- Highlights: ► Exact solutions of the harmonic oscillator are reviewed in three coordinate systems. ► Interbasis expansions of the eigenfunctions are solved completely. ► This occurs only for degenerate states with a given quantum number n.
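
    The degeneracy structure that makes these interbasis expansions possible is easy to verify numerically: for the 3D isotropic oscillator, the number of Cartesian states (nx, ny, nz) with nx + ny + nz = n equals (n + 1)(n + 2)/2. The short check below is illustrative only.

```python
# Check of the degeneracy behind the interbasis expansions: Cartesian states
# (nx, ny, nz) with nx + ny + nz = n all share the energy E_n = (n + 3/2) hbar omega,
# and their count matches the closed form (n + 1)(n + 2) / 2.
def cartesian_degeneracy(n):
    # nz = n - nx - ny is fixed once nx and ny are chosen
    return sum(1 for nx in range(n + 1) for ny in range(n + 1 - nx))

for n in range(6):
    assert cartesian_degeneracy(n) == (n + 1) * (n + 2) // 2
    print(f"n = {n}: degeneracy = {cartesian_degeneracy(n)}")
```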

  11. Gravitational instability in isotropic MHD plasma waves

    Science.gov (United States)

    Cherkos, Alemayehu Mengesha

    2018-04-01

    The effect of compressive viscosity, thermal conductivity, and radiative heat-loss functions on the gravitational instability of an infinitely extended homogeneous MHD plasma has been investigated. Taking these parameters into account, we developed a sixth-order dispersion relation for magnetohydrodynamic (MHD) waves propagating in a homogeneous and isotropic plasma. The general dispersion relation has been derived from a set of linearized basic equations and solved analytically to analyse the conditions of stability and instability of a self-gravitating plasma embedded in a constant magnetic field. Our result shows that the presence of viscosity and thermal conductivity in a strong magnetic field substantially modifies the fundamental Jeans criterion of gravitational instability.

  12. Computer science: Data analysis meets quantum physics

    Science.gov (United States)

    Schramm, Steven

    2017-10-01

    A technique that combines machine learning and quantum computing has been used to identify the particles known as Higgs bosons. The method could find applications in many areas of science. See Letter p.375

  13. Charged Particle Diffusion in Isotropic Random Magnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Subedi, P.; Matthaeus, W. H.; Chuychai, P.; Parashar, T. N.; Chhiber, R. [Department of Physics and Astronomy, University of Delaware, Newark, Delaware 19716 (United States); Sonsrettee, W. [Faculty of Engineering and Technology, Panyapiwat Institute of Management, Nonthaburi 11120 (Thailand); Blasi, P. [INAF/Osservatorio Astrofisico di Arcetri, Largo E. Fermi, 5—I-50125 Firenze (Italy); Ruffolo, D. [Department of Physics, Faculty of Science, Mahidol University, Bangkok 10400 (Thailand); Montgomery, D. [Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755 (United States); Dmitruk, P. [Departamento de Física Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires Ciudad Universitaria, 1428 Buenos Aires (Argentina); Wan, M. [Department of Mechanics and Aerospace Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055 (China)

    2017-03-10

    The investigation of the diffusive transport of charged particles in a turbulent magnetic field remains a subject of considerable interest. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here we consider the diffusion of charged particles in fully three-dimensional isotropic turbulent magnetic fields with no mean field, which may be pertinent to many astrophysical situations. We identify different ranges of particle energy depending upon the ratio of Larmor radius to the characteristic outer length scale of turbulence. Two different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical results are compared to those from computer simulations, showing good agreement.
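
    A running diffusion coefficient of the kind compared against theory above can be estimated from an ensemble of particle trajectories as D(t) = <|r(t) - r(0)|^2> / (6t). The sketch below applies this estimator to synthetic random-walk trajectories, which stand in for orbits integrated in a turbulent magnetic field.

```python
# Sketch of the running diffusion coefficient D(t) = <|r(t) - r(0)|^2> / (6 t)
# estimated from an ensemble of trajectories. The random-walk trajectories are
# placeholders for orbits in a turbulent field; only the estimator is the point.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 2000, 500, 0.01
steps = rng.normal(scale=np.sqrt(dt), size=(n_particles, n_steps, 3))
positions = np.cumsum(steps, axis=1)                  # r(t) - r(0)

msd = np.mean(np.sum(positions**2, axis=2), axis=0)   # mean-square displacement
t = dt * np.arange(1, n_steps + 1)
D_running = msd / (6.0 * t)
print("asymptotic D ≈", D_running[-1])                # ≈ 0.5 for this synthetic walk
```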

  14. Isotropic Surface Remeshing without Large and Small Angles

    KAUST Repository

    Wang, Yiqun; Yan, Dong-Ming; Liu, Xiaohan; Tang, Chengcheng; Guo, Jianwei; Zhang, Xiaopeng; Wonka, Peter

    2018-01-01

    We introduce a novel algorithm for isotropic surface remeshing which progressively eliminates obtuse triangles and improves small angles. The main novelty of the proposed approach is a simple vertex insertion scheme that facilitates the removal of large angles, and a vertex removal operation that improves the distribution of small angles. In combination with other standard local mesh operators, e.g., connectivity optimization and local tangential smoothing, our algorithm is able to remesh efficiently a low-quality mesh surface. Our approach can be applied directly or used as a post-processing step following other remeshing approaches. Our method has a similar computational efficiency to the fastest approach available, i.e., real-time adaptive remeshing [1]. In comparison with state-of-the-art approaches, our method consistently generates better results based on evaluations using different metrics.

  15. Isotropic Surface Remeshing without Large and Small Angles

    KAUST Repository

    Wang, Yiqun

    2018-05-18

    We introduce a novel algorithm for isotropic surface remeshing which progressively eliminates obtuse triangles and improves small angles. The main novelty of the proposed approach is a simple vertex insertion scheme that facilitates the removal of large angles, and a vertex removal operation that improves the distribution of small angles. In combination with other standard local mesh operators, e.g., connectivity optimization and local tangential smoothing, our algorithm is able to remesh efficiently a low-quality mesh surface. Our approach can be applied directly or used as a post-processing step following other remeshing approaches. Our method has a similar computational efficiency to the fastest approach available, i.e., real-time adaptive remeshing [1]. In comparison with state-of-the-art approaches, our method consistently generates better results based on evaluations using different metrics.
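
    Evaluations of remeshing quality of the kind referred to above often report the extreme interior angles of the output mesh. The sketch below computes the minimum and maximum triangle angles for a mesh given as vertex and face arrays; the toy two-triangle mesh is purely illustrative.

```python
# Sketch of an angle-quality metric for a triangle mesh: the minimum and
# maximum interior angles over all faces (vertices and faces are assumed inputs).
import numpy as np

def angle_range(vertices, faces):
    """Return (min_angle, max_angle) in degrees over all triangles."""
    tri = vertices[faces]                    # shape (n_faces, 3, 3)
    angles = []
    for i in range(3):
        a = tri[:, (i + 1) % 3] - tri[:, i]
        b = tri[:, (i + 2) % 3] - tri[:, i]
        cosang = np.einsum('ij,ij->i', a, b) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    angles = np.concatenate(angles)
    return angles.min(), angles.max()

# Toy mesh: two triangles covering a unit square.
V = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
F = np.array([[0, 1, 2], [0, 2, 3]])
print(angle_range(V, F))                     # ≈ (45.0, 90.0)
```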

  16. Analysis On Security Of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Muhammad Zunnurain Hussain

    2017-01-01

    Full Text Available In this paper the author discusses the security issues and challenges faced by the industry in securing cloud computing and how these problems can be tackled. Cloud computing is a modern technique for sharing resources, such as data and files, without launching one's own infrastructure; it relies on third-party resources to avoid a huge investment. It is very challenging these days to secure the communication between two users, although people use different encryption techniques.

  17. Schottky signal analysis: tune and chromaticity computation

    CERN Document Server

    Chanon, Ondine

    2016-01-01

    Schottky monitors are used to determine important beam parameters in a non-destructive way. The Schottky signal is due to the internal statistical fluctuations of the particles inside the beam. In this report, after explaining the different components of a Schottky signal, an algorithm to compute the betatron tune is presented, followed by some ideas to compute machine chromaticity. The tests have been performed with offline and/or online LHC data.
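
    As a heavily simplified stand-in for the spectral step of such an analysis, the sketch below recovers a fractional betatron tune as the dominant line in the Fourier spectrum of synthetic turn-by-turn data; real Schottky analysis works on the noise sidebands of the beam signal, so this is illustrative only.

```python
# Simplified illustration of extracting a fractional betatron tune as the
# dominant line in a position spectrum, using synthetic turn-by-turn data.
# This is not the actual LHC Schottky-signal processing.
import numpy as np

rng = np.random.default_rng(3)
true_tune = 0.31                              # assumed fractional tune
n_turns = 4096
turns = np.arange(n_turns)
signal = np.cos(2 * np.pi * true_tune * turns) + 0.5 * rng.standard_normal(n_turns)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(n_turns)))
freqs = np.fft.rfftfreq(n_turns, d=1.0)        # frequencies in units of 1/turn
measured = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"measured fractional tune ≈ {measured:.4f}")
```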

  18. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum) developed for a hybrid computer installed at JAERI is described. Functions of the hybrid computer and its terminal devices are used ingeniously in the code to reduce the complexity of the data handling that occurs in the analysis of multivariable experimental data and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed as figures, and hard copies are taken when necessary; series messages from the code are shown on the terminal, so man-machine communication is possible; and, furthermore, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)

  19. Computer-Assisted Linguistic Analysis of the Peshitta

    NARCIS (Netherlands)

    Roorda, D.; Talstra, Eep; Dyk, Janet; van Keulen, Percy; Sikkel, Constantijn; Bosman, H.J.; Jenner, K.D.; Bakker, Dirk; Volkmer, J.A.; Gutman, Ariel; van Peursen, Wido Th.

    2014-01-01

    CALAP (Computer-Assisted Linguistic Analysis of the Peshitta), a joint research project of the Peshitta Institute Leiden and the Werkgroep Informatica at the Vrije Universiteit Amsterdam (1999-2005). CALAP concerned the computer-assisted analysis of the Peshitta to Kings (Janet Dyk and Percy van

  20. Run 2 analysis computing for CDF and D0

    International Nuclear Information System (INIS)

    Fuess, S.

    1995-11-01

    Two large experiments at the Fermilab Tevatron collider will use upgraded detectors for the next period of running. The associated analysis software is also expected to change, both to account for higher data rates and to embrace new computing paradigms. A discussion is given of the problems facing current and future High Energy Physics (HEP) analysis computing, and several issues are explored in detail

  1. Frequency modulation television analysis: Threshold impulse analysis. [with computer program

    Science.gov (United States)

    Hodge, W. H.

    1973-01-01

    A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.

  2. Acoustic reflection log in transversely isotropic formations

    Science.gov (United States)

    Ronquillo Jarillo, G.; Markova, I.; Markov, M.

    2018-01-01

    We have calculated the waveforms of sonic reflection logging for a fluid-filled borehole located in a transversely isotropic rock. Calculations have been performed for an acoustic impulse source with the characteristic frequency of tens of kilohertz that is considerably less than the frequencies of acoustic borehole imaging tools. It is assumed that the borehole axis coincides with the axis of symmetry of the transversely isotropic rock. It was shown that the reflected wave was excited most efficiently at resonant frequencies. These frequencies are close to the frequencies of oscillations of a fluid column located in an absolutely rigid hollow cylinder. We have shown that the acoustic reverberation is controlled by the acoustic impedance of the rock Z = Vphρs for fixed parameters of the borehole fluid, where Vph is the velocity of horizontally propagating P-wave; ρs is the rock density. The methods of waveform processing to determine the parameters characterizing the reflected wave have been discussed.

  3. Computational Analysis of SAXS Data Acquisition.

    Science.gov (United States)

    Dong, Hui; Kim, Jin Seob; Chirikjian, Gregory S

    2015-09-01

    Small-angle x-ray scattering (SAXS) is an experimental biophysical method used for gaining insight into the structure of large biomolecular complexes. Under appropriate chemical conditions, the information obtained from a SAXS experiment can be equated to the pair distribution function, which is the distribution of distances between every pair of points in the complex. Here we develop a mathematical model to calculate the pair distribution function for a structure of known density, and analyze the computational complexity of these calculations. Efficient recursive computation of this forward model is an important step in solving the inverse problem of recovering the three-dimensional density of biomolecular structures from their pair distribution functions. In particular, we show that integrals of products of three spherical-Bessel functions arise naturally in this context. We then develop an algorithm for the efficient recursive computation of these integrals.
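
    For a structure represented as a set of scattering points, the pair distribution function is simply the normalized histogram of all pairwise distances. The sketch below illustrates this with a hypothetical bead model; it is not the recursive forward model developed in the paper.

```python
# Sketch of the pair distribution function P(r) for a structure represented as
# a set of points: the histogram of all pairwise distances. The bead model
# below is a hypothetical stand-in for a known electron density.
import numpy as np

rng = np.random.default_rng(0)
beads = rng.normal(size=(300, 3)) * 10.0           # hypothetical bead coordinates

diffs = beads[:, None, :] - beads[None, :, :]
dists = np.linalg.norm(diffs, axis=-1)
dists = dists[np.triu_indices_from(dists, k=1)]    # keep each pair once

hist, edges = np.histogram(dists, bins=50, range=(0.0, dists.max()))
r = 0.5 * (edges[:-1] + edges[1:])                 # bin centers
p_of_r = hist / hist.sum()                         # normalized P(r)
print("Dmax ≈", dists.max(), "; peak of P(r) near r =", r[np.argmax(p_of_r)])
```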

  4. Quantitative study of neurofilament-positive fiber length in rat spinal cord lesions using isotropic virtual planes

    DEFF Research Database (Denmark)

    von Euler, Mia; Larsen, Jytte Overgaard; Janson, A M

    1998-01-01

    analysis after spinal cord injury is needed. Length quantification of the putatively spontaneously regenerating fibers has been difficult until recently, when two length estimators based on sampling with isotropic virtual planes within thick physical sections were introduced. The applicability...

  5. Computational and Physical Analysis of Catalytic Compounds

    Science.gov (United States)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometry. For this reason, synthesis of nanoparticles with controlled shape and size is important for exploiting these unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as GAMESS and Chemcraft have been used to compute the efficiencies of the catalytic compounds and the bonding energy changes during optimization convergence. The results illustrate how the metal oxides stabilize and the steps involved. The plot of energy (kcal/mol) versus computation step (N) shows that the energy of titania converges by the 7th iteration, whereas that of silica converges by the 9th iteration.

  6. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...

  7. Computer programs simplify optical system analysis

    Science.gov (United States)

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  8. Analysis of airways in computed tomography

    DEFF Research Database (Denmark)

    Petersen, Jens

    Chronic Obstructive Pulmonary Disease (COPD) is major cause of death and disability world-wide. It affects lung function through destruction of lung tissue known as emphysema and inflammation of airways, leading to thickened airway walls and narrowed airway lumen. Computed Tomography (CT) imaging...

  9. Affect and Learning : a computational analysis

    NARCIS (Netherlands)

    Broekens, Douwe Joost

    2007-01-01

    In this thesis we have studied the influence of emotion on learning. We have used computational modelling techniques to do so, more specifically, the reinforcement learning paradigm. Emotion is modelled as artificial affect, a measure that denotes the positiveness versus negativeness of a situation

  10. Temperature-dependent study of isotropic-nematic transition for a Gay-Berne fluid using density-functional theory

    International Nuclear Information System (INIS)

    Singh, Ram Chandra

    2007-01-01

    We have used the density-functional theory to study the effect of varying temperature on the isotropic-nematic transition of a fluid of molecules interacting via the Gay-Berne intermolecular potential. The nematic phase is found to be stable with respect to the isotropic phase in the temperature range 0.80≤T*≤1.25. The pair correlation functions needed as input information in density-functional theory are calculated using the Percus-Yevick integral equation theory. We find that the density-functional theory is good for studying the isotropic-nematic transition in molecular fluids if the values of the pair-correlation functions in the isotropic phase are known accurately. We have also compared our results with computer simulation results wherever they are available

  11. Adapting computational text analysis to social science (and vice versa)

    Directory of Open Access Journals (Sweden)

    Paul DiMaggio

    2015-11-01

    Full Text Available Social scientists and computer scientists are divided by small differences in perspective and not by any significant disciplinary divide. In the field of text analysis, several such differences are noted: social scientists often use unsupervised models to explore corpora, whereas many computer scientists employ supervised models to train data; social scientists hold to more conventional causal notions than do most computer scientists, and often favor intense exploitation of existing algorithms, whereas computer scientists focus more on developing new models; and computer scientists tend to trust human judgment more than social scientists do. These differences have implications that potentially can improve the practice of social science.

  12. Isotropic 3D cardiac cine MRI allows efficient sparse segmentation strategies based on 3D surface reconstruction.

    Science.gov (United States)

    Odille, Freddy; Bustin, Aurélien; Liu, Shufang; Chen, Bailiang; Vuissoz, Pierre-André; Felblinger, Jacques; Bonnemains, Laurent

    2018-05-01

    Segmentation of cardiac cine MRI data is routinely used for the volumetric analysis of cardiac function. Conventionally, 2D contours are drawn on short-axis (SAX) image stacks with relatively thick slices (typically 8 mm). Here, an acquisition/reconstruction strategy is used for obtaining isotropic 3D cine datasets; reformatted slices are then used to optimize the manual segmentation workflow. Isotropic 3D cine datasets were obtained from multiple 2D cine stacks (acquired during free-breathing in SAX and long-axis (LAX) orientations) using nonrigid motion correction (cine-GRICS method) and super-resolution. Several manual segmentation strategies were then compared, including conventional SAX segmentation, LAX segmentation in three views only, and combinations of SAX and LAX slices. An implicit B-spline surface reconstruction algorithm is proposed to reconstruct the left ventricular cavity surface from the sparse set of 2D contours. All tested sparse segmentation strategies were in good agreement, with Dice scores above 0.9 despite using fewer slices (3-6 sparse slices instead of 8-10 contiguous SAX slices). When compared to independent phase-contrast flow measurements, stroke volumes computed from four or six sparse slices had slightly higher precision than conventional SAX segmentation (error standard deviation of 5.4 mL against 6.1 mL) at the cost of slightly lower accuracy (bias of -1.2 mL against 0.2 mL). Functional parameters also showed a trend toward improved precision, including end-diastolic volumes, end-systolic volumes, and ejection fractions. The postprocessing workflow of 3D isotropic cardiac imaging strategies can be optimized using sparse segmentation and 3D surface reconstruction. Magn Reson Med 79:2665-2675, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
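
    The Dice score used above to compare segmentations is straightforward to compute from binary masks, as the short sketch below illustrates on two overlapping synthetic spheres.

```python
# Sketch of the Dice overlap score between two binary segmentation masks.
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 3D masks: two overlapping spheres on a voxel grid.
z, y, x = np.mgrid[:64, :64, :64]
sphere1 = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
sphere2 = (x - 35) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
print(f"Dice = {dice(sphere1, sphere2):.3f}")
```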

  13. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general-purpose computer system, THESEUS, is described; its initial use has been for magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  14. Interface between computational fluid dynamics (CFD) and plant analysis computer codes

    International Nuclear Information System (INIS)

    Coffield, R.D.; Dunckhorst, F.F.; Tomlinson, E.T.; Welch, J.W.

    1993-01-01

    Computational fluid dynamics (CFD) can provide valuable input to the development of advanced plant analysis computer codes. The types of interfacing discussed in this paper will directly contribute to modeling and accuracy improvements throughout the plant system and should result in significant reduction of design conservatisms that have been applied to such analyses in the past

  15. Computational analysis of ozonation in bubble columns

    International Nuclear Information System (INIS)

    Quinones-Bolanos, E.; Zhou, H.; Otten, L.

    2002-01-01

    This paper presents a new computational ozonation model based on the principle of computational fluid dynamics along with the kinetics of ozone decay and microbial inactivation to predict the performance of ozone disinfection in fine bubble columns. The model can be represented using a mixture two-phase flow model to simulate the hydrodynamics of the water flow and using two transport equations to track the concentration profiles of ozone and microorganisms along the height of the column, respectively. The applicability of this model was then demonstrated by comparing the simulated ozone concentrations with experimental measurements obtained from a pilot scale fine bubble column. One distinct advantage of this approach is that it does not require the prerequisite assumptions such as plug flow condition, perfect mixing, tanks-in-series, uniform radial or longitudinal dispersion in predicting the performance of disinfection contactors without carrying out expensive and tedious tracer studies. (author)

  16. A tilted transversely isotropic slowness surface approximation

    KAUST Repository

    Stovas, A.

    2012-05-09

    The relation between vertical and horizontal slownesses, better known as the dispersion relation, for transversely isotropic media with a tilted symmetry axis (TTI) requires solving a quartic polynomial equation, which does not admit a practical explicit solution to be used, for example, in downward continuation. Using a combination of the perturbation theory with respect to the anelliptic parameter and Shanks transform to improve the accuracy of the expansion, we develop an explicit formula for the vertical slowness that is highly accurate for all practical purposes. It also reveals some insights into the anisotropy parameter dependency of the dispersion relation including the low impact that the anelliptic parameter has on the vertical placement of reflectors for a small tilt in the symmetry angle. © 2012 European Association of Geoscientists & Engineers.

  17. Linearized holographic isotropization at finite coupling

    Energy Technology Data Exchange (ETDEWEB)

    Atashi, Mahdi; Fadafan, Kazem Bitaghsir [Shahrood University of Technology, Physics Department (Iran, Islamic Republic of); Jafari, Ghadir [Institute for Research in Fundamental Sciences (IPM), School of Physics, Tehran (Iran, Islamic Republic of)

    2017-06-15

    We study holographic isotropization of an anisotropic homogeneous non-Abelian strongly coupled plasma in the presence of Gauss-Bonnet corrections. It was verified before that one can linearize Einstein's equations around the final black hole background and simplify the complicated setup. Using this approach, we study the expectation value of the boundary stress tensor. Although we consider small values of the Gauss-Bonnet coupling constant, it is found that finite coupling leads to significant increasing of the thermalization time. By including higher order corrections in linearization, we extend the results to study the effect of the Gauss-Bonnet coupling on the entropy production on the event horizon. (orig.)

  18. Effective elastic properties of damaged isotropic solids

    International Nuclear Information System (INIS)

    Lee, U Sik

    1998-01-01

    In continuum damage mechanics, damaged solids have been represented by the effective elastic stiffness into which local damage is smoothly smeared. Similarly, damaged solids may be represented in terms of effective elastic compliances. By virtue of the effective elastic compliance representation, it may become easier to derive the effective engineering constants of damaged solids from the effective elastic compliances, all in closed form. Thus, in this paper, by using a continuum modeling approach based on both the principle of strain energy equivalence and the equivalent elliptical micro-crack representation of local damage, the effective elastic compliances and effective engineering constants are derived in terms of the undamaged (virgin) elastic properties and a scalar damage variable for both damaged two- and three-dimensional isotropic solids
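
    For orientation, a commonly quoted closed form under the strain-energy-equivalence hypothesis reduces the Young's and shear moduli by a factor (1 - D)^2 for a scalar damage variable D; the sketch below uses this assumed form for illustration and does not reproduce the paper's elliptical micro-crack based expressions.

```python
# Illustrative reduction of engineering constants with a scalar damage variable D.
# Under strain-energy equivalence a commonly quoted form is E_eff = E * (1 - D)**2;
# the same reduction is assumed here for the shear modulus. This is not the
# paper's own (elliptical micro-crack based) derivation.
def effective_constants(E, nu, D):
    """Return (E_eff, G_eff) for an isotropic solid with scalar damage D."""
    E_eff = E * (1.0 - D) ** 2
    G = E / (2.0 * (1.0 + nu))          # undamaged shear modulus
    G_eff = G * (1.0 - D) ** 2          # same reduction assumed for shear
    return E_eff, G_eff

for D in (0.0, 0.1, 0.3):
    E_eff, G_eff = effective_constants(E=210e9, nu=0.3, D=D)
    print(f"D = {D:.1f}: E_eff = {E_eff/1e9:6.1f} GPa, G_eff = {G_eff/1e9:6.1f} GPa")
```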

  19. New bounds on isotropic Lorentz violation

    International Nuclear Information System (INIS)

    Carone, Christopher D.; Sher, Marc; Vanderhaeghen, Marc

    2006-01-01

    Violations of Lorentz invariance that appear via operators of dimension four or less are completely parametrized in the Standard Model Extension (SME). In the pure photonic sector of the SME, there are 19 dimensionless, Lorentz-violating parameters. Eighteen of these have experimental upper bounds ranging between 10^-11 and 10^-32; the remaining parameter, k̃_tr, is isotropic and has a much weaker bound of order 10^-4. In this Brief Report, we point out that k̃_tr gives a significant contribution to the anomalous magnetic moment of the electron and find a new upper bound of order 10^-8. With reasonable assumptions, we further show that this bound may be improved to 10^-14 by considering the renormalization of other Lorentz-violating parameters that are more tightly constrained. Using similar renormalization arguments, we also estimate bounds on Lorentz-violating parameters in the pure gluonic sector of QCD

  20. Isotropic and anisotropic surface wave cloaking techniques

    International Nuclear Information System (INIS)

    McManus, T M; Spada, L La; Hao, Y

    2016-01-01

    In this paper we compare two different approaches for surface waves cloaking. The first technique is a unique application of Fermat’s principle and requires isotropic material properties, but owing to its derivation is limited in its applicability. The second technique utilises a geometrical optics approximation for dealing with rays bound to a two dimensional surface and requires anisotropic material properties, though it can be used to cloak any smooth surface. We analytically derive the surface wave scattering behaviour for both cloak techniques when applied to a rotationally symmetric surface deformation. Furthermore, we simulate both using a commercially available full-wave electromagnetic solver and demonstrate a good level of agreement with their analytically derived solutions. Our analytical solutions and simulations provide a complete and concise overview of two different surface wave cloaking techniques. (paper)

  1. Isotropic and anisotropic surface wave cloaking techniques

    Science.gov (United States)

    McManus, T. M.; La Spada, L.; Hao, Y.

    2016-04-01

    In this paper we compare two different approaches for surface waves cloaking. The first technique is a unique application of Fermat’s principle and requires isotropic material properties, but owing to its derivation is limited in its applicability. The second technique utilises a geometrical optics approximation for dealing with rays bound to a two dimensional surface and requires anisotropic material properties, though it can be used to cloak any smooth surface. We analytically derive the surface wave scattering behaviour for both cloak techniques when applied to a rotationally symmetric surface deformation. Furthermore, we simulate both using a commercially available full-wave electromagnetic solver and demonstrate a good level of agreement with their analytically derived solutions. Our analytical solutions and simulations provide a complete and concise overview of two different surface wave cloaking techniques.

  2. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  3. Hybrid soft computing systems for electromyographic signals analysis: a review

    Science.gov (United States)

    2014-01-01

    The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for such purposes. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis. PMID:24490979

  4. Hybrid soft computing systems for electromyographic signals analysis: a review.

    Science.gov (United States)

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    The electromyogram (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.

  5. Accident sequence analysis of human-computer interface design

    International Nuclear Information System (INIS)

    Fan, C.-F.; Chen, W.-H.

    2000-01-01

    It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques, an Augmented Fault Tree Analysis, and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points

  6. Application of microarray analysis on computer cluster and cloud platforms.

    Science.gov (United States)

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.

  7. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers for high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  8. Isogeometric analysis : a calculus for computational mechanics

    NARCIS (Netherlands)

    Benson, D.J.; Borst, de R.; Hughes, T.J.R.; Scott, M.A.; Verhoosel, C.V.; Topping, B.H.V.; Adam, J.M.; Pallarés, F.J.; Bru, R.; Romero, M.L.

    2010-01-01

    The first paper on isogeometric analysis appeared only five years ago [1], and the first book appeared last year [2]. Progress has been rapid. Isogeometric analysis has been applied to a wide variety of problems in solids, fluids and fluid-structure interactions. Superior accuracy to traditional

  9. Computer-Based Interaction Analysis with DEGREE Revisited

    Science.gov (United States)

    Barros, B.; Verdejo, M. F.

    2016-01-01

    We review our research with "DEGREE" and analyse how our work has impacted the collaborative learning community since 2000. Our research is framed within the context of computer-based interaction analysis and the development of computer-supported collaborative learning (CSCL) tools. We identify some aspects of our work which have been…

  10. Isotopic analysis of plutonium by computer controlled mass spectrometry

    International Nuclear Information System (INIS)

    1974-01-01

    Isotopic analysis of plutonium chemically purified by ion exchange is achieved using a thermal ionization mass spectrometer. Data acquisition from and control of the instrument is done automatically with a dedicated system computer in real time with subsequent automatic data reduction and reporting. Separation of isotopes is achieved by varying the ion accelerating high voltage with accurate computer control

  11. Computer Programme for the Dynamic Analysis of Tall Regular ...

    African Journals Online (AJOL)

    The traditional method of dynamic analysis of tall rigid frames assumes the shear frame model. Models that allow joint rotations with/without the inclusion of the column axial loads give improved results but pose much more computational difficulty. In this work a computer program Natfrequency that determines the dynamic ...

  12. Computer use and carpal tunnel syndrome: A meta-analysis.

    Science.gov (United States)

    Shiri, Rahman; Falah-Hassani, Kobra

    2015-02-15

    Studies have reported contradictory results on the role of keyboard or mouse use in carpal tunnel syndrome (CTS). This meta-analysis aimed to assess whether computer use causes CTS. Literature searches were conducted in several databases until May 2014. Twelve studies qualified for a random-effects meta-analysis. Heterogeneity and publication bias were assessed. In a meta-analysis of six studies (N=4964) that compared computer workers with the general population or other occupational populations, computer/typewriter use (pooled odds ratio (OR)=0.72, 95% confidence interval (CI) 0.58-0.90), computer/typewriter use ≥1 vs. computer/typewriter use ≥4 vs. computer/typewriter use (pooled OR=1.34, 95% CI 1.08-1.65), mouse use (OR=1.93, 95% CI 1.43-2.61), frequent computer use (OR=1.89, 95% CI 1.15-3.09), frequent mouse use (OR=1.84, 95% CI 1.18-2.87) and with years of computer work (OR=1.92, 95% CI 1.17-3.17 for long vs. short). There was no evidence of publication bias for both types of studies. Studies that compared computer workers with the general population or several occupational groups did not control their estimates for occupational risk factors. Thus, office workers with no or little computer use are a more appropriate comparison group than the general population or several occupational groups. This meta-analysis suggests that excessive computer use, particularly mouse usage might be a minor occupational risk factor for CTS. Further prospective studies among office workers with objectively assessed keyboard and mouse use, and CTS symptoms or signs confirmed by a nerve conduction study are needed. Copyright © 2014 Elsevier B.V. All rights reserved.
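
    The pooled odds ratios quoted above come from a random-effects meta-analysis. As a minimal sketch of how such pooling works (the standard DerSimonian-Laird estimator on the log-odds scale, applied to hypothetical study values rather than the figures from this review), the following Python fragment may help:

        import math

        # Hypothetical per-study odds ratios with 95% CIs (illustrative only,
        # not the values analysed in the review above).
        studies = [(0.90, 0.60, 1.35), (1.40, 1.00, 1.96), (1.10, 0.70, 1.73)]

        # Work on the log-OR scale; recover each standard error from the CI width.
        y = [math.log(or_) for or_, lo, hi in studies]
        se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
        v = [s * s for s in se]

        # Fixed-effect weights and Cochran's Q heterogeneity statistic.
        w = [1.0 / vi for vi in v]
        y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))

        # DerSimonian-Laird estimate of the between-study variance tau^2.
        c = sum(w) - sum(wi * wi for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(studies) - 1)) / c)

        # Random-effects weights, pooled log-OR and its 95% confidence interval.
        w_re = [1.0 / (vi + tau2) for vi in v]
        y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
        se_re = math.sqrt(1.0 / sum(w_re))
        print(f"pooled OR = {math.exp(y_re):.2f}, "
              f"95% CI = ({math.exp(y_re - 1.96 * se_re):.2f}, "
              f"{math.exp(y_re + 1.96 * se_re):.2f})")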

  13. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  14. Analysis and Assessment of Computer-Supported Collaborative Learning Conversations

    NARCIS (Netherlands)

    Trausan-Matu, Stefan

    2008-01-01

    Trausan-Matu, S. (2008). Analysis and Assessment of Computer-Supported Collaborative Learning Conversations. Workshop presentation at the symposium Learning networks for professional. November, 14, 2008, Heerlen, Nederland: Open Universiteit Nederland.

  15. Surveillance Analysis Computer System (SACS) software requirements specification (SRS)

    International Nuclear Information System (INIS)

    Glasscock, J.A.; Flanagan, M.J.

    1995-09-01

    This document is the primary document establishing requirements for the Surveillance Analysis Computer System (SACS) Database, an Impact Level 3Q system. The purpose is to provide the customer and the performing organization with the requirements for the SACS Project

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  17. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  18. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)

  19. Use of computer codes for system reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sabek, M.; Gaafar, M. (Nuclear Regulatory and Safety Centre, Atomic Energy Authority, Cairo (Egypt)); Poucet, A. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author).

  20. Computer System Analysis for Decommissioning Management of Nuclear Reactor

    International Nuclear Information System (INIS)

    Nurokhim; Sumarbagiono

    2008-01-01

    Nuclear reactor decommissioning is a complex activity that should be planned and implemented carefully. A computer-based system needs to be developed to support nuclear reactor decommissioning. Some computer systems have been studied for the management of nuclear power reactors. The software systems COSMARD and DEXUS, developed in Japan, and IDMT, developed in Italy, are used as models for analysis and discussion. It can be concluded that a computer system for nuclear reactor decommissioning management is quite complex, involving computer codes for radioactive inventory database calculation, calculation modules for the stages of the decommissioning phases, and spatial data system development for virtual reality. (author)

  1. System Matrix Analysis for Computed Tomography Imaging

    Science.gov (United States)

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
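
    The Siddon method referred to above computes, for each projection ray, the lengths of its intersections with the image pixels; these lengths form one row of the system matrix. A minimal two-dimensional sketch of the idea (a simplified variant on a unit-spaced grid, not the authors' implementation) could look like this:

        import numpy as np

        def ray_pixel_lengths(p0, p1, nx, ny):
            """Siddon-style intersection lengths of the segment p0 -> p1 with the
            pixels of an nx-by-ny unit-spaced grid covering [0, nx] x [0, ny]."""
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            d = p1 - p0
            alphas = [0.0, 1.0]
            # Parametric values where the ray crosses vertical/horizontal grid lines.
            for axis, n in ((0, nx), (1, ny)):
                if abs(d[axis]) > 1e-12:
                    a = (np.arange(n + 1) - p0[axis]) / d[axis]
                    alphas.extend(a[(a > 0.0) & (a < 1.0)])
            alphas = np.unique(alphas)
            ray_length = np.linalg.norm(d)
            lengths = {}
            for a0, a1 in zip(alphas[:-1], alphas[1:]):
                mid = p0 + 0.5 * (a0 + a1) * d        # midpoint of the sub-segment
                i, j = int(mid[0]), int(mid[1])       # pixel containing the midpoint
                if 0 <= i < nx and 0 <= j < ny:
                    lengths[(i, j)] = lengths.get((i, j), 0.0) + (a1 - a0) * ray_length
            return lengths

        # One row of the system matrix: a ray crossing a 4x4 image diagonally.
        print(ray_pixel_lengths((0.0, 0.3), (4.0, 3.7), 4, 4))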

  2. Computational analysis of sequence selection mechanisms.

    Science.gov (United States)

    Meyerguz, Leonid; Grasso, Catherine; Kleinberg, Jon; Elber, Ron

    2004-04-01

    Mechanisms leading to gene variations are responsible for the diversity of species and are important components of the theory of evolution. One constraint on gene evolution is that of protein foldability; the three-dimensional shapes of proteins must be thermodynamically stable. We explore the impact of this constraint and calculate properties of foldable sequences using 3660 structures from the Protein Data Bank. We seek a selection function that receives sequences as input, and outputs survival probability based on sequence fitness to structure. We compute the number of sequences that match a particular protein structure with energy lower than the native sequence, the density of the number of sequences, the entropy, and the "selection" temperature. The mechanism of structure selection for sequences longer than 200 amino acids is approximately universal. For shorter sequences, it is not. We speculate on concrete evolutionary mechanisms that show this behavior.

  3. Waterlike glass polyamorphism in a monoatomic isotropic Jagla model.

    Science.gov (United States)

    Xu, Limei; Giovambattista, Nicolas; Buldyrev, Sergey V; Debenedetti, Pablo G; Stanley, H Eugene

    2011-02-14

    We perform discrete-event molecular dynamics simulations of a system of particles interacting with a spherically-symmetric (isotropic) two-scale Jagla pair potential characterized by a hard inner core, a linear repulsion at intermediate separations, and a weak attractive interaction at larger separations. This model system has been extensively studied due to its ability to reproduce many thermodynamic, dynamic, and structural anomalies of liquid water. The model is also interesting because: (i) it is very simple, being composed of isotropically interacting particles, (ii) it exhibits polyamorphism in the liquid phase, and (iii) its slow crystallization kinetics facilitate the study of glassy states. There is interest in the degree to which the known polyamorphism in glassy water may have parallels in liquid water. Motivated by parallels between the properties of the Jagla potential and those of water in the liquid state, we study the metastable phase diagram in the glass state. Specifically, we perform the computational analog of the protocols followed in the experimental studies of glassy water. We find that the Jagla potential calculations reproduce three key experimental features of glassy water: (i) the crystal-to-high-density amorphous solid (HDA) transformation upon isothermal compression, (ii) the low-density amorphous solid (LDA)-to-HDA transformation upon isothermal compression, and (iii) the HDA-to-very-high-density amorphous solid (VHDA) transformation upon isobaric annealing at high pressure. In addition, the HDA-to-LDA transformation upon isobaric heating, observed in water experiments, can only be reproduced in the Jagla model if a free surface is introduced in the simulation box. The HDA configurations obtained in cases (i) and (ii) are structurally indistinguishable, suggesting that both processes result in the same glass. With the present parametrization, the evolution of density with pressure or temperature is remarkably similar to the
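
    For readers unfamiliar with the model, the two-scale Jagla pair potential described above is just a piecewise-linear function of the interparticle distance. The sketch below evaluates it; the parameter values a, b, c, u_r and u_a are illustrative assumptions, not necessarily the parametrization used in this study:

        import numpy as np

        def jagla(r, a=1.0, b=1.72, c=3.0, u_r=3.56, u_a=1.0):
            """Two-scale Jagla ramp potential: hard core for r < a, a linear
            repulsive ramp from +u_r at r = a down to the minimum -u_a at r = b,
            a linear attractive branch rising back to zero at r = c, and zero
            beyond c.  Parameter values here are illustrative."""
            r = np.asarray(r, float)
            u = np.zeros_like(r)
            u[r < a] = np.inf                              # hard core
            ramp = (r >= a) & (r < b)
            u[ramp] = u_r + (r[ramp] - a) * (-u_a - u_r) / (b - a)
            tail = (r >= b) & (r < c)
            u[tail] = -u_a + (r[tail] - b) * u_a / (c - b)
            return u

        r = np.linspace(0.9, 3.5, 6)
        print(np.round(jagla(r), 3))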

  4. Process for computing geometric perturbations for probabilistic analysis

    Science.gov (United States)

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  5. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to exchange the required information without any problem and was carried out using a simple MPI Hello program written in the C language. Additionally, a performance test was done to show that the calculation performance of the cluster is much better than that of a single-CPU computer. In this performance test, four runs were made by executing the same code on a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the time required for the calculation is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware which is capable of higher computing power compared to a single-CPU processor, and this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
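
    The communication test mentioned above was a simple MPI "Hello" program written in C. An equivalent minimal check, sketched here in Python with mpi4py (this assumes mpi4py and an MPI runtime such as MPICH2 are installed), gathers a greeting from every rank and would be launched with something like mpiexec -n 8 python hello_mpi.py:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()
        node = MPI.Get_processor_name()

        # Each process reports in; rank 0 gathers and prints the replies,
        # confirming that all nodes of the cluster can exchange messages.
        msg = f"Hello from rank {rank} of {size} on {node}"
        replies = comm.gather(msg, root=0)
        if rank == 0:
            for line in replies:
                print(line)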

  6. Scalar Statistics along Inertial Particle Trajectory in Isotropic Turbulence

    International Nuclear Information System (INIS)

    Ya-Ming, Liu; Zhao-Hui, Liu; Hai-Feng, Han; Jing, Li; Han-Feng, Wang; Chu-Guang, Zheng

    2009-01-01

    The statistics of a passive scalar along inertial particle trajectories in homogeneous isotropic turbulence with a mean scalar gradient are investigated by using direct numerical simulation. We are interested in the influence of particle inertia on such statistics, which is crucial for further understanding and development of models in non-isothermal gas-particle flows. The results show that the scalar variance along the particle trajectory first decreases with increasing particle inertia while the particle's Stokes number St is less than 1.0, reaches its minimal value when St is around 1.0, and then increases as St increases further. However, the scalar dissipation rate along the particle trajectory shows completely contrasting behavior in comparison with the scalar variance. The mechanical-to-thermal time scale ratios averaged along particle trajectories, r_p, are approximately two times smaller than those computed in the Eulerian frame, r, and stay at nearly 1.77 with a weak dependence on particle inertia. In addition, the correlations between scalar dissipation and flow structure characteristics along particle trajectories, such as strain and vorticity, are also computed; they reach their maximum and minimum, 0.31 and 0.25, respectively, when St is around 1.0. (fundamental areas of phenomenology (including applications))

  7. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  8. On the decay of homogeneous isotropic turbulence

    Science.gov (United States)

    Skrbek, L.; Stalp, Steven R.

    2000-08-01

    Decaying homogeneous, isotropic turbulence is investigated using a phenomenological model based on the three-dimensional turbulent energy spectra. We generalize the approach first used by Comte-Bellot and Corrsin [J. Fluid Mech. 25, 657 (1966)] and revised by Saffman [J. Fluid Mech. 27, 581 (1967); Phys. Fluids 10, 1349 (1967)]. At small wave numbers we assume the spectral energy is proportional to the wave number to an arbitrary power. The specific case of power 2, which follows from the Saffman invariant, is discussed in detail and is later shown to best describe experimental data. For the spectral energy density in the inertial range we apply both the Kolmogorov -5/3 law, E(k) = Cε^(2/3)k^(-5/3), and the refined Kolmogorov law by taking into account intermittency. We show that intermittency affects the energy decay mainly by shifting the position of the virtual origin rather than altering the power law of the energy decay. Additionally, the spectrum is naturally truncated due to the size of the wind tunnel test section, as eddies larger than the physical size of the system cannot exist. We discuss effects associated with the energy-containing length scale saturating at the size of the test section and predict a change in the power law decay of both energy and vorticity. To incorporate viscous corrections to the model, we truncate the spectrum at an effective Kolmogorov wave number k_η = γ(ε/ν³)^(1/4), where γ is a dimensionless parameter of order unity. We show that as the turbulence decays, viscous corrections gradually become more important and a simple power law can no longer describe the decay. We discuss the final period of decay within the framework of our model, and show that care must be taken to distinguish between the final period of decay and the change of the character of decay due to the saturation of the energy containing length scale. The model is applied to a number of experiments on decaying turbulence. These include the downstream decay of turbulence in
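
    A rough numerical illustration of the kind of model spectrum described above, with a k² branch at small wave numbers matched to the Kolmogorov inertial range E(k) = Cε^(2/3)k^(-5/3) and truncated at k_η = γ(ε/ν³)^(1/4), might look as follows (all parameter values are illustrative, not those of the paper):

        import numpy as np

        def model_spectrum(k, k_e, eps, nu, C=1.5, gamma=1.0):
            """Piecewise model spectrum: E ~ k^2 below the energy-containing wave
            number k_e, the Kolmogorov form C*eps^(2/3)*k^(-5/3) above it, and zero
            beyond the effective Kolmogorov wave number k_eta = gamma*(eps/nu^3)^(1/4).
            The k^2 branch is matched to the inertial range at k_e for continuity."""
            k_eta = gamma * (eps / nu**3) ** 0.25
            e_inertial = C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)
            e_low = C * eps ** (2.0 / 3.0) * k_e ** (-5.0 / 3.0) * (k / k_e) ** 2
            E = np.where(k < k_e, e_low, e_inertial)
            return np.where(k <= k_eta, E, 0.0)

        k = np.logspace(-2, 4, 2000)
        E = model_spectrum(k, k_e=0.1, eps=1.0, nu=1e-4)
        total_energy = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(k))   # trapezoid rule
        print(f"kinetic energy per unit mass = {total_energy:.3f}")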

  9. Isotropic compression of cohesive-frictional particles with rolling resistance

    NARCIS (Netherlands)

    Luding, Stefan; Benz, Thomas; Nordal, Steinar

    2010-01-01

    Cohesive-frictional and rough powders are the subject of this study. The behavior under isotropic compression is examined for different material properties involving Coulomb friction, rolling-resistance and contact-adhesion. Under isotropic compression, the density continuously increases according

  10. The revised geometric measure of entanglement for isotropic state

    International Nuclear Information System (INIS)

    Cao Ya

    2011-01-01

    Based on the revised geometric measure of entanglement (RGME), we obtain the analytical expression for the isotropic state and generalize it to the n-particle and d-dimensional mixed-state case. Meanwhile, we obtain the relation Ẽ_sin²(ρ) ≤ E_re(ρ) for the isotropic state. The results indicate that RGME is an appropriate measure of entanglement. (authors)

  11. Contact mechanics and friction for transversely isotropic viscoelastic materials

    NARCIS (Netherlands)

    Mokhtari, Milad; Schipper, Dirk J.; Vleugels, N.; Noordermeer, Jacobus W.M.; Yoshimoto, S.; Hashimoto, H.

    2015-01-01

    Transversely isotropic materials are a unique group of materials whose properties are the same along two of the principal axes of a Cartesian coordinate system. Various natural and artificial materials behave effectively as transversely isotropic elastic solids. Several materials can be classified

  12. Computational Analysis of Spray Jet Flames

    Science.gov (United States)

    Jain, Utsav

    There is a boost in the utilization of renewable sources of energy but because of high energy density applications, combustion will never be obsolete. Spray combustion is a type of multiphase combustion which has tremendous engineering applications in different fields, varying from energy conversion devices to rocket propulsion system. Developing accurate computational models for turbulent spray combustion is vital for improving the design of combustors and making them energy efficient. Flamelet models have been extensively used for gas phase combustion because of their relatively low computational cost to model the turbulence-chemistry interaction using a low dimensional manifold approach. This framework is designed for gas phase non-premixed combustion and its implementation is not very straight forward for multiphase and multi-regime combustion such as spray combustion. This is because of the use of a conserved scalar and various flamelet related assumptions. Mixture fraction has been popularly employed as a conserved scalar and hence used to parameterize the characteristics of gaseous flamelets. However, for spray combustion, the mixture fraction is not monotonic and does not give a unique mapping in order to parameterize the structure of spray flames. In order to develop a flamelet type model for spray flames, a new variable called the mixing variable is introduced which acts as an ideal conserved scalar and takes into account the convection and evaporation of fuel droplets. In addition to the conserved scalar, it has been observed that though gaseous flamelets can be characterized by the conserved scalar and its dissipation, this might not be true for spray flamelets. Droplet dynamics has a significant influence on the spray flamelet and because of effects such as flame penetration of droplets and oscillation of droplets across the stagnation plane, it becomes important to accommodate their influence in the flamelet formulation. In order to recognize the

  13. Computational analysis of thresholds for magnetophosphenes

    International Nuclear Information System (INIS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-01-01

    In international guidelines, basic restriction limits on the exposure of humans to low-frequency magnetic and electric fields are set with the objective of preventing the generation of phosphenes, visual sensations of flashing light not caused by light. Measured data on magnetophosphenes, i.e. phosphenes caused by a magnetically induced electric field on the retina, are available from volunteer studies. However, there is no simple way for determining the retinal threshold electric field or current density from the measured threshold magnetic flux density. In this study, the experimental field configuration of a previous study, in which phosphenes were generated in volunteers by exposing their heads to a magnetic field between the poles of an electromagnet, is computationally reproduced. The finite-element method is used for determining the induced electric field and current in five different MRI-based anatomical models of the head. The direction of the induced current density on the retina is dominantly radial to the eyeball, and the maximum induced current density is observed at the superior and inferior sides of the retina, which agrees with literature data on the location of magnetophosphenes at the periphery of the visual field. On the basis of computed data, the macroscopic retinal threshold current density for phosphenes at 20 Hz can be estimated as 10 mA m⁻² (−20% to +30%, depending on the anatomical model); this current density corresponds to an induced eddy current of 14 μA (−20% to +10%), and about 20% of this eddy current flows through each eye. The ICNIRP basic restriction limit for the induced electric field in the case of occupational exposure is not exceeded until the magnetic flux density is about two to three times the measured threshold for magnetophosphenes, so the basic restriction limit does not seem to be conservative. However, the reasons for the non-conservativeness are purely technical: removal of the highest 1% of

  14. Computer-automated neutron activation analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Garcia, S.R.

    1983-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. 5 references

  15. Computed tomographic analysis of urinary calculi

    International Nuclear Information System (INIS)

    Naito, Akira; Ito, Katsuhide; Ito, Shouko

    1986-01-01

    Computed tomography (CT) was employed in an effort to analyze the chemical composition of urinary calculi. Twenty-three surgically removed calculi were scanned in a water bath (in vitro study). Fourteen of them were scanned in the body (in vivo study). The calculi consisted of four types: mixed calcium oxalate and phosphate, mixed calcium carbonate and phosphate, magnesium ammonium phosphate, and uric acid. The in vitro study showed that the mean and maximum CT values of uric acid stones were significantly lower than those of the other three types of stones. This indicated that stones with CT values less than 450 HU are composed of uric acid. In the in vivo study, CT did not help to differentiate the three types of urinary calculi, except for uric acid stones. Regarding the mean CT values, there was no correlation between the in vitro and in vivo studies. An experiment with commercially available drugs showed that the CT values of urinary calculi were not dependent upon the composition, but dependent upon the density of the calculi. (Namekawa, K.)

  16. Analysis of computational vulnerabilities in digital repositories

    Directory of Open Access Journals (Sweden)

    Valdete Fernandes Belarmino

    2015-04-01

    Objective. Presents the results of research that aimed to analyze the computational vulnerabilities of digital repositories in public universities. Argues the relevance of information in contemporary societies as an invaluable resource, emphasizing scientific information as an essential element of scientific progress. Characterizes the emergence of digital repositories and highlights their use in the academic environment to preserve, promote, disseminate and encourage scientific production. Describes the main software for the construction of digital repositories. Method. The investigation identified and analyzed the vulnerabilities to which digital repositories are exposed by running penetration tests, discriminating the levels of risk and the types of vulnerabilities. Results. From a sample of 30 repositories, we could examine 20 and found that 5% of the repositories have critical vulnerabilities, 85% high, 25% medium and 100% low. Conclusions. This demonstrates the necessity of adopting measures for these environments that promote information security, minimizing the incidence of external and/or internal attacks on the systems.

  17. The NANOGrav Nine-year Data Set: Limits on the Isotropic Stochastic Gravitational Wave Background

    OpenAIRE

    Arzoumanian, Zaven; Brazier, Adam; Burke-Spolaor, Sarah; Chamberlin, Sydney; Chatterjee, Shami; Christy, Brian; Cordes, Jim; Cornish, Neil; Demorest, Paul; Deng, Xihao; Dolch, Tim; Ellis, Justin; Ferdman, Rob; Fonseca, Emmanuel; Garver-Daniels, Nate

    2015-01-01

    We compute upper limits on the nanohertz-frequency isotropic stochastic gravitational wave background (GWB) using the 9-year data release from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration. We set upper limits for a GWB from supermassive black hole binaries under power law, broken power law, and free spectral coefficient GW spectrum models. We place a 95% upper limit on the strain amplitude (at a frequency of yr⁻¹) in the power law model of $A...

  18. Classification and Analysis of Computer Network Traffic

    OpenAIRE

    Bujlow, Tomasz

    2014-01-01

    Traffic monitoring and analysis can be done for multiple different reasons: to investigate the usage of network resources, assess the performance of network applications, adjust Quality of Service (QoS) policies in the network, log the traffic to comply with the law, or create realistic models of traffic for academic purposes. We define the objective of this thesis as finding a way to evaluate the performance of various applications in a high-speed Internet infrastructure. To satisfy the obje...

  19. How isotropic can the UHECR flux be?

    Science.gov (United States)

    di Matteo, Armando; Tinyakov, Peter

    2018-05-01

    Modern observatories of ultra-high energy cosmic rays (UHECR) have collected over 10⁴ events with energies above 10 EeV, whose arrival directions appear to be nearly isotropically distributed. On the other hand, the distribution of matter in the nearby Universe - and therefore presumably also that of UHECR sources - is not homogeneous. This is expected to leave an imprint on the angular distribution of UHECR arrival directions, though deflections by cosmic magnetic fields can confound the picture. In this work, we investigate quantitatively this apparent inconsistency. To this end we study observables sensitive to UHECR source inhomogeneities but robust to uncertainties on magnetic fields and the UHECR mass composition. We show, in a rather model-independent way, that if the source distribution tracks the overall matter distribution, the arrival directions at energies above 30 EeV should exhibit a sizeable dipole and quadrupole anisotropy, detectable by UHECR observatories in the very near future. Were it not the case, one would have to seriously reconsider the present understanding of cosmic magnetic fields and/or the UHECR composition. Also, we show that the lack of a strong quadrupole moment above 10 EeV in the current data already disfavours a pure proton composition, and that in the very near future measurements of the dipole and quadrupole moment above 60 EeV will be able to provide evidence about the UHECR mass composition at those energies.
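
    As a toy illustration of the low-order multipole observables discussed above, the sketch below estimates a dipole amplitude and a simple quadrupole measure from a set of arrival directions; it assumes full-sky uniform exposure and synthetic data with a small injected dipole, unlike the exposure-corrected analyses of the real observatories:

        import numpy as np

        rng = np.random.default_rng(0)

        # Isotropic toy sample of unit vectors, thinned to carry a small dipole
        # of amplitude A_true along +z (acceptance proportional to 1 + A_true*cos(theta)).
        n, A_true = 20000, 0.1
        u = rng.uniform(-1.0, 1.0, n)                    # cos(theta)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        dirs = np.column_stack([np.sqrt(1 - u**2) * np.cos(phi),
                                np.sqrt(1 - u**2) * np.sin(phi),
                                u])
        dirs = dirs[rng.uniform(0.0, 1.0, n) < (1.0 + A_true * u) / (1.0 + A_true)]

        # Full-sky, uniform-exposure dipole estimator: amplitude = 3 |<n>|.
        mean_vec = dirs.mean(axis=0)
        print(f"estimated dipole amplitude ~ {3.0 * np.linalg.norm(mean_vec):.3f}")

        # Traceless quadrupole tensor Q_ij = <3 n_i n_j - delta_ij>; its largest
        # eigenvalue is a crude measure of quadrupolar anisotropy.
        Q = 3.0 * (dirs[:, :, None] * dirs[:, None, :]).mean(axis=0) - np.eye(3)
        print(f"largest quadrupole eigenvalue ~ {np.linalg.eigvalsh(Q).max():.3f}")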

  20. On isotropic cylindrically symmetric stellar models

    International Nuclear Information System (INIS)

    Nolan, Brien C; Nolan, Louise V

    2004-01-01

    We attempt to match the most general cylindrically symmetric vacuum spacetime with a Robertson-Walker interior. The matching conditions show that the interior must be dust filled and that the boundary must be comoving. Further, we show that the vacuum region must be polarized. Imposing the condition that there are no trapped cylinders on an initial time slice, we can apply a result of Thorne's and show that trapped cylinders never evolve. This results in a simplified line element which we prove to be incompatible with the dust interior. This result demonstrates the impossibility of the existence of an isotropic cylindrically symmetric star (or even a star which has a cylindrically symmetric portion). We investigate the problem from a different perspective by looking at the expansion scalars of invariant null geodesic congruences and, applying to the cylindrical case, the result that the product of the signs of the expansion scalars must be continuous across the boundary. The result may also be understood in relation to recent results about the impossibility of the static axially symmetric analogue of the Einstein-Straus model

  1. Nonlinear elastic inclusions in isotropic solids

    KAUST Repository

    Yavari, A.

    2013-10-16

    We introduce a geometric framework to calculate the residual stress fields and deformations of nonlinear solids with inclusions and eigenstrains. Inclusions are regions in a body with different reference configurations from the body itself and can be described by distributed eigenstrains. Geometrically, the eigenstrains define a Riemannian 3-manifold in which the body is stress-free by construction. The problem of residual stress calculation is then reduced to finding a mapping from the Riemannian material manifold to the ambient Euclidean space. Using this construction, we find the residual stress fields of three model systems with spherical and cylindrical symmetries in both incompressible and compressible isotropic elastic solids. In particular, we consider a finite spherical ball with a spherical inclusion with uniform pure dilatational eigenstrain and we show that the stress in the inclusion is uniform and hydrostatic. We also show how singularities in the stress distribution emerge as a consequence of a mismatch between radial and circumferential eigenstrains at the centre of a sphere or the axis of a cylinder.

  2. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority. Following a loss of coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of these phenomena. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe is compared with experimental results.

  3. An approach to quantum-computational hydrologic inverse analysis.

    Science.gov (United States)

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  4. Cafts: computer aided fault tree analysis

    International Nuclear Information System (INIS)

    Poucet, A.

    1985-01-01

    The fault tree technique has become a standard tool for the analysis of safety and reliability of complex system. In spite of the costs, which may be high for a complete and detailed analysis of a complex plant, the fault tree technique is popular and its benefits are fully recognized. Due to this applications of these codes have mostly been restricted to simple academic examples and rarely concern complex, real world systems. In this paper an interactive approach to fault tree construction is presented. The aim is not to replace the analyst, but to offer him an intelligent tool which can assist him in modeling complex systems. Using the CAFTS-method, the analyst interactively constructs a fault tree in two phases: (1) In a first phase he generates an overall failure logic structure of the system; the macrofault tree. In this phase, CAFTS features an expert system approach to assist the analyst. It makes use of a knowledge base containing generic rules on the behavior of subsystems and components; (2) In a second phase the macrofault tree is further refined and transformed in a fully detailed and quantified fault tree. In this phase a library of plant-specific component failure models is used
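
    Once a detailed fault tree has been constructed, the quantification step amounts to propagating basic-event probabilities through the AND/OR gates. The fragment below is a generic sketch of that step under the assumption of independent basic events; the event names and probabilities are hypothetical and the code is not part of CAFTS:

        # A tiny fault tree evaluated bottom-up, assuming independent basic events.
        # OR gate: 1 - prod(1 - p_i); AND gate: prod(p_i).
        basic_events = {"pump_A_fails": 1e-3, "pump_B_fails": 1e-3,
                        "valve_stuck": 5e-4, "power_loss": 2e-4}

        tree = ("OR",
                ("AND", "pump_A_fails", "pump_B_fails"),   # both redundant pumps fail
                ("OR", "valve_stuck", "power_loss"))        # support system failures

        def probability(node):
            if isinstance(node, str):                       # basic event leaf
                return basic_events[node]
            gate, *children = node
            probs = [probability(child) for child in children]
            result = 1.0
            if gate == "AND":
                for p in probs:
                    result *= p
                return result
            if gate == "OR":
                for p in probs:
                    result *= (1.0 - p)
                return 1.0 - result
            raise ValueError(f"unknown gate {gate}")

        print(f"top event probability ~ {probability(tree):.2e}")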

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  6. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  7. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
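
    Two of the simpler measures listed above, the mean lung CT value and a density-mask percentage, can be sketched in a few lines. The synthetic HU distribution and the -700 HU threshold below are illustrative assumptions, not values taken from the cited studies:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-in for the HU values of segmented lung voxels.
        lung_hu = rng.normal(loc=-820, scale=90, size=200_000)

        mean_ct = lung_hu.mean()                          # mean CT value of the lungs
        p10, p90 = np.percentile(lung_hu, [10, 90])       # density-histogram percentiles

        # "Density mask": fraction of voxels above an attenuation threshold,
        # a crude marker of increased (fibrotic) attenuation.
        high_attenuation_pct = 100.0 * np.mean(lung_hu > -700)

        print(f"mean CT value        : {mean_ct:.1f} HU")
        print(f"10th/90th percentile : {p10:.1f} / {p90:.1f} HU")
        print(f"high-attenuation %   : {high_attenuation_pct:.1f}%")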

  8. Conference “Computational Analysis and Optimization” (CAO 2011)

    CERN Document Server

    Tiihonen, Timo; Tuovinen, Tero; Numerical Methods for Differential Equations, Optimization, and Technological Problems : Dedicated to Professor P. Neittaanmäki on His 60th Birthday

    2013-01-01

    This book contains the results in numerical analysis and optimization presented at the ECCOMAS thematic conference “Computational Analysis and Optimization” (CAO 2011) held in Jyväskylä, Finland, June 9–11, 2011. Both the conference and this volume are dedicated to Professor Pekka Neittaanmäki on the occasion of his sixtieth birthday. It consists of five parts that are closely related to his scientific activities and interests: Numerical Methods for Nonlinear Problems; Reliable Methods for Computer Simulation; Analysis of Noised and Uncertain Data; Optimization Methods; Mathematical Models Generated by Modern Technological Problems. The book also includes a short biography of Professor Neittaanmäki.

  9. Computer code for qualitative analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Yule, H.P.

    1979-01-01

    Computer code QLN1 provides complete analysis of gamma-ray spectra observed with Ge(Li) detectors and is used at both the National Bureau of Standards and the Environmental Protection Agency. It locates peaks, resolves multiplets, identifies component radioisotopes, and computes quantitative results. The qualitative-analysis (or component identification) algorithms feature thorough, self-correcting steps which provide accurate isotope identification in spite of errors in peak centroids, energy calibration, and other typical problems. The qualitative-analysis algorithm is described in this paper
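
    The peak-location step of such an analysis can be illustrated with a generic prominence-based search on a synthetic spectrum. This is not the QLN1 algorithm, and the peak positions, widths and count rates below are made up:

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(2)

        # Synthetic Ge(Li)-like spectrum: smooth background plus two Gaussian peaks.
        channels = np.arange(2048)
        expected = 200.0 * np.exp(-channels / 800.0)
        for centroid, area, sigma in ((662.0, 4000.0, 3.0), (1173.0, 2500.0, 3.5)):
            expected += area / (sigma * np.sqrt(2 * np.pi)) * np.exp(
                -0.5 * ((channels - centroid) / sigma) ** 2)
        counts = rng.poisson(expected)

        # Locate peaks that stand out from the local background.
        peaks, _ = find_peaks(counts, prominence=100, width=2)
        print("peak centroids (channels):", peaks)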

  10. A single-chip computer analysis system for liquid fluorescence

    International Nuclear Information System (INIS)

    Zhang Yongming; Wu Ruisheng; Li Bin

    1998-01-01

    The single-chip computer analysis system for liquid fluorescence is an intelligent analytic instrument, which is based on the principle that liquids containing hydrocarbons give out several characteristic fluorescences when irradiated by strong light. Besides a single-chip computer, the system makes use of the keyboard and the calculation and printing functions of a CASIO printing calculator. It combines optics, mechanics and electronics into one, and is small, light and practical, so it can be used for surface water sample analysis in oil fields and impurity analysis of other materials

  11. A Computational Discriminability Analysis on Twin Fingerprints

    Science.gov (United States)

    Liu, Yu; Srihari, Sargur N.

    Sharing similar genetic traits makes the investigation of twins an important study in forensics and biometrics. Fingerprints are one of the most commonly found types of forensic evidence. The similarity between twins’ prints is critical to establishing the reliability of fingerprint identification. We present a quantitative analysis of the discriminability of twin fingerprints on a new data set (227 pairs of identical twins and fraternal twins) recently collected from a twin population using both level 1 and level 2 features. Although the patterns of minutiae among twins are more similar than in the general population, the similarity of fingerprints of twins is significantly different from that between genuine prints of the same finger. Twins’ fingerprints are discriminable, with a 1.5%-1.7% higher EER than non-twins. Identical twins can be distinguished by examining fingerprints, with a slightly higher error rate than fraternal twins.

  12. Content Analysis of a Computer-Based Faculty Activity Repository

    Science.gov (United States)

    Baker-Eveleth, Lori; Stone, Robert W.

    2013-01-01

    The research presents an analysis of faculty opinions regarding the introduction of a new computer-based faculty activity repository (FAR) in a university setting. The qualitative study employs content analysis to better understand the phenomenon underlying these faculty opinions and to augment the findings from a quantitative study. A web-based…

  13. Computer-Aided Communication Satellite System Analysis and Optimization.

    Science.gov (United States)

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  14. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
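
    The article's own examples are given in MATLAB and R; an analogous embarrassingly parallel Monte Carlo sketch in Python, where each replication is an independent task farmed out to a process pool, might look like this (the toy loss model is purely illustrative):

        import multiprocessing as mp
        import random

        def simulate_once(seed):
            """One independent replication of a toy risk model: total loss from a
            random number of events with exponentially distributed severities."""
            rng = random.Random(seed)
            n_events = rng.randint(0, 10)
            return sum(rng.expovariate(1.0 / 1000.0) for _ in range(n_events))

        if __name__ == "__main__":
            n_runs = 100_000
            # Replications are computationally independent, so they parallelize
            # trivially across worker processes.
            with mp.Pool() as pool:
                losses = pool.map(simulate_once, range(n_runs), chunksize=1000)
            losses.sort()
            print("mean loss      :", sum(losses) / n_runs)
            print("95th percentile:", losses[int(0.95 * n_runs)])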

  15. Computer-Aided Qualitative Data Analysis with Word

    Directory of Open Access Journals (Sweden)

    Bruno Nideröst

    2002-05-01

    Despite some fragmentary references in the literature about qualitative methods, it is little known that Word can be successfully used for computer-aided Qualitative Data Analysis (QDA). Based on several Word standard operations, elementary QDA functions such as sorting data, code-and-retrieve and frequency counts can be realized. Word is particularly interesting for those users who wish to have first experiences with computer-aided analysis before investing time and money in a specialized QDA program. The well-known standard software could also be an option for those qualitative researchers who usually work with word processing but have certain reservations towards computer-aided analysis. The following article deals with the most important requirements and options of Word for computer-aided QDA. URN: urn:nbn:de:0114-fqs0202225

  16. Calculation of beam paths in optical systems containing inhomogeneous isotropic media with cylindrical distribution of the refractive index

    International Nuclear Information System (INIS)

    Grammatin, A.P.; Degen, A.B.; Katranova, N.A.

    1995-01-01

    A system of differential equations convenient for numerical integration on a computer is proposed to calculate beam paths, elementary astigmatic beams, and the optical path in isotropic media with a cylindrical distribution of the refractive index. A method for selecting the step of this integration is proposed. This technique is implemented in the program package for computers of the VAX series meant for the computer-aided design of optical systems. 4 refs
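
    In such an inhomogeneous isotropic medium the ray path obeys d/ds(n dr/ds) = grad n, which is convenient to integrate numerically as a first-order system. A minimal sketch for a cylindrically symmetric parabolic index profile follows; the profile, its parameters and the launch conditions are illustrative, and this is not the VAX program described in the record:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Radial parabolic index profile, n(y) = n0*(1 - 0.5*(g*y)^2) (illustrative).
        n0, g = 1.5, 0.2

        def n(y):
            return n0 * (1.0 - 0.5 * (g * y) ** 2)

        def dn_dy(y):
            return -n0 * g * g * y

        def ray_ode(s, state):
            """Ray equation d/ds(n dr/ds) = grad n as a first-order system in the
            position r = (x, y) and the optical direction vector t = n * dr/ds."""
            x, y, tx, ty = state
            ni = n(y)
            return [tx / ni, ty / ni, 0.0, dn_dy(y)]

        # Launch a ray at y = 1 travelling parallel to the axis.
        y0 = 1.0
        sol = solve_ivp(ray_ode, (0.0, 60.0), [0.0, y0, n(y0), 0.0], max_step=0.05)
        y = sol.y[1]
        # In the paraxial limit the ray oscillates with spatial period ~ 2*pi/g.
        print(f"ray stays between y = {y.min():.3f} and y = {y.max():.3f}")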

  17. Computer programs for analysis of geophysical data

    Energy Technology Data Exchange (ETDEWEB)

    Rozhkov, M.; Nakanishi, K.

    1994-06-01

    This project is oriented toward the application of the mobile seismic array data analysis technique in seismic investigations of the Earth (the noise-array method). The technique falls into the class of emission tomography methods but, in contrast to classic tomography, 3-D images of the microseismic activity of the media are obtained by passive seismic antenna scanning of the half-space, rather than by solution of the inverse Radon's problem. It is reasonable to expect that areas of geothermal activity, active faults, areas of volcanic tremors and hydrocarbon deposits act as sources of intense internal microseismic activity or as effective sources for scattered (secondary) waves. The conventional approaches of seismic investigations of a geological medium include measurements of time-limited determinate signals from artificial or natural sources. However, the continuous seismic oscillations, like endogenous microseisms, coda and scattering waves, can give very important information about the structure of the Earth. The presence of microseismic sources or inhomogeneities within the Earth results in the appearance of coherent seismic components in a stochastic wave field recorded on the surface by a seismic array. By careful processing of seismic array data, these coherent components can be used to develop a 3-D model of the microseismic activity of the media or images of the noisy objects. Thus, in contrast to classic seismology where narrow windows are used to get the best time resolution of seismic signals, our model requires long record length for the best spatial resolution.

  18. Computer programs for analysis of geophysical data

    International Nuclear Information System (INIS)

    Rozhkov, M.; Nakanishi, K.

    1994-06-01

    This project is oriented toward the application of the mobile seismic array data analysis technique in seismic investigations of the Earth (the noise-array method). The technique falls into the class of emission tomography methods but, in contrast to classic tomography, 3-D images of the microseismic activity of the media are obtained by passive seismic antenna scanning of the half-space, rather than by solution of the inverse Radon's problem. It is reasonable to expect that areas of geothermal activity, active faults, areas of volcanic tremors and hydrocarbon deposits act as sources of intense internal microseismic activity or as effective sources for scattered (secondary) waves. The conventional approaches of seismic investigations of a geological medium include measurements of time-limited determinate signals from artificial or natural sources. However, the continuous seismic oscillations, like endogenous microseisms, coda and scattering waves, can give very important information about the structure of the Earth. The presence of microseismic sources or inhomogeneities within the Earth results in the appearance of coherent seismic components in a stochastic wave field recorded on the surface by a seismic array. By careful processing of seismic array data, these coherent components can be used to develop a 3-D model of the microseismic activity of the media or images of the noisy objects. Thus, in contrast to classic seismology where narrow windows are used to get the best time resolution of seismic signals, our model requires long record length for the best spatial resolution

  19. Introducing remarks upon the analysis of computer systems performance

    International Nuclear Information System (INIS)

    Baum, D.

    1980-05-01

    Some of the basic ideas of analytical techniques to study the behaviour of computer systems are presented. Single systems as well as networks of computers are viewed as stochastic dynamical systems which may be modelled by queueing networks. Therefore this report primarily serves as an introduction to probabilistic methods for qualitative analysis of systems. It is supplemented by an application example of Chandy's collapsing method. (orig.) [de

  20. Computer-aided visualization and analysis system for sequence evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  1. Strategic Analysis of Autodesk and the Move to Cloud Computing

    OpenAIRE

    Kewley, Kathleen

    2012-01-01

    This paper provides an analysis of the opportunity for Autodesk to move its core technology to a cloud delivery model. Cloud computing offers clients a number of advantages, such as lower costs for computer hardware, increased access to technology and greater flexibility. With the IT industry embracing this transition, software companies need to plan for future change and lead with innovative solutions. Autodesk is in a unique position to capitalize on this market shift, as it is the leader i...

  2. Computational Aspects of Dam Risk Analysis: Findings and Challenges

    Directory of Open Access Journals (Sweden)

    Ignacio Escuder-Bueno

    2016-09-01

    Full Text Available In recent years, risk analysis techniques have proved to be a useful tool to inform dam safety management. This paper summarizes the outcomes of three themes related to dam risk analysis discussed in the Benchmark Workshops organized by the International Commission on Large Dams Technical Committee on “Computational Aspects of Analysis and Design of Dams.” In the 2011 Benchmark Workshop, estimation of the probability of failure of a gravity dam for the sliding failure mode was discussed. Next, in 2013, the discussion focused on the computational challenges of the estimation of consequences in dam risk analysis. Finally, in 2015, the probability of sliding and overtopping in an embankment was analyzed. These Benchmark Workshops have allowed a complete review of numerical aspects for dam risk analysis, showing that risk analysis methods are a very useful tool to analyze the risk of dam systems, including downstream consequence assessments and the uncertainty of structural models.

  3. A SURVEY ON DOCUMENT CLUSTERING APPROACH FOR COMPUTER FORENSIC ANALYSIS

    OpenAIRE

    Monika Raghuvanshi*, Rahul Patel

    2016-01-01

    In a forensic analysis, large numbers of files are examined. Much of the information is in unstructured format, so it is quite a difficult task for computer forensics to perform such an analysis. That is why performing the forensic analysis of documents within a limited period of time requires a special approach such as document clustering. This paper reviews different document clustering algorithms and methodologies, for example K-means, K-medoid, single link, complete link and average link, in accordance...
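
    A minimal sketch of one of the clustering approaches the survey reviews (K-means on TF-IDF document vectors), using scikit-learn as an assumed toolchain; the documents are invented placeholders, not forensic data.

```python
# K-means clustering of documents represented as TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "invoice payment transfer account",      # hypothetical seized files
    "meeting schedule project deadline",
    "bank account wire transfer invoice",
    "project milestones meeting agenda",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(documents)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for doc, label in zip(documents, km.labels_):
    print(label, doc)
```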

  4. Isotropic nuclear graphites; the effect of neutron irradiation

    International Nuclear Information System (INIS)

    Lore, J.; Buscaillon, A.; Mottet, P.; Micaud, G.

    1977-01-01

    Several isotropic graphites have been manufactured using different forming processes and fillers such as needle coke, regular coke, or pitch coke. Their properties are described in this paper. Specimens of these products have been irradiated in the fast reactor Rapsodie between 400 and 1400 °C, at fluences up to 1.7×10²¹ n·cm⁻² PHI.FG. The results show an isotropic behavior under neutron irradiation, but the induced dimensional changes are higher than those of isotropic-coke graphites, although they are lower than those of conventional extruded graphites made with the same coke.

  5. Process for the preparation of isotropic petroleum coke

    International Nuclear Information System (INIS)

    Kegler, W.H.; Huyser, M.E.

    1975-01-01

    A description is given of a process for preparing isotropic coke from oil residue charge. It includes blowing air into the residue until it reaches a softening temperature of around 49 to 116 deg C, the deferred coking of the residue having undergone blowing at a temperature of around 247 to 640 deg C, at a pressure between around 1.38×10⁵ and 1.72×10⁶ Pa, and the recovery of isotropic coke with a thermal expansion coefficient ratio under approximately 1.5. The isotropic coke is used for preparing hexagonal graphite bars for nuclear reactor moderators [fr

  6. Sudden Relaminarization and Lifetimes in Forced Isotropic Turbulence.

    Science.gov (United States)

    Linkmann, Moritz F; Morozov, Alexander

    2015-09-25

    We demonstrate an unexpected connection between isotropic turbulence and wall-bounded shear flows. We perform direct numerical simulations of isotropic turbulence forced at large scales at moderate Reynolds numbers and observe sudden transitions from a chaotic dynamics to a spatially simple flow, analogous to the laminar state in wall bounded shear flows. We find that the survival probabilities of turbulence are exponential and the typical lifetimes increase superexponentially with the Reynolds number. Our results suggest that both isotropic turbulence and wall-bounded shear flows qualitatively share the same phase-space dynamics.

  7. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  8. PIXAN: the Lucas Heights PIXE analysis computer package

    International Nuclear Information System (INIS)

    Clayton, E.

    1986-11-01

    To fully utilise the multielement capability and short measurement time of PIXE it is desirable to have an automated computer evaluation of the measured spectra. Because of the complex nature of PIXE spectra, a critical step in the analysis is the data reduction, in which the areas of characteristic peaks in the spectrum are evaluated. In this package the computer program BATTY is presented for such an analysis. The second step is to determine element concentrations, knowing the characteristic peak areas in the spectrum. This requires a knowledge of the expected X-ray yield for that element in the sample. The computer program THICK provides that information for both thick and thin PIXE samples. Together, these programs form the package PIXAN used at Lucas Heights for PIXE analysis

  9. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.

  10. Conceptual design of pipe whip restraints using interactive computer analysis

    International Nuclear Information System (INIS)

    Rigamonti, G.; Dainora, J.

    1975-01-01

    Protection against pipe break effects necessitates a complex interaction between failure mode analysis, piping layout, and structural design. Many iterations are required to finalize structural designs and equipment arrangements. The magnitude of the pipe break loads transmitted by the pipe whip restraints to structural embedments precludes the application of conservative design margins. A simplified analytical formulation of the nonlinear dynamic problems associated with pipe whip has been developed and applied using interactive computer analysis techniques. In the dynamic analysis, the restraint and the associated portion of the piping system are modeled using the finite element lumped mass approach to properly reflect the dynamic characteristics of the piping/restraint system. The analysis is performed as a series of piecewise linear increments. Each of these linear increments is terminated by either formation of plastic conditions or closing/opening of gaps. The stiffness matrix is modified to reflect the changed stiffness characteristics of the system, and the analysis is restarted using the previous boundary conditions. The formation of yield hinges is related to the plastic moment of the section, and unloading paths are automatically considered. The conceptual design of the piping/restraint system is performed using interactive computer analysis. The application of the simplified analytical approach with interactive computer analysis results in an order of magnitude reduction in engineering time and computer cost. (Auth.)

  11. Computer aided plant engineering: An analysis and suggestions for computer use

    International Nuclear Information System (INIS)

    Leinemann, K.

    1979-09-01

    To obtain indications for, and boundary conditions on, computer use in plant engineering, an analysis of the engineering process was carried out. The structure of plant engineering is represented by a network of subtasks and subsets of data which are to be manipulated. The main tool for the integration of CAD subsystems in plant engineering should be a central database, which is described by its characteristic requirements and a possible simple conceptual schema. The main features of an interactive system for computer-aided plant engineering are briefly illustrated by two examples. The analysis leads to the conclusion that an interactive graphic system for the manipulation of net-like structured data, usable for various subtasks, should be the basis for computer-aided plant engineering. (orig.) [de

  12. Precession of elastic waves in vibrating isotropic spheres and transversely isotropic cylinders subjected to inertial rotation

    CSIR Research Space (South Africa)

    Joubert, S

    2006-05-01

    Full Text Available Only fragments of the full text survive extraction; they are presentation slides giving the equations of motion of a transversely isotropic cylinder in cylindrical coordinates (r, φ, z) under an inertial rotation Ω, relating the spatial derivatives of the stress components σ_rr, σ_rφ, σ_rz, σ_φφ, σ_φz and σ_zz to the accelerations of the displacement components u, v and w.

  13. Investigating the computer analysis of eddy current NDT data

    International Nuclear Information System (INIS)

    Brown, R.L.

    1979-01-01

    The objective of this activity was to investigate and develop techniques for computer analysis of eddy current nondestructive testing (NDT) data. A single frequency commercial eddy current tester and a precision mechanical scanner were interfaced with a PDP-11/34 computer to obtain and analyze eddy current data from samples of 316 stainless steel tubing containing known discontinuities. Among the data analysis techniques investigated were: correlation, Fast Fourier Transforms (FFT), clustering, and Adaptive Learning Networks (ALN). The results were considered encouraging. ALN, for example, correctly identified 88% of the defects and non-defects from a group of 153 signal indications

  14. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  15. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  16. Improvement of the efficiency of two-dimensional multigroup transport calculations assuming isotropic reflection with multilevel spatial discretisation

    International Nuclear Information System (INIS)

    Stankovski, Z.; Zmijarevic, I.

    1987-06-01

    This paper presents two approximations used in multigroup two-dimensional transport calculations in large, very homogeneous media: isotropic reflection together with recently proposed group-dependent spatial representations. These approximations are implemented as standard options in APOLLO 2 assembly transport code. Presented example calculations show that significant savings in computational costs are obtained while preserving the overall accuracy

  17. A new technique for generating the isotropic and linearly anisotropic components of elastic and discrete inelastic transfer matrices

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    1984-01-01

    A new technique for generating the isotropic and linearly anisotropic components of elastic and discrete inelastic transfer matrices is proposed. The technique allows certain angular integrals to be expressed in terms of functions that can be computed by recursion relations or series expansions, as an alternative to the use of numerical quadratures. (Author) [pt

  18. Isotropic-nematic transition of long, thin, hard spherocylinders confined in a quasi-two-dimensional planar geometry

    NARCIS (Netherlands)

    Lagomarsino, M.C.; Dogterom, M.; Dijkstra, Marjolein

    2003-01-01

    We present computer simulations of long, thin, hard spherocylinders in a narrow planar slit. We observe a transition from the isotropic to a nematic phase with quasi-long-range orientational order upon increasing the density. This phase transition is intrinsically two-dimensional and of

  19. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    KAUST Repository

    Zhang, Zhendong

    2017-12-17

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyze the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artifacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration (RTM) applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modeling engine performs better than an isotropic migration.

  20. Analysis of the computed tomography in the acute abdomen

    International Nuclear Information System (INIS)

    Hochhegger, Bruno; Moraes, Everton; Haygert, Carlos Jesus Pereira; Antunes, Paulo Sergio Pase; Gazzoni, Fernando; Lopes, Luis Felipe Dias

    2007-01-01

    Introduction: This study aims to test the capacity of computed tomography to assist in the diagnosis of and approach to the acute abdomen. Material and method: This is a longitudinal and prospective study in which patients with a diagnosis of acute abdomen were analyzed. A total of 105 cases of acute abdomen were obtained, and after application of the exclusion criteria 28 patients were included in the study. Results: Computed tomography changed the diagnostic hypothesis of the physicians in 50% of the cases (p 0.05); 78.57% of the patients had surgical indication before computed tomography and 67.86% after computed tomography (p = 0.0546). An accurate diagnosis by computed tomography, when compared to the anatomopathologic examination and the final diagnosis, was observed in 82.14% of the cases (p = 0.013). When the analysis was done dividing the patients into surgical and nonsurgical groups, an accuracy of 89.28% was obtained (p 0.0001). A difference of 7.2 days of hospitalization (p = 0.003) was obtained compared with the mean for acute abdomen managed without computed tomography. Conclusion: Computed tomography correlates well with the anatomopathologic findings and has great accuracy in the surgical indication; it increases the physicians' confidence, reduces the hospitalization time, reduces the number of surgeries and is cost-effective. (author)

  1. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  2. Convergence Analysis of a Class of Computational Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence approaches form a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, their convergence is difficult to analyze. In this paper, a computational model is built up for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, the variation rate and the progress rate, are then defined to indicate, respectively, the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  3. Analysis of Biosignals During Immersion in Computer Games.

    Science.gov (United States)

    Yeo, Mina; Lim, Seokbeen; Yoon, Gilwon

    2017-11-17

    The number of computer game users is increasing as computers and various IT devices connected to the Internet are commonplace across all age groups. In this research, in order to relate behavioral activity to its associated biosignals, biosignal changes before, during, and after computer games were measured and analyzed for 31 subjects. For this purpose, a device to measure electrocardiogram, photoplethysmogram and skin temperature was developed such that the effect of motion artifacts could be minimized. The device was made wearable for convenient measurement. The game selected for the experiments was League of Legends™. Analysis of the pulse transit time, heart rate variability and skin temperature showed increased sympathetic nerve activity during computer games, while the parasympathetic nerves became less active. Interestingly, the sympathetic predominance group showed less change in the heart rate variability as compared to the normal group. The results can be valuable for studying internet gaming disorder.
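
    A minimal sketch of two time-domain heart rate variability measures (SDNN and RMSSD) of the kind derived from the ECG in this study; the RR intervals below are made up, and the study's own processing pipeline is not reproduced.

```python
# Time-domain HRV measures computed from a short series of RR intervals.
import numpy as np

rr_ms = np.array([812, 798, 805, 790, 820, 815, 801, 808], dtype=float)  # RR intervals (ms)

sdnn = np.std(rr_ms, ddof=1)                    # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # beat-to-beat (short-term) variability
mean_hr = 60000.0 / rr_ms.mean()                # mean heart rate in beats per minute

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, mean HR = {mean_hr:.1f} bpm")
```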

  4. Weak convergence to isotropic complex [Formula: see text] random measure.

    Science.gov (United States)

    Wang, Jun; Li, Yunmeng; Sang, Liheng

    2017-01-01

    In this paper, we prove that an isotropic complex symmetric α-stable random measure ([Formula: see text]) can be approximated by a complex process constructed by integrals based on the Poisson process with random intensity.

  5. Metrical relationships in a standard triangle in an isotropic plane

    OpenAIRE

    Kolar-Šuper, R.; Kolar-Begović, Z.; Volenec, V.; Beban-Brkić, J.

    2005-01-01

    Each allowable triangle of an isotropic plane can be set in a standard position, in which it is possible to prove geometric properties analytically in a simplified and easier way by means of the algebraic theory developed in this paper.

  6. Efficient anisotropic wavefield extrapolation using effective isotropic models

    KAUST Repository

    Alkhalifah, Tariq Ali; Ma, X.; Waheed, Umair bin; Zuberi, Mohammad

    2013-01-01

    Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented

  7. Isotropic 2D quadrangle meshing with size and orientation control

    KAUST Repository

    Pellenard, Bertrand; Alliez, Pierre; Morvan, Jean-Marie

    2011-01-01

    We propose an approach for automatically generating isotropic 2D quadrangle meshes from arbitrary domains with a fine control over sizing and orientation of the elements. At the heart of our algorithm is an optimization procedure that, from a coarse

  8. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil; Stovas, Alexey; Alkhalifah, Tariq Ali

    2016-01-01

    in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much large at roughly 11 MB per event of RAW. The central collisions are more complex and...

  11. SALP-PC, a computer program for fault tree analysis on personal computers

    International Nuclear Information System (INIS)

    Contini, S.; Poucet, A.

    1987-01-01

    The paper presents the main characteristics of the SALP-PC computer code for fault tree analysis. The program has been developed in Fortran 77 on an Olivetti M24 personal computer (IBM compatible) in order to reach a high degree of portability. It is composed of six processors implementing the different phases of the analysis procedure. This particular structure presents some advantages like, for instance, the restart facility and the possibility to develop an event tree analysis code. The set of allowed logical operators, i.e. AND, OR, NOT, K/N, XOR, INH, together with the possibility to define boundary conditions, make the SALP-PC code a powerful tool for risk assessment. (orig.)
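
    A minimal sketch of how the listed gate types can combine basic-event probabilities under an independence assumption; this is illustrative only and not taken from the SALP-PC implementation.

```python
# Evaluating AND, OR and K/N fault-tree gates for independent basic events.
from itertools import combinations
from math import prod

def and_gate(ps):               # all inputs must fail
    return prod(ps)

def or_gate(ps):                # at least one input fails
    return 1.0 - prod(1.0 - p for p in ps)

def k_of_n_gate(k, ps):         # at least k of the n inputs fail
    n = len(ps)
    total = 0.0
    for m in range(k, n + 1):
        for failed in combinations(range(n), m):
            total += prod(ps[i] if i in failed else 1.0 - ps[i] for i in range(n))
    return total

basic = [1e-3, 2e-3, 5e-4]      # hypothetical basic-event probabilities
top = or_gate([and_gate(basic[:2]), k_of_n_gate(2, basic)])
print(f"top-event probability ~ {top:.3e}")
```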

  12. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  13. Recent developments of the NESSUS probabilistic structural analysis computer program

    Science.gov (United States)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
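
    A minimal sketch of an importance-sampling estimate of a small failure probability, the flavour of algorithm (mean value methods, adaptive importance sampling) that NESSUS offers; the limit state and sampling shift below are invented for illustration.

```python
# Importance sampling for a small failure probability in standard normal space.
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    """g < 0 means failure; toy limit state in two standard-normal variables."""
    return 4.0 - x[:, 0] - x[:, 1]

N = 20_000
shift = np.array([2.0, 2.0])                # sample near the expected design point
x = rng.standard_normal((N, 2)) + shift

# Likelihood ratio between the nominal N(0, I) density and the shifted density.
log_w = -x @ shift + 0.5 * shift @ shift
weights = np.exp(log_w)

pf = np.mean((limit_state(x) < 0.0) * weights)
print(f"importance-sampling failure probability ~ {pf:.2e}")
```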

  14. Sentiment analysis and ontology engineering an environment of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2016-01-01

    This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms and practice of Computational Intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology –oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation to emotion analysis, carrying out interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...

  15. Practical computer analysis of switch mode power supplies

    CERN Document Server

    Bennett, Johnny C

    2006-01-01

    When designing switch-mode power supplies (SMPSs), engineers need much more than simple "recipes" for analysis. Such plug-and-go instructions are not at all helpful for simulating larger and more complex circuits and systems. Offering more than merely a "cookbook," Practical Computer Analysis of Switch Mode Power Supplies provides a thorough understanding of the essential requirements for analyzing SMPS performance characteristics. It demonstrates the power of the circuit averaging technique when used with powerful computer circuit simulation programs. The book begins with SMPS fundamentals and the basics of circuit averaging models, reviewing most basic topologies and explaining all of their various modes of operation and control. The author then discusses the general analysis requirements of power supplies and how to develop the general types of SMPS models, demonstrating the use of SPICE for analysis. He examines the basic first-order analyses generally associated with SMPS performance along with more pra...

  16. The role of the computer in automated spectral analysis

    International Nuclear Information System (INIS)

    Rasmussen, S.E.

    This report describes how a computer can be an extremely valuable tool for routine analysis of spectra, which is a time-consuming process. A number of general-purpose algorithms that are available for the various phases of the analysis can be implemented, if these algorithms are designed to cope with all the variations that may occur. Since this is basically impossible, one must find a compromise between obscure errors and program complexity. This is usually possible with human interaction at critical points. In spectral analysis this is possible if the user scans the data on an interactive graphics terminal, makes the necessary changes and then returns control to the computer for completion of the analysis.
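
    A minimal sketch of the automated step of such an analysis: locating candidate peaks that the user could then confirm or reject interactively on a graphics terminal. The synthetic spectrum and the SciPy-based peak search are assumptions for illustration.

```python
# Automated candidate-peak search in a synthetic counting spectrum.
import numpy as np
from scipy.signal import find_peaks

channels = np.arange(1024)
spectrum = 50.0 * np.exp(-channels / 400.0)                      # smooth background
for centre, area in [(200, 4000.0), (350, 1500.0), (700, 800.0)]:  # synthetic peaks
    spectrum += area / (5.0 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((channels - centre) / 5.0) ** 2)
spectrum = np.random.default_rng(1).poisson(spectrum).astype(float)

# Require peaks to stand clearly above the local background before flagging them.
peaks, props = find_peaks(spectrum, prominence=30.0, width=3)
print("candidate peak channels:", peaks)
```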

  17. Integrating computer programs for engineering analysis and design

    Science.gov (United States)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

  18. Integration of rocket turbine design and analysis through computer graphics

    Science.gov (United States)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  19. MULGRES: a computer program for stepwise multiple regression analysis

    Science.gov (United States)

    A. Jeff Martin

    1971-01-01

    MULGRES is a computer program source deck that is designed for multiple regression analysis employing the technique of stepwise deletion in the search for the most significant variables. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.
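
    A minimal sketch of stepwise deletion: the predictor with the largest p-value is dropped, one at a time, until every remaining predictor is significant. It uses statsmodels as an assumed dependency; MULGRES itself is a Fortran source deck, and its exact selection criteria are not reproduced here.

```python
# Backward stepwise deletion on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(n)   # columns 1 and 3 irrelevant

cols = list(range(X.shape[1]))
while cols:
    model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    pvals = model.pvalues[1:]                 # skip the intercept
    worst = int(np.argmax(pvals))
    if pvals[worst] < 0.05:
        break
    cols.pop(worst)                           # delete the least significant predictor

print("retained predictors:", cols)
```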

  20. Conversation Analysis in Computer-Assisted Language Learning

    Science.gov (United States)

    González-Lloret, Marta

    2015-01-01

    The use of Conversation Analysis (CA) in the study of technology-mediated interactions is a recent methodological addition to qualitative research in the field of Computer-assisted Language Learning (CALL). The expansion of CA in Second Language Acquisition research, coupled with the need for qualitative techniques to explore how people interact…

  1. Computational content analysis of European Central Bank statements

    NARCIS (Netherlands)

    Milea, D.V.; Almeida, R.J.; Sharef, N.M.; Kaymak, U.; Frasincar, F.

    2012-01-01

    In this paper we present a framework for the computational content analysis of European Central Bank (ECB) statements. Based on this framework, we provide two approaches that can be used in a practical context. Both approaches use the content of ECB statements to predict upward and downward movement

  2. Componential analysis of kinship terminology a computational perspective

    CERN Document Server

    Pericliev, V

    2013-01-01

    This book presents the first computer program automating the task of componential analysis of kinship vocabularies. The book examines the program in relation to two basic problems: the commonly occurring inconsistency of componential models; and the huge number of alternative componential models.

  3. HAMOC: a computer program for fluid hammer analysis

    International Nuclear Information System (INIS)

    Johnson, H.G.

    1975-12-01

    A computer program has been developed for fluid hammer analysis of piping systems attached to a vessel which has undergone a known rapid pressure transient. The program is based on the characteristics method for solution of the partial differential equations of motion and continuity. Column separation logic is included for situations in which pressures fall to saturation values
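
    A minimal sketch of the characteristics method for a single pipe, in the spirit of (but not copied from) HAMOC: an upstream reservoir, instantaneous downstream valve closure, and no column separation logic. All parameters are invented.

```python
# Method-of-characteristics water hammer in a single pipe.
import numpy as np

g, a = 9.81, 1200.0            # gravity, wave speed (m/s)
L, D, f = 600.0, 0.5, 0.018    # pipe length, diameter, friction factor
H0, Q0 = 100.0, 0.4            # reservoir head (m), initial flow (m^3/s)

N = 20                          # reaches
dx = L / N
dt = dx / a                     # Courant number of 1
A = np.pi * D**2 / 4
B = a / (g * A)
R = f * dx / (2 * g * D * A**2)

H = H0 - np.arange(N + 1) * R * Q0 * abs(Q0)   # steady-state head with friction loss
Q = np.full(N + 1, Q0)

peak = H[-1]
for _ in range(int(2.0 / dt)):                 # simulate about 2 s
    Hn, Qn = H.copy(), Q.copy()
    CP = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])   # C+ from upstream nodes
    CM = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])       # C- from downstream nodes
    # interior nodes
    Hn[1:-1] = 0.5 * (CP[:-1] + CM[1:])
    Qn[1:-1] = (CP[:-1] - CM[1:]) / (2 * B)
    # upstream reservoir: head fixed, flow from the C- characteristic
    Hn[0] = H0
    Qn[0] = (H0 - CM[0]) / B
    # downstream valve closed instantly: zero flow, head from the C+ characteristic
    Qn[-1] = 0.0
    Hn[-1] = CP[-1]
    H, Q = Hn, Qn
    peak = max(peak, H[-1])

print(f"peak head at the closed valve: {peak:.1f} m "
      f"(Joukowsky estimate: {H0 + a * Q0 / (g * A):.1f} m)")
```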

  4. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    Science.gov (United States)

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…
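
    A minimal sketch of the multimodel idea: the coefficient of interest is re-estimated under every combination of candidate control variables, and the distribution of estimates is inspected for robustness. The data and model are invented; the authors' software is not reproduced.

```python
# Estimate the coefficient of interest across all control-variable subsets.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)                         # variable of interest
controls = rng.standard_normal((n, 4))             # candidate controls
y = 0.5 * x + 0.8 * controls[:, 0] + rng.standard_normal(n)

estimates = []
idx = range(controls.shape[1])
for k in range(len(idx) + 1):
    for subset in combinations(idx, k):
        Z = np.column_stack([np.ones(n), x] + [controls[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        estimates.append(beta[1])                  # coefficient on x in this specification

estimates = np.array(estimates)
print(f"{len(estimates)} specifications; coefficient on x ranges "
      f"from {estimates.min():.3f} to {estimates.max():.3f}")
```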

  5. Informational-computer system for the neutron spectra analysis

    International Nuclear Information System (INIS)

    Berzonis, M.A.; Bondars, H.Ya.; Lapenas, A.A.

    1979-01-01

    In this article, the basic principles of the build-up of an informational-computer system for neutron spectra analysis on the basis of measured reaction rates are given. The basic data files of the system and the software and hardware needed for its operation are described.

  6. A Computer Program for Short Circuit Analysis of Electric Power ...

    African Journals Online (AJOL)

    The Short Circuit Analysis Program (SCAP) is to be used to assess the composite effects of unbalanced and balanced faults on the overall reliability of an electric power system. The program uses the symmetrical components method to compute all phase and sequence quantities for any bus or branch of a given power network ...
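
    A minimal sketch of the symmetrical-components transform on which such a program relies: three unbalanced phase quantities are mapped to zero-, positive- and negative-sequence components. The voltage values are illustrative and not taken from SCAP.

```python
# Fortescue symmetrical-components transform for an unbalanced set of phase voltages.
import numpy as np

a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator
A = np.array([[1, 1, 1],
              [1, a**2, a],
              [1, a, a**2]])                     # phase = A @ sequence

V_abc = np.array([1.0, 0.92 * a**2, 1.05 * a])   # slightly unbalanced phase voltages (p.u.)
V_012 = np.linalg.solve(A, V_abc)                # sequence components [V0, V1, V2]

for name, v in zip(("zero", "positive", "negative"), V_012):
    print(f"{name}-sequence: |V| = {abs(v):.3f} p.u., angle = {np.degrees(np.angle(v)):.1f} deg")
```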

  7. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pasccci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  8. COALA--A Computational System for Interlanguage Analysis.

    Science.gov (United States)

    Pienemann, Manfred

    1992-01-01

    Describes a linguistic analysis computational system that responds to highly complex queries about morphosyntactic and semantic structures contained in large sets of language acquisition data by identifying, displaying, and analyzing sentences that meet the defined linguistic criteria. (30 references) (Author/CB)

  9. Computer system for environmental sample analysis and data storage and analysis

    International Nuclear Information System (INIS)

    Brauer, F.P.; Fager, J.E.

    1976-01-01

    A mini-computer based environmental sample analysis and data storage system has been developed. The system is used for analytical data acquisition, computation, storage of analytical results, and tabulation of selected or derived results for data analysis, interpretation and reporting. This paper discusses the structure, performance and applications of the system.

  10. Building a Prototype of LHC Analysis Oriented Computing Centers

    Science.gov (United States)

    Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.

    2012-12-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not an expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of user processes at the site and for an efficient support system in case of problems. We report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  11. Building a Prototype of LHC Analysis Oriented Computing Centers

    International Nuclear Information System (INIS)

    Bagliesi, G; Boccali, T; Della Ricca, G; Donvito, G; Paganoni, M

    2012-01-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not an expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of user processes at the site and for an efficient support system in case of problems. We report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  12. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  14. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  15. A Computational Analysis Model for Open-ended Cognitions

    Science.gov (United States)

    Morita, Junya; Miwa, Kazuhisa

    In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played a critical role as theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena. We call such a situation an "open-ended situation". In this study, MAC/FAC ("many are called, but few are chosen"), proposed by [Forbus 95], which models two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented with a cue story and retrieved cases that had been learned in their everyday life. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support MAC/FAC's theoretical assumption - different similarities are involved in the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognitions.

  16. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  17. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    International Nuclear Information System (INIS)

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear in between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)

  18. Available computer codes and data for radiation transport analysis

    International Nuclear Information System (INIS)

    Trubey, D.K.; Maskewitz, B.F.; Roussin, R.W.

    1975-01-01

    The Radiation Shielding Information Center (RSIC), sponsored and supported by the Energy Research and Development Administration (ERDA) and the Defense Nuclear Agency (DNA), is a technical institute serving the radiation transport and shielding community. It acquires, selects, stores, retrieves, evaluates, analyzes, synthesizes, and disseminates information on shielding and ionizing radiation transport. The major activities include: (1) operating a computer-based information system and answering inquiries on radiation analysis, and (2) collecting, checking out, packaging, and distributing large computer codes and evaluated and processed data libraries. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results.

  19. Computational analysis in support of the SSTO flowpath test

    Science.gov (United States)

    Duncan, Beverly S.; Trefny, Charles J.

    1994-10-01

    A synergistic approach of combining computational methods and experimental measurements is used in the analysis of a hypersonic inlet. There are four major focal points within this study which examine the boundary layer growth on a compression ramp upstream of the cowl lip of a scramjet inlet. Initially, the boundary layer growth on the NASP Concept Demonstrator Engine (CDE) is examined. The follow-up study determines the optimum diverter height required by the SSTO Flowpath test to best duplicate the CDE results. These flow field computations are then compared to the experimental measurements and the mass average Mach number is determined for this inlet.

  20. Automated procedure for performing computer security risk analysis

    International Nuclear Information System (INIS)

    Smith, S.T.; Lim, J.J.

    1984-05-01

    Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure. 12 references, 7 figures

  1. Computation system for nuclear reactor core analysis. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.

    1977-04-01

    This report documents a system which contains computer codes as modules developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code, treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, as are aspects common to the computational modules; the modules themselves are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals.

  2. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  3. Application of computer aided tolerance analysis in product design

    International Nuclear Information System (INIS)

    Du Hua

    2009-01-01

    This paper introduces the shortcomings of the traditional tolerance design method and the strengths of the computer aided tolerancing (CAT) method, compares the strengths and weaknesses of the three tolerance analysis methods (Worst Case Analysis, Statistical Analysis and Monte-Carlo Simulation Analysis), and describes the basic procedure and relevant details for CAT. As the study objects, the reactor pressure vessel, the core barrel, the hold-down barrel and the support plate are used to build the tolerance simulation model, based on their 3D design models. The tolerance simulation analysis is then conducted and the scheme of the tolerance distribution is optimized based on the analysis results. (authors)

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  5. Scalar properties of transversely isotropic tuff from images of orthogonal cross sections

    International Nuclear Information System (INIS)

    Berge, P.A.; Berryman, J.G.; Blair, S.C.; Pena, C.

    1997-01-01

    Image processing methods have been used very effectively to estimate physical properties of isotropic porous earth materials such as sandstones. Anisotropic materials can also be analyzed in order to estimate their physical properties, but additional care and a larger number of well-chosen images of cross sections are required to obtain correct results. Although low-symmetry anisotropic media present difficulties for two-dimensional image processing methods, geologic materials are often transversely isotropic. Scalar properties of porous materials such as porosity and specific surface area can be determined with only minor changes in the analysis when the medium is transversely isotropic rather than isotropic. For example, in a rock that is transversely isotropic due to thin layers or beds, the overall porosity may be obtained by analyzing images of cross sections taken orthogonal to the bedding planes, whereas cross sections lying within the bedding planes will determine only the local porosity of the bed itself. It is known for translationally invariant anisotropic media that the overall specific surface area can be obtained from radial averages of the two-point correlation function in the full three-dimensional volume. Layered materials are not translationally invariant in the direction of the layering, but we show nevertheless how averages of cross sections may be used to obtain the specific surface area for a transversely isotropic rock. We report values of specific surface area obtained for thin sections of Topopah Spring Tuff from Yucca Mountain, Nevada. This formation is being evaluated as a potential host rock for geologic disposal of nuclear waste. Although the present work has made use of thin sections of tuff for the images, the same methods of analysis could also be used to simplify quantitative analysis of three-dimensional volumes of pore structure data obtained by means of x-ray microtomography or other methods, using only a few representative cross sections.
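    A minimal sketch of the image-based porosity estimate, assuming segmented (binary) cross-sections in which pore pixels are 1 and solid pixels are 0; the random arrays stand in for real cross-section images.

```python
import numpy as np

def porosity(binary_image):
    """Fraction of pore (value 1) pixels in a segmented cross-section."""
    return np.count_nonzero(binary_image) / binary_image.size

# Two hypothetical orthogonal cross-sections (random placeholders for real images).
rng = np.random.default_rng(0)
xz_section = (rng.random((256, 256)) < 0.12).astype(np.uint8)
yz_section = (rng.random((256, 256)) < 0.10).astype(np.uint8)

# For a transversely isotropic (layered) sample, sections cut across the bedding
# sample all layers; averaging their estimates gives the overall porosity.
phi = np.mean([porosity(xz_section), porosity(yz_section)])
print(f"estimated overall porosity: {phi:.3f}")
```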

  6. Computational image analysis of Suspension Plasma Sprayed YSZ coatings

    Directory of Open Access Journals (Sweden)

    Michalak Monika

    2017-01-01

    Full Text Available The paper presents computational studies of microstructure- and topography-related features of suspension plasma sprayed (SPS) coatings of yttria-stabilized zirconia (YSZ). The study mainly covers the porosity assessment, performed with ImageJ software analysis. The influence of the boundary conditions, defined by (i) circularity and (ii) size limits, on the computed values of porosity is also investigated. Additionally, a digital topography evaluation is performed: a confocal laser scanning microscope (CLSM) and a scanning electron microscope (SEM) operating in Shape from Shading (SFS) mode measure the surface roughness of the deposited coatings. The computed values of porosity and roughness are related to the variables of the spraying process, which influence the morphology of the coatings and determine their possible fields of application.
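    The effect of the circularity and size limits mentioned above can be mimicked outside ImageJ as well; the sketch below (which assumes SciPy is available, and whose thresholds are arbitrary placeholders) labels pore regions and lets only those passing both filters contribute to the porosity.

```python
import numpy as np
from scipy import ndimage

def filtered_porosity(binary_pores, min_area=5, min_circularity=0.2):
    """Porosity from labelled pore regions that pass area and circularity
    (4*pi*A / P**2) limits; the thresholds and the crude pixel-count
    perimeter estimate are illustrative only."""
    labels, n = ndimage.label(binary_pores)
    kept = 0
    for i in range(1, n + 1):
        region = labels == i
        area = int(region.sum())
        # crude perimeter estimate: boundary pixels removed by one erosion
        perimeter = area - int(ndimage.binary_erosion(region).sum())
        circularity = 4.0 * np.pi * area / max(perimeter, 1) ** 2
        if area >= min_area and circularity >= min_circularity:
            kept += area
    return kept / binary_pores.size

rng = np.random.default_rng(1)
image = rng.random((128, 128)) < 0.08          # placeholder segmented image
print(f"filtered porosity: {filtered_porosity(image):.4f}")
```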

  7. A comparative analysis of soft computing techniques for gene prediction.

    Science.gov (United States)

    Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand

    2013-07-01

    The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important, and this is currently the focus of many research efforts. Besides its scientific interest to the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described, followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally, some limitations of current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Effects of molecular elongation on liquid crystalline phase behaviour: isotropic-nematic transition

    Science.gov (United States)

    Singh, Ram Chandra; Ram, Jokhan

    2003-08-01

    We present a density-functional approach to study isotropic-nematic transitions and calculate the values of the freezing parameters of the Gay-Berne liquid crystal model, concentrating on the effects of varying the molecular elongation x0. For this, we have solved the Percus-Yevick integral equation theory to calculate the pair-correlation functions of a fluid whose molecules interact via a Gay-Berne pair potential. These results have been used as input to the density-functional theory to locate the isotropic-nematic transition and to calculate the freezing parameters for a range of length-to-width parameters 3.0 ⩽ x0 ⩽ 4.0 at reduced temperatures 0.95 and 1.25. We observe that as x0 is increased, the isotropic-nematic transition moves to lower density at a given temperature. We find that density-functional theory is well suited to studying freezing transitions in such fluids. We have also compared our results with computer simulation results wherever they are available.
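    Locating an isotropic-nematic transition in simulation data is commonly done with the nematic order parameter computed from the Q-tensor; the sketch below shows that standard calculation, which is not spelled out in the abstract but is the usual companion to such studies.

```python
import numpy as np

def nematic_order(axes):
    """Largest eigenvalue of the Q-tensor, Q = <(3 u u^T - I) / 2>, for molecular
    axis vectors of shape (N, 3).  S ~ 0 in the isotropic phase, S -> 1 for
    perfect nematic order."""
    u = axes / np.linalg.norm(axes, axis=1, keepdims=True)
    q = 1.5 * np.einsum("ni,nj->ij", u, u) / len(u) - 0.5 * np.eye(3)
    return float(np.linalg.eigvalsh(q)[-1])

rng = np.random.default_rng(0)
random_axes = rng.normal(size=(2000, 3))                   # isotropic orientations
aligned_axes = random_axes + np.array([0.0, 0.0, 5.0])     # strongly biased towards z
print(f"S (isotropic sample) = {nematic_order(random_axes):.2f}")
print(f"S (aligned sample)   = {nematic_order(aligned_axes):.2f}")
```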

  9. Calculation of intensity factors using weight function theory for a transversely isotropic piezoelectric material

    International Nuclear Information System (INIS)

    Son, In Ho; An, Deuk Man

    2012-01-01

    In fracture mechanics, the weight function can be used to calculate stress intensity factors. In this paper, a two-dimensional electroelastic analysis is performed on a transversely isotropic piezoelectric material with an open crack. A plane-strain formulation of the piezoelectric problem is solved within the Lekhnitskii formalism. Weight function theory is extended to piezoelectric materials, and the stress intensity factors and the electric displacement intensity factor are calculated using this theory.
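    At its core, the weight-function route to a stress intensity factor is a quadrature of the crack-face loading against the weight function. The sketch below uses the classical weight function for a centre crack in an infinite isotropic elastic plate (not the piezoelectric weight functions derived in the paper) and checks the uniform-pressure result K_I = σ√(πa).

```python
import numpy as np

def k_center_crack(stress, a, n=20000):
    """K_I at the tip x = +a of a centre crack (half-length a) in an infinite
    isotropic plate under a symmetric crack-face pressure stress(x), using the
    classical weight function:
        K_I = (1/sqrt(pi*a)) * integral_{-a}^{a} stress(x) * sqrt((a+x)/(a-x)) dx
    Midpoint rule keeps the quadrature away from the integrable endpoint singularity."""
    edges = np.linspace(-a, a, n + 1)
    xm = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    w = np.sqrt((a + xm) / (a - xm))
    return np.sum(stress(xm) * w) * dx / np.sqrt(np.pi * a)

a, sigma = 0.01, 100e6   # 10 mm half-crack, 100 MPa uniform crack-face pressure
k_num = k_center_crack(lambda x: np.full_like(x, sigma), a)
k_ref = sigma * np.sqrt(np.pi * a)          # closed-form result for uniform pressure
print(f"K_I numeric  ~ {k_num / 1e6:.2f} MPa*sqrt(m)")
print(f"K_I analytic = {k_ref / 1e6:.2f} MPa*sqrt(m)")
```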

  10. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows the sensitivity indices of each scalar model input to be estimated, while the 'dispersion model' allows the total sensitivity index of the functional model inputs to be derived. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
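    A minimal sketch of a variance-based (Sobol) first-order index for scalar inputs, using a pick-freeze Monte Carlo estimator on a toy function; the joint GLM/GAM metamodelling step and the treatment of functional inputs described in the paper are not reproduced here.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index of each
    of the d independent U(0,1) inputs of f.  Illustrative only."""
    rng = np.random.default_rng(seed)
    a, b = rng.random((n, d)), rng.random((n, d))
    ya = f(a)
    variance = ya.var()
    indices = []
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]                     # freeze input i at the 'a' sample
        indices.append(np.mean(ya * (f(ab) - f(b))) / variance)
    return np.array(indices)

def model(x):
    """Toy code: y = x0 + 0.5*x1, x2 inert, so S ~ (0.8, 0.2, 0.0)."""
    return x[:, 0] + 0.5 * x[:, 1]

print(first_order_sobol(model, d=3))
```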

  11. THE ISOTROPIC DIFFUSION SOURCE APPROXIMATION FOR SUPERNOVA NEUTRINO TRANSPORT

    International Nuclear Information System (INIS)

    Liebendoerfer, M.; Whitehouse, S. C.; Fischer, T.

    2009-01-01

    Astrophysical observations originate from matter that interacts with radiation or transported particles. We develop a pragmatic approximation in order to enable multidimensional simulations with basic spectral radiative transfer when the available computational resources are not sufficient to solve the complete Boltzmann transport equation. The distribution function of the transported particles is decomposed into a trapped particle component and a streaming particle component. Their separate evolution equations are coupled by a source term that converts trapped particles into streaming particles. We determine this source term by requiring the correct diffusion limit for the evolution of the trapped particle component. For a smooth transition to the free streaming regime, this 'diffusion source' is limited by the matter emissivity. The resulting streaming particle emission rates are integrated over space to obtain the streaming particle flux. Finally, a geometric estimate of the flux factor is used to convert the particle flux to the streaming particle density, which enters the evaluation of streaming particle-matter interactions. The efficiency of the scheme results from the freedom to use different approximations for each particle component. In supernovae, for example, reactions with trapped particles on fast timescales establish equilibria that reduce the number of primitive variables required to evolve the trapped particle component. On the other hand, a stationary-state approximation considerably facilitates the treatment of the streaming particle component. Different approximations may apply in applications to stellar atmospheres, star formation, or cosmological radiative transfer. We compare the isotropic diffusion source approximation with Boltzmann neutrino transport of electron flavor neutrinos in spherically symmetric supernova models and find good agreement. An extension of the scheme to the multidimensional case is also discussed.
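    The central closure described above, a trapped-to-streaming conversion rate that follows the diffusion estimate but is capped by the matter emissivity, can be caricatured as follows; the profiles and the form of the diffusion term are placeholders, not the paper's actual expressions.

```python
import numpy as np

def diffusion_source(diffusion_term, emissivity):
    """Schematic trapped-to-streaming conversion rate: follow the diffusion
    estimate, but keep it non-negative and never above the local emissivity."""
    return np.clip(diffusion_term, 0.0, emissivity)

# Placeholder radial profiles on a coarse grid (arbitrary units).
r = np.linspace(0.0, 1.0, 11)
diffusion_estimate = 2.0 * np.exp(-5.0 * r)   # large in the opaque interior
emissivity = 0.5 * np.ones_like(r)            # cap set by the matter emissivity
print(diffusion_source(diffusion_estimate, emissivity))
```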

  12. A Research Roadmap for Computation-Based Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  13. A Research Roadmap for Computation-Based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-01-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  14. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.

  15. Ubiquitous computing in sports: A review and analysis.

    Science.gov (United States)

    Baca, Arnold; Dabnichki, Peter; Heller, Mario; Kornfeind, Philipp

    2009-10-01

    Ubiquitous (pervasive) computing is a term for the synergetic use of sensing, communication and computing. Pervasive use of computing has increased rapidly over the current decade, and this development has propagated into applied sport science and everyday life. The work presents a survey of recent developments in sport and leisure with emphasis on technology and computational techniques. A detailed analysis of new technological developments is performed. Sensors for position and motion detection, as well as sensors for equipment and physiological monitoring, are discussed. Aspects of novel trends in communication technologies and data processing are outlined. Computational advancements have started a new trend: the development of smart and intelligent systems for a wide range of applications, from model-based posture recognition to context-awareness algorithms for nutrition monitoring. Examples particular to coaching and training are discussed. Selected tools for monitoring rule compliance and for automatic decision-making are outlined. Finally, applications in leisure and entertainment are presented, from systems supporting physical activity to systems providing motivation. It is concluded that the emphasis will in future shift from technologies to intelligent systems that allow for enhanced social interaction, as efforts need to be made to improve user-friendliness and the standardisation of measurement and transmission protocols.

  16. Gas analysis by computer-controlled microwave rotational spectrometry

    International Nuclear Information System (INIS)

    Hrubesh, L.W.

    1978-01-01

    Microwave rotational spectrometry has inherently high resolution and is thus nearly ideal for qualitative gas mixture analysis. Quantitative gas analysis is also possible by a simplified method which exploits the ease with which molecular rotational transitions can be saturated at low microwave power densities. This article describes a computer-controlled microwave spectrometer which is used to demonstrate, for the first time, a totally automated analysis of a complex gas mixture. Examples are shown for a complete qualitative and quantitative analysis, in which a search of over 100 different compounds is made in less than 7 min, with sensitivity for most compounds in the 10 to 100 ppm range. This technique is expected to find increased use in view of the reduced complexity and increased reliability of microwave spectrometers and because of new energy-related applications for the analysis of mixtures of small molecules.

  17. Thermohydraulic analysis of nuclear power plant accidents by computer codes

    International Nuclear Information System (INIS)

    Petelin, S.; Stritar, A.; Istenic, R.; Gregoric, M.; Jerele, A.; Mavko, B.

    1982-01-01

    The RELAP4/MOD6, BRUCH-D-06, CONTEMPT-LT-28, RELAP5/MOD1 and COBRA-4-1 codes were successfully implemented on the CYBER 172 computer in Ljubljana. Input models of NPP Krsko for the first three codes were prepared. Because of the high computing cost, only one analysis of a double-ended guillotine break of the cold leg of NPP Krsko has been performed with the RELAP4 code. The BRUCH code is easier and cheaper to use, and several analyses have been performed with it. A sensitivity study was performed with CONTEMPT-LT-28 for a double-ended pump suction break. These codes are intended to be used as a basis for independent safety analyses. (author)

  18. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness during 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes, the 9 mm opening (~ 60%) performing acceptably compared with the 6 mm (~ 40%) and 12 mm (~ 80%) openings. It is concluded that computational analysis methods are very useful in studying the performance of thermal energy storage (TES).
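    One common way to quantify thermocline thickness from a simulated axial temperature profile is sketched below; the 10%-90% cut-offs are a frequently used convention and not necessarily the definition adopted in the paper, and the profile itself is a placeholder.

```python
import numpy as np

def thermocline_thickness(z, temperature, lo=0.1, hi=0.9):
    """Height of the region where the dimensionless temperature
    theta = (T - T_min) / (T_max - T_min) lies between `lo` and `hi`."""
    theta = (temperature - temperature.min()) / (temperature.max() - temperature.min())
    inside = (theta > lo) & (theta < hi)
    return float(z[inside].max() - z[inside].min()) if inside.any() else 0.0

# Placeholder axial profile: warm water above cold with a smooth transition.
z = np.linspace(0.0, 4.0, 401)                     # height in tank diameters
temp = 10.0 + 20.0 / (1.0 + np.exp(-(z - 2.0) / 0.15))
print(f"thermocline thickness ~ {thermocline_thickness(z, temp):.2f} (same units as z)")
```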

  19. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness during 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes, the 9 mm opening (∼ 60%) performing acceptably compared with the 6 mm (∼ 40%) and 12 mm (∼ 80%) openings. It is concluded that computational analysis methods are very useful in studying the performance of thermal energy storage (TES).

  20. [Computers in biomedical research: I. Analysis of bioelectrical signals].

    Science.gov (United States)

    Vivaldi, E A; Maldonado, P

    2001-08-01

    A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad-hoc software for analysis and diagnosis. Data acquisition is based on sampling a signal at a given rate and amplitude resolution. The automation of signal processing involves syntactic aspects (data transduction, conditioning and reduction) and semantic aspects (feature extraction to describe and characterize the signal, and diagnostic classification). The analytical approach that underlies computer programming allows apparently complex tasks to be resolved successfully. Two basic principles involved are the definition of simple fundamental functions that are then iterated, and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
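    The two principles mentioned, simple functions applied iteratively and modular subdivision, are easy to see in a toy detector for a sampled signal; the sketch below flags upward threshold crossings, a primitive of the kind from which polysomnogram or ECG event detectors are composed (sampling rate, signal and threshold are placeholders).

```python
import numpy as np

def upward_crossings(samples, threshold):
    """Indices where the sampled signal crosses `threshold` going upward:
    a simple primitive applied iteratively across the whole record."""
    above = samples >= threshold
    return np.flatnonzero(~above[:-1] & above[1:]) + 1

# Placeholder "biosignal": 2 s of a noisy 3 Hz oscillation sampled at 250 Hz.
fs = 250
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
events = upward_crossings(signal, 0.8)
print(f"{events.size} events detected at t = {np.round(events / fs, 3)} s")
```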

  1. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    Science.gov (United States)

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.

    2017-04-01

    Computational singular perturbation (CSP) is a useful method for the analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  2. Man-machine interfaces analysis system based on computer simulation

    International Nuclear Information System (INIS)

    Chen Xiaoming; Gao Zuying; Zhou Zhiwei; Zhao Bingquan

    2004-01-01

    The paper describes a software assessment system, Dynamic Interaction Analysis Support (DIAS), based on computer simulation technology for the man-machine interfaces (MMI) of a control room. It employs a computer to simulate the procedures performed by operators on the man-machine interfaces of a control room, provides a quantified assessment, and at the same time analyses operator error rates by means of human error rate prediction techniques. Problems in the placement of man-machine interfaces in a control room and in the arrangement of instruments can be detected from the simulation results. The DIAS system can provide good technical support for the design and improvement of the man-machine interfaces of the main control room of a nuclear power plant.

  3. Computer based approach to fatigue analysis and design

    International Nuclear Information System (INIS)

    Comstock, T.R.; Bernard, T.; Nieb, J.

    1979-01-01

    An approach is presented which uses a mini-computer-based system for data acquisition, analysis and graphic display relative to fatigue life estimation and design. Procedures are developed for identifying and eliminating damaging events due to the overall duty cycle, forced vibration and structural dynamic characteristics. Two case histories, weld failures in heavy vehicles and low-cycle fan blade failures, are discussed to illustrate the overall approach. (orig.)

  4. A Computable OLG Model for Gender and Growth Policy Analysis

    OpenAIRE

    Pierre-Richard Agénor

    2012-01-01

    This paper develops a computable Overlapping Generations (OLG) model for gender and growth policy analysis. The model accounts for human and physical capital accumulation (both public and private), intra- and inter-generational health persistence, fertility choices, and women's time allocation between market work, child rearing, and home production. Bargaining between spouses and gender bias, in the form of discrimination in the work place and mothers' time allocation between daughters and so...

  5. Computers in activation analysis and gamma-ray spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, B. S.; D'Agostino, M. D.; Yule, H. P. [eds.]

    1979-01-01

    Seventy-three papers are included under the following session headings: analytical and mathematical methods for data analysis; software systems for gamma-ray and x-ray spectrometry; gamma-ray spectra treatment, peak evaluation; least squares; IAEA intercomparison of methods for processing spectra; computer and calculator utilization in spectrometer systems; and applications in safeguards, fuel scanning, and environmental monitoring. Separate abstracts were prepared for 72 of those papers. (DLC)

  6. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. For problems in which time-dependent behavior is significant, it is desirable to incorporate a procedure which is workable with the mechanical model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperature for long periods of time. Special considerations are crucial if the analysis is to be extended to the large strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  7. Lagrangian velocity correlations in homogeneous isotropic turbulence

    International Nuclear Information System (INIS)

    Gotoh, T.; Rogallo, R.S.; Herring, J.R.; Kraichnan, R.H.

    1993-01-01

    The Lagrangian velocity autocorrelation and the time correlations for individual wave-number bands are computed by direct numerical simulation (DNS) using the passive vector method (PVM), and the accuracy of the method is studied. It is found that the PVM is accurate when K_max/k_d ≥ 2, where K_max is the maximum wave number carried in the simulation and k_d is the Kolmogorov wave number. The Eulerian and Lagrangian time correlations for various wave-number bands are compared. At moderate to high wave number the Eulerian time correlation decays faster than the Lagrangian, and the effect of sweep on the former is observed. The time scale of the Eulerian correlation is found to be (k U_0)^(-1), while that of the Lagrangian is [∫_0^k p^2 E(p) dp]^(-1/2). The Lagrangian velocity autocorrelation in a frozen turbulent field is computed using the DIA, ALHDIA, and LRA theories and is compared with DNS measurements. The Markovianized Lagrangian renormalized approximation (MLRA) is compared with the DNS, and good agreement is found for one-time quantities in decaying turbulence at low Reynolds numbers and for the Lagrangian velocity autocorrelation in stationary turbulence at moderate Reynolds number. The effect of non-Gaussianity on the Lagrangian correlation predicted by the theories is also discussed
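    The two time-scale estimates quoted above can be evaluated directly from an energy spectrum; the sketch below uses a simple k^(-5/3) model spectrum as a stand-in for the DNS spectrum, so the numbers are illustrative only.

```python
import numpy as np

k_grid = np.linspace(1.0, 100.0, 1000)
dk = k_grid[1] - k_grid[0]
spectrum = k_grid ** (-5.0 / 3.0)     # toy inertial-range spectrum, arbitrary units
u0 = 1.0                              # characteristic sweeping velocity, arbitrary units

def eulerian_timescale(k):
    """Sweeping estimate tau_E(k) = 1 / (k * U0)."""
    return 1.0 / (k * u0)

def lagrangian_timescale(k):
    """Strain estimate tau_L(k) = [ integral_0^k p^2 E(p) dp ]^(-1/2)."""
    mask = k_grid <= k
    return (np.sum(k_grid[mask] ** 2 * spectrum[mask]) * dk) ** -0.5

for k in (10.0, 50.0):
    print(f"k = {k:5.1f}:  tau_E = {eulerian_timescale(k):.4f}   tau_L = {lagrangian_timescale(k):.4f}")
```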

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  11. Trend Analysis of the Brazilian Scientific Production in Computer Science

    Directory of Open Access Journals (Sweden)

    TRUCOLO, C. C.

    2014-12-01

    Full Text Available The growth in the volume and diversity of scientific information brings new challenges for understanding the reasons, the processes and the real essence that propel this growth. This information can be used as the basis for developing strategies and public policies to improve education and innovation services. Trend analysis is one of the steps in this direction. In this work, a trend analysis of the Brazilian scientific production of graduate programs in the computer science area is performed to identify the main subjects being studied by these programs, both overall and individually.

  12. A visual interface to computer programs for linkage analysis.

    Science.gov (United States)

    Chapman, C J

    1990-06-01

    This paper describes a visual approach to the input of information about human families into computer data bases, making use of the GEM graphic interface on the Atari ST. Similar approaches could be used on the Apple Macintosh or on the IBM PC AT (to which it has been transferred). For occasional users of pedigree analysis programs, this approach has considerable advantages in ease of use and accessibility. An example of such use might be the analysis of risk in families with Huntington disease using linked RFLPs. However, graphic interfaces do make much greater demands on the programmers of these systems.

  13. Advances in Computational Stability Analysis of Composite Aerospace Structures

    International Nuclear Information System (INIS)

    Degenhardt, R.; Araujo, F. C. de

    2010-01-01

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances in the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, the main results of the completed EU project COCOMAT are given; it investigated the exploitation of reserves in primary fibre composite fuselage structures through accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  14. Structural mode significance using INCA. [Interactive Controls Analysis computer program

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.

  15. Current topics in pure and computational complex analysis

    CERN Document Server

    Dorff, Michael; Lahiri, Indrajit

    2014-01-01

    The book contains 13 articles, some of which are survey articles and others research papers. Written by eminent mathematicians, these articles were presented at the International Workshop on Complex Analysis and Its Applications held at Walchand College of Engineering, Sangli. All the contributing authors are actively engaged in research fields related to the topic of the book. The workshop offered a comprehensive exposition of the recent developments in geometric function theory, planar harmonic mappings, entire and meromorphic functions and their applications, both theoretical and computational. The recent developments in complex analysis and its applications play a crucial role in research in many disciplines.

  16. ASAS: Computational code for Analysis and Simulation of Atomic Spectra

    Directory of Open Access Journals (Sweden)

    Jhonatha R. dos Santos

    2017-01-01

    Full Text Available The laser isotopic separation process is based on the selective photoionization principle and, because of this, it is necessary to know the absorption spectrum of the desired atom. Computational resources have become indispensable for the planning of experiments and the analysis of the acquired data. The ASAS (Analysis and Simulation of Atomic Spectra) software presented here is a helpful tool to be used in studies involving atomic spectroscopy. The input for the simulations is user-friendly and essentially requires a database containing the energy levels and spectral lines of the atoms to be studied.

  17. Critical Data Analysis Precedes Soft Computing Of Medical Data

    DEFF Research Database (Denmark)

    Keyserlingk, Diedrich Graf von; Jantzen, Jan; Berks, G.

    2000-01-01

    extracted. The factors had different relationships (loadings) to the symptoms. Although the factors were gained only by computations, they seemed to express some modular features of the language disturbances. This phenomenon, that factors represent superior aspects of data, is well known in factor analysis...... the deficits in communication. Sets of symptoms corresponding to the traditional symptoms in Broca and Wernicke aphasia may be represented in the factors, but the factor itself does not represent a syndrome. It is assumed that this kind of data analysis shows a new approach to the understanding of language...

  18. Establishment of computer code system for nuclear reactor design - analysis

    International Nuclear Information System (INIS)

    Subki, I.R.; Santoso, B.; Syaukat, A.; Lee, S.M.

    1996-01-01

    The establishment of a computer code system for nuclear reactor design analysis is described in this paper. It is an effort to provide the capability to run various codes, from nuclear data to reactor design, and to promote the capability for nuclear reactor design analysis, particularly from the neutronics and safety points of view. It is also an effort to enhance the coordination of nuclear code application and development existing in various research centres in Indonesia. Very promising results have been obtained with the help of IAEA technical assistance. (author). 6 refs, 1 fig., 1 tab

  19. Analysis and computation of microstructure in finite plasticity

    CERN Document Server

    Hackl, Klaus

    2015-01-01

    This book addresses the need for a fundamental understanding of the physical origin, the mathematical behavior, and the numerical treatment of models which include microstructure. Leading scientists present their efforts involving mathematical analysis, numerical analysis, computational mechanics, material modelling and experiment. The mathematical analyses are based on methods from the calculus of variations, while in the numerical implementation global optimization algorithms play a central role. The modeling covers all length scales, from the atomic structure up to macroscopic samples. The development of the models was guided by experiments on single crystals and polycrystals, and the results are checked against experimental data.

  20. GOLIA-RK, Structure Stress for Isotropic Materials with Creep and Temperature Fields

    International Nuclear Information System (INIS)

    Donea, J.; Giuliani, S.

    1976-01-01

    1 - Nature of the physical problem solved: Stress analysis of complex structures in presence of creep, dimensional changes and thermal field. Plane stress, plane strain, generalized plane strain and axisymmetric problems can be solved. The material is assumed to be either isotropic or transversely isotropic. Any laws of material behaviour can easily be incorporated by the user (see subroutines WIGNER and CLAW). 2 - Method of solution: Finite element method using triangular elements with linear local fields. The equations for the displacements are solved by Choleski's method. An algorithm is incorporated to calculate automatically the successive time steps in a creep problem. 3 - Restrictions on the complexity of the problem: Maximum number of elements is 700. Maximum number of nodal points is 400. The indexes of two adjacent nodes are not permitted to differ by more than 19

  1. Computer assessment of interview data using latent semantic analysis.

    Science.gov (United States)

    Dam, Gregory; Kaufmann, Stefan

    2008-02-01

    Clinical interviews are a powerful method for assessing students' knowledge and conceptual development. However, the analysis of the resulting data is time-consuming and can create a "bottleneck" in large-scale studies. This article demonstrates the utility of computational methods in supporting such an analysis. Thirty-four 7th-grade student explanations of the causes of Earth's seasons were assessed using latent semantic analysis (LSA). Analyses were performed on transcriptions of student responses during interviews administered prior to (n = 21) and after (n = 13) receiving earth science instruction. An instrument that uses LSA technology was developed to identify misconceptions and assess conceptual change in students' thinking. Its accuracy, as determined by comparing its classifications to the independent coding performed by four human raters, reached 90%. Techniques for adapting LSA technology to support the analysis of interview data, as well as some limitations, are discussed.
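    A minimal sketch of an LSA-style scoring pipeline, assuming scikit-learn is available: build a term-document matrix, reduce it with a truncated SVD, then compare a student response to a reference explanation by cosine similarity in the reduced space. The toy corpus, the reference sentence and the dimensionality are placeholders, not the study's materials.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [                                   # tiny placeholder training corpus
    "the tilt of the earth axis causes the seasons",
    "the earth axis stays tilted as the earth orbits the sun",
    "summer days are longer because of the axial tilt",
    "some think seasons happen because the earth is closer to the sun in summer",
]
reference = "seasons are caused by the tilt of the earth axis not by distance to the sun"
student = "the earth is nearer the sun in summer so it gets hotter"

vectorizer = CountVectorizer().fit(corpus + [reference, student])
svd = TruncatedSVD(n_components=2, random_state=0).fit(vectorizer.transform(corpus))

ref_vec = svd.transform(vectorizer.transform([reference]))
stu_vec = svd.transform(vectorizer.transform([student]))
print(f"LSA similarity to the reference explanation: {cosine_similarity(ref_vec, stu_vec)[0, 0]:.2f}")
```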

  2. New Mexico district work-effort analysis computer program

    Science.gov (United States)

    Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.

    1972-01-01

    The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation

  3. Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids

    DEFF Research Database (Denmark)

    Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt

    2014-01-01

    We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic....... The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements...

  4. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  5. 3D geometrically isotropic metamaterial for telecom wavelengths

    DEFF Research Database (Denmark)

    Malureanu, Radu; Andryieuski, Andrei; Lavrinenko, Andrei

    2009-01-01

    of the unit cell is not infinitely small, certain geometrical constraints have to be fulfilled to obtain an isotropic response of the material [3]. These conditions and the metal behaviour close to the plasma frequency increase the design complexity. Our unit cell is composed of two main parts. The first part...... is obtained in a certain bandwidth. The proposed unit cell has the cubic point group of symmetry and being repeatedly placed in space can effectively reveal isotropic optical properties. We use the CST commercial software to characterise the “cube-in-cage” structure. Reflection and transmission spectra...

  6. Anomalies, effective action and Hawking temperatures of a Schwarzschild black hole in the isotropic coordinates

    International Nuclear Information System (INIS)

    Wu Shuangqing; Peng Junjin; Zhao Zhanyue

    2008-01-01

    Motivated by the universality of Hawking radiation and that of the anomaly cancellation technique as well as the effective action method, we investigate the Hawking radiation of a Schwarzschild black hole in the isotropic coordinates via the cancellation of gravitational anomaly. After performing a dimensional reduction from the four-dimensional isotropic Schwarzschild metric, we show that this reduction procedure will, in general, result in two classes of two-dimensional effective metrics: the conformally equivalent and the inequivalent ones. For the physically equivalent class, the two-dimensional effective metric displays the distinct feature that the determinant is not only unequal to unity, √(-g) ≠ 1, but also vanishes at the horizon, the latter of which possibly invalidates the anomaly analysis there. Nevertheless, in this paper we adopt the effective action method to prove that the consistent energy-momentum tensor T^r_t is divergent on the horizon but √(-g) T^r_t remains finite there. Meanwhile, through an explicit calculation we show that the covariant energy-momentum tensor T̃^r_t equals zero at the horizon. Therefore the validity of the covariant regularity condition, which demands that T̃^r_t = 0 at the horizon, has been justified, indicating that the gravitational anomaly analysis can be safely extrapolated to the case where the metric determinant vanishes at the horizon. It is then demonstrated that for the physically equivalent reduced metric, both methods can give the correct Hawking temperature of the isotropic Schwarzschild black hole, while for the inequivalent one with the determinant √(-g) = 1 it can only give half of the correct temperature. We further exclude the latter undesired result by taking into account the general covariance of the energy-momentum tensor under the isotropic coordinate transformation

  7. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message-passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests that an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or that the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
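
    A minimal sketch of the kind of cost model described above is given below: a per-level V-cycle time estimate built from a message startup cost, a per-word transfer cost, and a per-point compute cost. All parameter values and the 2D halo-exchange assumption are illustrative placeholders, not figures taken from the paper.

```python
# Minimal sketch of a V-cycle cost model of the kind described above.
# The startup cost (alpha), per-word cost (beta) and per-point compute
# cost (gamma) are illustrative placeholders, not values from the paper.

def vcycle_time(n, p, levels, alpha=100e-6, beta=0.1e-6, gamma=0.05e-6):
    """Estimate one V-cycle's time for an n-point 2D grid on p processors."""
    total = 0.0
    for lvl in range(levels):
        points = n / (4 ** lvl)            # 2D coarsening: 4x fewer points per level
        local = points / p                 # points held by each processor
        halo = 4 * max(local, 1) ** 0.5    # words exchanged with neighbours (2D halo)
        total += gamma * local             # relaxation / residual work
        total += alpha + beta * halo       # one message exchange per level
    return total

if __name__ == "__main__":
    for p in (256, 1024, 4096, 16384):
        print(p, f"{vcycle_time(1e6, p, levels=10):.4e} s")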

  8. Evolution of the bonding mechanism of ZnO under isotropic compression: A first-principles study

    International Nuclear Information System (INIS)

    Zhou, G.C.; Sun, L.Z.; Wang, J.B.; Zhong, X.L.; Zhou, Y.C.

    2008-01-01

    The electronic structure and the bonding mechanism of ZnO under isotropic pressure have been studied by using the full-potential linear augmented plane wave (FP-LAPW) method within density-functional theory (DFT) based on the LDA+U exchange-correlation (EXC) potential. We used the Atoms in Molecules (AIM) method to analyze the change of the charge transfer and the bonding strength under isotropic pressure. The results of the theoretical analysis show that the charge transfer between the Zn and O atomic basins increases nearly linearly with increasing pressure. The charge density along the Zn-O bond increases under high pressure. The bonding strength and the ionicity of the Zn-O bond also increase with increasing pressure. The linear evolution of the bonding mechanism under isotropic pressure is shown clearly in the present paper.

  9. Analysis of sponge zones for computational fluid mechanics

    International Nuclear Information System (INIS)

    Bodony, Daniel J.

    2006-01-01

    The use of sponge regions, or sponge zones, which add the forcing term -σ(q - q_ref) to the right-hand-side of the governing equations in computational fluid mechanics as an ad hoc boundary treatment is widespread. They are used to absorb and minimize reflections from computational boundaries and as forcing sponges to introduce prescribed disturbances into a calculation. A less common usage is as a means of extending a calculation from a smaller domain into a larger one, such as in computing the far-field sound generated in a localized region. By analogy to the penalty method of finite elements, the method is placed on a solid foundation, complete with estimates of convergence. The analysis generalizes the work of Israeli and Orszag [M. Israeli, S.A. Orszag, Approximation of radiation boundary conditions, J. Comp. Phys. 41 (1981) 115-135] and confirms their findings when applied as a special case to one-dimensional wave propagation in an absorbing sponge. It is found that the rate of convergence of the actual solution to the target solution, with an appropriate norm, is inversely proportional to the sponge strength. A detailed analysis for acoustic wave propagation in one-dimension verifies the convergence rate given by the general theory. The exponential point-wise convergence derived by Israeli and Orszag in the high-frequency limit is recovered and found to hold over all frequencies. A weakly nonlinear analysis of the method when applied to Burgers' equation shows similar convergence properties. Three numerical examples are given to confirm the analysis: the acoustic extension of a two-dimensional time-harmonic point source, the acoustic extension of a three-dimensional initial-value problem of a sound pulse, and the introduction of unstable eigenmodes from linear stability theory into a two-dimensional shear layer
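
    A minimal sketch of the sponge forcing term is shown below for 1D linear advection: the term -σ(x)(q - q_ref) is active only in a layer near the outflow boundary and damps the solution toward q_ref. The grid size, damping profile, and reference state are illustrative choices, not the configurations analysed in the paper.

```python
import numpy as np

# Minimal sketch: 1D linear advection q_t + c q_x = -sigma(x) (q - q_ref),
# with the sponge term active only near the right boundary. The grid,
# sigma profile and reference state q_ref = 0 are illustrative choices.

nx, c, L = 400, 1.0, 10.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx / c

sigma = np.zeros(nx)
sponge = x > 0.8 * L                       # sponge occupies the last 20% of the domain
sigma[sponge] = 5.0 * ((x[sponge] - 0.8 * L) / (0.2 * L)) ** 2   # smooth ramp
q_ref = 0.0

q = np.exp(-((x - 2.0) ** 2))              # initial Gaussian pulse
for _ in range(2000):
    dqdx = np.empty_like(q)
    dqdx[1:] = (q[1:] - q[:-1]) / dx       # first-order upwind for c > 0
    dqdx[0] = 0.0                          # hold the inflow value fixed
    q = q + dt * (-c * dqdx - sigma * (q - q_ref))

print("max |q| after the pulse enters the sponge:", float(np.abs(q).max()))
```

    Increasing the sponge strength makes the interior solution converge more slowly toward the target state, in line with the inverse-proportionality result quoted above.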

  10. Computer image analysis of etched tracks from ionizing radiation

    Science.gov (United States)

    Blanford, George E.

    1994-01-01

    I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.

  11. Computational analysis of the SRS Phase III salt disposition alternatives

    International Nuclear Information System (INIS)

    Dimenna, R.A.

    2000-01-01

    In late 1997, the In-Tank Precipitation (ITP) facility was shut down and an evaluation of alternative methods to process the liquid high-level waste stored in the Savannah River Site High-Level Waste storage tanks was begun. The objective was to determine whether another process might avoid the operational difficulties encountered with ITP at a lower cost than modifying the existing process. A structured approach was used to evaluate the proposed alternatives on a common basis and to identify the best one. Results from the computational analysis were a key part of the input used to select a primary and a secondary salt disposition alternative. This paper describes the process by which the computational needs were identified, addressed, and accomplished with a limited staff under stringent schedule constraints

  12. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems

  13. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  14. Computer code for general analysis of radon risks (GARR)

    International Nuclear Information System (INIS)

    Ginevan, M.

    1984-09-01

    This document presents a computer model for the general analysis of radon risks that allows the user to specify a large number of possible models with a small number of simple commands. The model is written in a version of BASIC which conforms closely to the American National Standards Institute (ANSI) definition for minimal BASIC and thus is readily modified for use on a wide variety of computers and, in particular, microcomputers. Model capabilities include generation of single-year life tables from 5-year abridged data, calculation of multiple-decrement life tables for lung cancer for the general population, smokers, and nonsmokers, and a cohort lung cancer risk calculation that allows specification of the level and duration of radon exposure, the form of the risk model, and the specific population assumed at risk. 36 references, 8 figures, 7 tables
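
    A minimal sketch of the kind of cohort risk calculation described above is given below: a relative-risk model applied year by year to a decremented life table to accumulate lifetime excess lung cancer risk from a specified radon exposure. All hazard rates, the excess-relative-risk coefficient, and the exposure history are hypothetical placeholders, not values or model forms taken from GARR.

```python
# Minimal sketch of a cohort excess-risk calculation of the kind GARR performs:
# a relative-risk model applied year by year to a life table. All numbers
# (baseline hazards, ERR per WLM, exposure history) are hypothetical.

ages = range(0, 86)
all_cause = {a: 0.001 + 0.0001 * a for a in ages}      # toy all-cause mortality hazard
lung_base = {a: 1e-5 + 2e-6 * a for a in ages}         # toy baseline lung cancer hazard
err_per_wlm = 0.005                                    # hypothetical excess relative risk
exposure = {a: 0.5 if 20 <= a < 40 else 0.0 for a in ages}   # WLM per year, ages 20-39

survival, cum_wlm, excess = 1.0, 0.0, 0.0
for a in ages:
    cum_wlm += exposure[a]
    rr = 1.0 + err_per_wlm * cum_wlm
    excess += survival * lung_base[a] * (rr - 1.0)     # excess lifetime lung cancer risk
    survival *= 1.0 - all_cause[a] - lung_base[a] * rr # decrement the cohort

print(f"lifetime excess lung cancer risk: {excess:.5f}")
```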

  15. The Radiological Safety Analysis Computer Program (RSAC-5) user's manual

    International Nuclear Information System (INIS)

    Wenzel, D.R.

    1994-02-01

    The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods

  16. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering.  Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  17. Numerical analysis of boosting scheme for scalable NMR quantum computation

    International Nuclear Information System (INIS)

    SaiToh, Akira; Kitagawa, Masahiro

    2005-01-01

    Among initialization schemes for ensemble quantum computation beginning at thermal equilibrium, the scheme proposed by Schulman and Vazirani [in Proceedings of the 31st ACM Symposium on Theory of Computing (STOC'99) (ACM Press, New York, 1999), pp. 322-329] is known for its simple quantum circuit to redistribute the biases (polarizations) of qubits and its small time complexity. However, our numerical simulation shows that the number of qubits initialized by the scheme is rather smaller than expected from the von Neumann entropy because of an increase in the sum of the binary entropies of the individual qubits, which indicates a growth in the total classical correlation. This result, namely that there is such a significant growth in the total binary entropy, disagrees with their analysis
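
    The entropy bound referred to above can be stated with a short sketch: for n qubits at polarization ε, the number of (near-)pure qubits any bias-redistribution scheme can produce is limited by roughly n(1 - H2((1 + ε)/2)), where H2 is the binary entropy. The code below only evaluates this standard bound; it is not a simulation of the Schulman-Vazirani circuit itself.

```python
from math import log2

# Sketch of the entropy bound the abstract compares against: for n qubits
# with polarization (bias) eps, the number of qubits that can in principle
# be purified is limited by n * (1 - H2((1 + eps) / 2)).

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def max_initializable(n, eps):
    return n * (1.0 - h2((1.0 + eps) / 2.0))

for eps in (0.01, 0.1, 0.3):
    print(eps, f"{max_initializable(1000, eps):.1f} of 1000 qubits")
```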

  18. Computer aided analysis, simulation and optimisation of thermal sterilisation processes.

    Science.gov (United States)

    Narayanan, C M; Banerjee, Arindam

    2013-04-01

    Although thermal sterilisation is a widely employed industrial process, little work is reported in the available literature, including patents, on the mathematical analysis and simulation of these processes. In the present work, software packages have been developed for the computer aided optimum design of thermal sterilisation processes. Systems involving steam sparging, jacketed heating/cooling, helical coils submerged in agitated vessels and systems that employ external heat exchangers (double pipe, shell and tube and plate exchangers) have been considered. Both batch and continuous operations have been analysed and simulated. The dependence of the del factor on system/operating parameters such as the mass or volume of substrate to be sterilised per batch, speed of agitation, helix diameter, substrate to steam ratio, rate of substrate circulation through the heat exchanger and that through the holding tube has been analysed separately for each mode of sterilisation. Axial dispersion in the holding tube has also been adequately accounted for through an appropriately defined axial dispersion coefficient. The effect of exchanger characteristics/specifications on the system performance has also been analysed. The multiparameter computer aided design (CAD) software packages prepared are thus highly versatile in nature and permit the most appropriate choice of operating variables for the processes selected. The computed results have been compared with extensive data collected from a number of industries (distilleries, food processing and pharmaceutical industries) and pilot plants, and satisfactory agreement has been observed between the two, thereby confirming the accuracy of the CAD software developed. No simplifying assumptions have been made during the analysis, and the design of the associated heating/cooling equipment has been performed using the most up-to-date design correlations and computer software.

  19. Analysis of pellet coating uniformity using a computer scanner.

    Science.gov (United States)

    Šibanc, Rok; Luštrik, Matevž; Dreu, Rok

    2017-11-30

    A fast method for pellet coating uniformity analysis using a commercial computer scanner was developed. The analysis of the individual particle coating thicknesses was based on a transparent orange colored coating layer deposited on white pellet cores. Besides the analysis of the coating thickness, information on pellet size and shape was obtained as well. Particle size dependent coating thickness and particle size independent coating variability were calculated by combining the information on coating thickness and pellet size. Decoupling the sources of coating thickness variation is unique to the presented method. For each coating experiment around 10000 pellets were analyzed, giving results with a high statistical confidence. The proposed method was employed for the performance evaluation of a classical Wurster and a swirl enhanced Wurster coater operated at different gap settings and air flow rates.

  20. Quantitative analysis by computer controlled X-ray fluorescence spectrometer

    International Nuclear Information System (INIS)

    Balasubramanian, T.V.; Angelo, P.C.

    1981-01-01

    X-ray fluorescence spectroscopy has become a widely accepted method in the metallurgical field for the analysis of both minor and major elements. As encountered in many other analytical techniques, the problem of the matrix effect, generally known as interelemental effects, has to be dealt with effectively in order to make the analysis accurate. There are several methods by which the effects of the matrix on the analyte are minimised or corrected for, and mathematical correction is one of them. In this method the characteristic secondary X-ray intensities are measured from standard samples, and correction coefficients for interelemental effects, if any, are evaluated by mathematical calculations. This paper describes attempts to evaluate the correction coefficients for interelemental effects by multiple linear regression programmes, using a computer, for the quantitative analysis of stainless steel and a nickel base cast alloy. The quantitative results obtained using this method for a standard stainless steel sample are compared with the certified values. (author)
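
    A minimal sketch of the regression step described above is given below, assuming a simple linear influence-coefficient model in which each analyte concentration is regressed on its own measured intensity and the intensities of interfering elements. The standards and coefficients are synthetic, not the alloy data analysed in the paper.

```python
import numpy as np

# Minimal sketch of evaluating interelement correction coefficients by
# multiple linear regression, as described above. The model assumed here is
# c = b0 + b1*I_analyte + b2*I_interf1 + b3*I_interf2; standards are synthetic.

rng = np.random.default_rng(0)
n_standards = 12
intensities = rng.uniform(0.1, 1.0, size=(n_standards, 3))   # measured I for 3 elements
true_coeffs = np.array([0.02, 1.5, -0.2, 0.1])                # b0..b3 (synthetic "truth")
design = np.column_stack([np.ones(n_standards), intensities])
concentrations = design @ true_coeffs + rng.normal(0, 0.005, n_standards)

# Least-squares estimate of the correction coefficients from the standards
coeffs, *_ = np.linalg.lstsq(design, concentrations, rcond=None)
print("fitted coefficients:", np.round(coeffs, 3))

# Apply the fitted coefficients to an "unknown" sample's measured intensities
unknown = np.array([1.0, 0.55, 0.30, 0.12])   # [1, I_analyte, I_interf1, I_interf2]
print("predicted concentration:", float(unknown @ coeffs))
```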

  1. Overview of adaptive finite element analysis in computational geodynamics

    Science.gov (United States)

    May, D. A.; Schellart, W. P.; Moresi, L.

    2013-10-01

    The use of numerical models to develop insight and intuition into the dynamics of the Earth over geological time scales is a firmly established practice in the geodynamics community. As our depth of understanding grows, and hand-in-hand with improvements in analytical techniques and higher resolution remote sensing of the physical structure and state of the Earth, there is a continual need to develop more efficient, accurate and reliable numerical techniques. This is necessary to ensure that we can meet the challenge of generating robust conclusions, interpretations and predictions from improved observations. In adaptive numerical methods, the desire is generally to maximise the quality of the numerical solution for a given amount of computational effort. Neither of these terms has a unique, universal definition, but typically there is a trade-off between the number of unknowns we can calculate to obtain a more accurate representation of the Earth, and the resources (time and computational memory) required to compute them. In the engineering community, this topic has been extensively examined using the adaptive finite element (AFE) method. Recently, the applicability of this technique to geodynamic processes has started to be explored. In this review we report on the current status and usage of spatially adaptive finite element analysis in the field of geodynamics. The objective of this review is to provide a brief introduction to the area of spatially adaptive finite element analysis, including a summary of different techniques to define spatial adaptation and of different approaches to guide the adaptive process in order to control the discretisation error inherent within the numerical solution. An overview of the current state of the art in adaptive modelling in geodynamics is provided, together with a discussion pertaining to the issues related to using adaptive analysis techniques and perspectives for future research in this area. Additionally, we also provide a

  2. SHEAT for PC. A computer code for probabilistic seismic hazard analysis for personal computer, user's manual

    International Nuclear Information System (INIS)

    Yamada, Hiroyuki; Tsutsumi, Hideaki; Ebisawa, Katsumi; Suzuki, Masahide

    2002-03-01

    The SHEAT code developed at the Japan Atomic Energy Research Institute performs probabilistic seismic hazard analysis, one of the tasks needed for the seismic Probabilistic Safety Assessment (PSA) of a nuclear power plant. SHEAT was first developed as a large-computer version. In 2001, a personal computer version was provided to improve the operating efficiency and generality of the code. Earthquake hazard analysis, display, and print functions can all be performed through the Graphical User Interface. With the SHEAT for PC code, the seismic hazard, defined as the annual exceedance frequency of earthquake ground motions at various levels of intensity at a given site, is calculated in the following two steps, as with the large-computer version. The first is the modeling of earthquake generation around a site: future earthquake generation (locations, magnitudes and frequencies of postulated earthquakes) is modeled based on historical earthquake records, active fault data and expert judgment. The second is the calculation of the probabilistic seismic hazard at the site: a ground motion is calculated for each postulated earthquake using an attenuation model that takes its standard deviation into account, and the seismic hazard at the site is then obtained by summing the frequencies of ground motions over all the earthquakes. This document is the user's manual of the SHEAT for PC code. It includes: (1) an outline of the code, covering the overall concept, logical process, code structure, data files used and special characteristics of the code, (2) the functions of the subprograms and the analytical models in them, (3) guidance on input and output data, (4) a sample run result, and (5) an operational manual. (author)
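
    A minimal sketch of the hazard-summation step described above is given below: the annual frequency of exceeding a ground-motion level is the sum, over all postulated earthquakes, of each event's rate times the probability that its ground motion exceeds that level, with lognormal scatter about an attenuation relation. The source list, attenuation form, and sigma are toy placeholders, not the SHEAT models.

```python
from math import log, sqrt, erf

# Minimal sketch of the hazard summation described above: the annual frequency
# of exceeding a ground-motion level a* is sum_i rate_i * P(A > a* | M_i, R_i),
# with lognormal scatter on a toy attenuation relation (placeholders throughout).

sources = [                      # (annual rate, magnitude, distance in km)
    (0.05, 6.0, 30.0),
    (0.01, 7.0, 60.0),
    (0.002, 7.5, 20.0),
]
sigma_ln = 0.6                   # lognormal standard deviation of the attenuation model

def median_pga(m, r):
    """Toy attenuation relation (g); a placeholder, not the SHEAT model."""
    return min(2.0, 10 ** (-1.5 + 0.3 * m - 1.2 * (log(r + 10.0) / log(10.0))))

def prob_exceed(a_star, m, r):
    z = (log(a_star) - log(median_pga(m, r))) / sigma_ln
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))     # 1 - standard normal CDF

for a_star in (0.05, 0.1, 0.2, 0.4):
    freq = sum(rate * prob_exceed(a_star, m, r) for rate, m, r in sources)
    print(f"annual exceedance frequency at {a_star:.2f} g: {freq:.3e}")
```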

  3. Quasi-Rayleigh waves in transversely isotropic half-space with inclined axis of symmetry

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.; Savina, L.S.

    2003-09-01

    A method for determining the characteristics of the quasi-Rayleigh (qR) wave in a transversely isotropic homogeneous half-space with an inclined axis of symmetry is outlined. The solution is obtained as a superposition of qP, qSV and qSH waves, and the surface wave velocity is determined from the boundary conditions at the free surface and at infinity, as in the case of the Rayleigh wave in an isotropic half-space. Though the theory is simple enough, the numerical procedure for calculating the surface wave velocity presents some difficulties. The difficulty arises from the need to calculate complex roots of a non-linear equation, which in turn contains functions determined as roots of nonlinear equations with complex coefficients. Numerical analysis shows that roots of the equation corresponding to the boundary conditions do not exist over the whole domain of azimuths and inclinations of the symmetry axis. The domain of existence of the qR wave depends on the ratio of the elastic parameters: for some strongly anisotropic models the wave cannot exist at all. For some angles of inclination, qR wave velocities deviate from those calculated on the basis of the perturbation method valid for weak anisotropy, though they show the same tendency of variation with azimuth. The phase of the qR wave varies with depth, unlike the Rayleigh wave in an isotropic half-space. Also unlike the Rayleigh wave in an isotropic half-space, the qR wave has three components - vertical, radial and transverse. Particle motion in the horizontal plane is elliptical. The direction of the major axis of the ellipse coincides with the direction of propagation only at azimuths of 0 deg. (180 deg.) and 90 deg. (270 deg.). (author)

  4. Computational Fatigue Life Analysis of Carbon Fiber Laminate

    Science.gov (United States)

    Shastry, Shrimukhi G.; Chandrashekara, C. V., Dr.

    2018-02-01

    In the present scenario, many traditional materials are being replaced by composite materials for their light weight and high strength. Industries such as the automotive and aerospace industries use composite materials for most of their components. Replacing components subjected to static or impact loads is less challenging than replacing components subjected to dynamic loading. Replacing components with composite materials demands many stages of parametric study. One such parametric study is the fatigue analysis of the composite material. This paper focuses on the fatigue life analysis of a composite material by using computational techniques. A composite plate with a hole at its center is considered for the study. The analysis is carried out on the (0°/90°/90°/90°/90°)s laminate sequence and the (45°/-45°)2s laminate sequence by using a computer script. The life cycles for the two lay-up sequences are compared with each other. It is observed that, for the same material and geometry of the component, cross-ply laminates show a better fatigue life than angle-ply laminates.

  5. Lagrangian statistics of particle pairs in homogeneous isotropic turbulence

    NARCIS (Netherlands)

    Biferale, L.; Boffeta, G.; Celani, A.; Devenish, B.J.; Lanotte, A.; Toschi, F.

    2005-01-01

    We present a detailed investigation of the particle pair separation process in homogeneous isotropic turbulence. We use data from direct numerical simulations up to R_λ ≈ 280, following the evolution of about two million passive tracers advected by the flow over a time span of about three decades. We
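
    For context, pair-dispersion studies of this kind are usually compared against the classical Richardson-Obukhov prediction for the inertial-range growth of the mean-square pair separation; it is quoted here as background and is not a claim taken from the truncated abstract:

```latex
\langle |\mathbf{r}(t)|^{2} \rangle \;=\; g\, \varepsilon\, t^{3}
```

    where ε is the mean energy dissipation rate and g is the Richardson constant.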

  6. Reconstruction of atomic effective potentials from isotropic scattering factors

    International Nuclear Information System (INIS)

    Romera, E.; Angulo, J.C.; Torres, J.J.

    2002-01-01

    We present a method for the approximate determination of one-electron effective potentials of many-electron systems from a finite number of values of the isotropic scattering factor. The method is based on the minimum cross-entropy technique. An application to some neutral ground-state atomic systems has been done within a Hartree-Fock framework

  7. Geometry of the isotropic oscillator driven by the conformal mode

    Energy Technology Data Exchange (ETDEWEB)

    Galajinsky, Anton [Tomsk Polytechnic University, School of Physics, Tomsk (Russian Federation)

    2018-01-15

    Geometrization of a Lagrangian conservative system typically amounts to reformulating its equations of motion as the geodesic equations in a properly chosen curved spacetime. The conventional methods include the Jacobi metric and the Eisenhart lift. In this work, a modification of the Eisenhart lift is proposed which describes the isotropic oscillator in arbitrary dimension driven by the one-dimensional conformal mode. (orig.)

  8. Seeing is believing : communication performance under isotropic teleconferencing conditions

    NARCIS (Netherlands)

    Werkhoven, P.J.; Schraagen, J.M.C.; Punte, P.A.J.

    2001-01-01

    The visual component of conversational media such as videoconferencing systems communicates important non-verbal information such as facial expressions, gestures, posture and gaze. Unlike the other cues, selective gaze depends critically on the configuration of cameras and monitors. Under isotropic

  9. A simple mechanical model for the isotropic harmonic oscillator

    International Nuclear Information System (INIS)

    Nita, Gelu M

    2010-01-01

    A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels.

  10. Homogenization and isotropization of an inflationary cosmological model

    International Nuclear Information System (INIS)

    Barrow, J.D.; Groen, Oe.; Oslo Univ.

    1986-01-01

    A member of the class of anisotropic and inhomogeneous cosmological models constructed by Wainwright and Goode is investigated. It is shown to describe a universe containing a scalar field which is minimally coupled to gravitation and a positive cosmological constant. It is shown that this cosmological model evolves exponentially rapidly towards the homogeneous and isotropic de Sitter universe model. (orig.)

  11. Isotropic gates in large gamma detector arrays versus angular distributions

    International Nuclear Information System (INIS)

    Iacob, V.E.; Duchene, G.

    1997-01-01

    The quality of the angular distribution information extracted from high-fold gamma-gamma coincidence events is analyzed. It is shown that a correct quasi-isotropic gate setting, available at the modern large gamma-ray detector arrays, essentially preserves the quality of the angular information. (orig.)

  12. Higher gradient expansion for linear isotropic peridynamic materials

    Czech Academy of Sciences Publication Activity Database

    Šilhavý, Miroslav

    2017-01-01

    Roč. 22, č. 6 (2017), s. 1483-1493 ISSN 1081-2865 Institutional support: RVO:67985840 Keywords : peridynamics * higher-grade theories * non-local elastic-material model * representation theorems for isotropic functions Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 2.953, year: 2016 http://journals.sagepub.com/doi/10.1177/1081286516637235

  14. Transformation optics, isotropic chiral media and non-Riemannian geometry

    International Nuclear Information System (INIS)

    Horsley, S A R

    2011-01-01

    The geometrical interpretation of electromagnetism in transparent media (transformation optics) is extended to include chiral media that are isotropic but inhomogeneous. It was found that such media may be described through introducing the non-Riemannian geometrical property of torsion into the Maxwell equations, and it is shown how such an interpretation may be applied to the design of optical devices.

  15. Isotropic cosmic expansion and the Rubin-Ford effect

    International Nuclear Information System (INIS)

    Fall, S.M.; Jones, B.J.T.

    1976-01-01

    It is shown that the Rubin-Ford data (Astrophys. J. Lett. 183:L111 (1973)), often taken as evidence for large scale anisotropic cosmic expansion, probably only reflect the inhomogeneous distribution of galaxies in the region of the sample. The data presented are consistent with isotropic expansion, an unperturbed galaxy velocity field, and hence a low density Universe. (author)

  16. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users; we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  18. Computer content analysis of schizophrenic speech: a preliminary report.

    Science.gov (United States)

    Tucker, G J; Rosenberg, S D

    1975-06-01

    Computer analysis significantly differentiated the thematic content of the free speech of 10 schizophrenic patients from that of 10 nonschizophrenic patients and from the content of transcripts of dream material from 10 normal subjects. Schizophrenic patients used the thematic categories in factor 1 (the "schizophrenic factor") 3 times more frequently than the nonschizophrenics and 10 times more frequently than the normal subjects (p < .01). In general, the language content of the schizophrenic patients mirrored an almost agitated attempt to locate oneself in time and space and to defend against internal discomfort and confusion. The authors discuss the implications of this study for future research.

  19. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  20. Integrated computer codes for nuclear power plant severe accident analysis

    International Nuclear Information System (INIS)

    Jordanov, I.; Khristov, Y.

    1995-01-01

    This overview contains a description of the Modular Accident Analysis Program (MAAP), ICARE computer code and Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of STCP implementation on VAX and IBM systems. In order to improve accuracy, a double precision version of MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature, hydrogen mass are presented. 5 refs., 10 figs., 2 tabs

  1. Integrated computer codes for nuclear power plant severe accident analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jordanov, I; Khristov, Y [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika

    1996-12-31

    This overview contains a description of the Modular Accident Analysis Program (MAAP), ICARE computer code and Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of STCP implementation on VAX and IBM systems. In order to improve accuracy, a double precision version of MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature, hydrogen mass are presented. 5 refs., 10 figs., 2 tabs.

  2. RADTRAN 5: A computer code for transportation risk analysis

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Kanipe, F.L.

    1991-01-01

    RADTRAN 5 is a computer code developed at Sandia National Laboratories (SNL) in Albuquerque, NM, to estimate radiological and nonradiological risks of radioactive materials transportation. RADTRAN 5 is written in ANSI Standard FORTRAN 77 and contains significant advances in the methodology for route-specific analysis first developed by SNL for RADTRAN 4 (Neuhauser and Kanipe, 1992). Like the previous RADTRAN codes, RADTRAN 5 contains two major modules for incident-free and accident risk analysis, respectively. All commercially important transportation modes may be analyzed with RADTRAN 5: highway by combination truck; highway by light-duty vehicle; rail; barge; ocean-going ship; cargo air; and passenger air

  3. Computer Tomography Analysis of Fastrac Composite Thrust Chamber Assemblies

    Science.gov (United States)

    Beshears, Ronald D.

    2000-01-01

    Computed tomography (CT) inspection has been integrated into the production process for NASA's Fastrac composite thrust chamber assemblies (TCAs). CT has been proven to be uniquely qualified to detect the known critical flaw for these nozzles, liner cracks that are adjacent to debonds between the liner and overwrap. CT is also being used as a process monitoring tool through analysis of low density indications in the nozzle overwraps. 3D reconstruction of CT images to produce models of flawed areas is being used to give program engineers better insight into the location and nature of nozzle flaws.

  4. Development validation and use of computer codes for inelastic analysis

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    A finite element scheme is a system which provides routines to carry out the operations that are common to all finite element programs. The list of items that can be provided as standard by a finite element scheme is surprisingly large, and the list provided by the UNCLE finite element scheme is unusually comprehensive. This presentation covers the following: construction of the program, setting up a finite element mesh, generation of coordinates, and incorporation of boundary and load conditions. Program validation was done by creep calculations performed using the CAUSE code. Program use is illustrated by calculating a typical inelastic analysis problem, which includes a computer model of the PFR intermediate heat exchanger

  5. DYNAPO 4 - a fluid system and frames analysis computer program

    International Nuclear Information System (INIS)

    Lefter, J.D.; Ahdout, H.

    1982-01-01

    DYNAPO 4 is a user oriented, specialized computer program capable of analyzing three-dimensional linear elastic piping systems or frames for static loads, dynamic loads represented by acceleration response spectra, and transient dynamic loads represented by harmonic, second-order polynomial, and time history forcing functions. DYNAPO 4 has plotting capability: it plots the input configuration of the piping system or structure and also plots its deformed shape after the load is applied. DYNAPO 4 performs the analysis for ASME Section III Class 1, 2, and 3 piping, and provides the user with stress reports per ASME and ANSI Code requirements. 3 refs

  6. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as the theoretical line shape of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the saponification degree of PVA
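
    A minimal sketch of the core fitting step described above is shown below: least-squares determination of the coefficients of a Lorentzian line on synthetic data. The peak parameters and the use of scipy.optimize.curve_fit are illustrative choices, not the original program (which predates Python).

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of fitting a Lorentzian line shape by least squares, as the
# program described above does for NMR peaks. Data and parameters are synthetic.

def lorentzian(x, amplitude, center, hwhm, baseline):
    return amplitude * hwhm**2 / ((x - center) ** 2 + hwhm**2) + baseline

x = np.linspace(-10, 10, 500)
rng = np.random.default_rng(1)
y = lorentzian(x, 1.0, 0.5, 0.8, 0.05) + rng.normal(0, 0.01, x.size)

popt, pcov = curve_fit(lorentzian, x, y, p0=[0.8, 0.0, 1.0, 0.0])
print("amplitude, center, HWHM, baseline:", np.round(popt, 3))
```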

  7. A computer program for automatic gamma-ray spectra analysis

    International Nuclear Information System (INIS)

    Hiromura, Kazuyuki

    1975-01-01

    A computer program for the automatic analysis of gamma-ray spectra obtained with a Ge(Li) detector is presented. The program finds peaks automatically by comparing successive values of the experimental data, and fits them by the method of least squares. The peak shape in the fitting routine is a 'modified Gaussian', which consists of two different Gaussians with the same height joined at the centroid. A quadratic form is chosen as the function representing the background. A maximum of four peaks can be treated in the fitting routine by the program. Some improvements under consideration are described. (auth.)
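
    A minimal sketch of the peak model described above is given below: two Gaussians of the same height joined at the centroid (different widths on the left and right) on a quadratic background. All parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of the 'modified Gaussian' peak model described above: two
# Gaussians with the same height joined at the centroid, with different
# widths on the left and right sides, on a quadratic background.

def modified_gaussian(x, height, centroid, sigma_left, sigma_right, a, b, c):
    sigma = np.where(x < centroid, sigma_left, sigma_right)
    peak = height * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)
    background = a + b * x + c * x ** 2
    return peak + background

channels = np.arange(100, 200)
counts = modified_gaussian(channels, 500.0, 150.0, 2.0, 3.0, 40.0, -0.1, 0.0)
print("counts at centroid and at +/- 5 channels:",
      counts[50], counts[45], counts[55])
```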

  8. MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.

    Science.gov (United States)

    Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro

    2018-06-01

    The Molecular Evolutionary Genetics Analysis (Mega) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of Mega to enable cross-platform use on Microsoft Windows and Linux operating systems. Mega X does not require virtualization or emulation software and provides a uniform user experience across platforms. Mega X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. Mega X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.

  9. Modern EMC analysis I time-domain computational schemes

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite i

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components are now also installed and deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  11. Cepstrum analysis and applications to computational fluid dynamic solutions

    Science.gov (United States)

    Meadows, Kristine R.

    1990-04-01

    A novel approach to the problem of spurious reflections introduced by artificial boundary conditions in computational fluid dynamic (CFD) solutions is proposed. Instead of attempting to derive non-reflecting boundary conditions, the approach is to accept the fact that spurious reflections occur, but to remove these reflections with cepstrum analysis, a signal processing technique which has been successfully used to remove echoes from experimental data. First, the theory of the cepstrum method is presented. This includes presentation of two types of cepstra: The Power Cepstrum and the Complex Cepstrum. The definitions of the cepstrum methods are applied theoretically and numerically to the analytical solution of sinusoidal plane wave propagation in a duct. One-D and 3-D time dependent solutions to the Euler equations are computed, and hard-wall conditions are prescribed at the numerical boundaries. The cepstrum method is applied, and the reflections from the boundaries are removed from the solutions. One-D and 3-D solutions are computed with so called nonreflecting boundary conditions, and these solutions are compared to those obtained by prescribing hard wall conditions and processing with the cepstrum.
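
    A minimal sketch of the power cepstrum as it is typically defined (the inverse FFT of the log power spectrum) is given below, applied to a signal containing a delayed echo; the echo appears as a peak at the corresponding quefrency, which is what makes the technique useful for locating and removing spurious reflections. The signal, echo amplitude, delay, and sampling rate are illustrative assumptions, not the author's implementation.

```python
import numpy as np

# Minimal sketch of the power cepstrum: the inverse FFT of the log power
# spectrum. An echo at delay tau shows up as a peak near quefrency tau.
# The signal, echo amplitude and delay below are illustrative.

fs = 1000.0                                  # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)
delay = 0.2                                  # echo delay in seconds
echo = np.zeros_like(signal)
echo[int(delay * fs):] = 0.5 * signal[: signal.size - int(delay * fs)]
x = signal + echo

spectrum = np.fft.rfft(x)
power_cepstrum = np.fft.irfft(np.log(np.abs(spectrum) ** 2 + 1e-12))
quefrency = np.arange(power_cepstrum.size) / fs

peak = quefrency[np.argmax(power_cepstrum[50:400]) + 50]
print(f"echo detected at quefrency ~ {peak:.3f} s (true delay {delay} s)")
```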

  12. G-computation demonstration in causal mediation analysis

    International Nuclear Information System (INIS)

    Wang, Aolin; Arah, Onyebuchi A.

    2015-01-01

    Recent work has considerably advanced the definition, identification and estimation of controlled direct, and natural direct and indirect effects in causal mediation analysis. Despite the various estimation methods and statistical routines being developed, a unified approach for effect estimation under different effect decomposition scenarios is still needed for epidemiologic research. G-computation offers such unification and has been used for total effect and joint controlled direct effect estimation settings, involving different types of exposure and outcome variables. In this study, we demonstrate the utility of parametric g-computation in estimating various components of the total effect, including (1) natural direct and indirect effects, (2) standard and stochastic controlled direct effects, and (3) reference and mediated interaction effects, using Monte Carlo simulations in standard statistical software. For each study subject, we estimated their nested potential outcomes corresponding to the (mediated) effects of an intervention on the exposure wherein the mediator was allowed to attain the value it would have under a possible counterfactual exposure intervention, under a pre-specified distribution of the mediator independent of any causes, or under a fixed controlled value. A final regression of the potential outcome on the exposure intervention variable was used to compute point estimates and bootstrap was used to obtain confidence intervals. Through contrasting different potential outcomes, this analytical framework provides an intuitive way of estimating effects under the recently introduced 3- and 4-way effect decomposition. This framework can be extended to complex multivariable and longitudinal mediation settings
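
    A minimal sketch of the parametric g-computation workflow described above is given below, for natural direct and indirect effects with a continuous mediator and outcome. The data-generating process, the linear model forms, and all coefficients are illustrative assumptions, not the models used by the authors, and the bootstrap step for confidence intervals is omitted for brevity.

```python
import numpy as np

# Minimal sketch of parametric g-computation for natural direct and indirect
# effects with a continuous mediator M and outcome Y. The data-generating
# process, the linear model forms, and all coefficients are illustrative.

rng = np.random.default_rng(42)
n = 20_000
a = rng.integers(0, 2, n).astype(float)            # exposure
m = 0.5 * a + rng.normal(0, 1, n)                  # mediator (true model)
y = 1.0 * a + 0.8 * m + rng.normal(0, 1, n)        # outcome (true model)

# Fit parametric models M ~ A and Y ~ A + M by ordinary least squares
Xm = np.column_stack([np.ones(n), a])
bm, *_ = np.linalg.lstsq(Xm, m, rcond=None)
resid_sd = (m - Xm @ bm).std()
Xy = np.column_stack([np.ones(n), a, m])
by, *_ = np.linalg.lstsq(Xy, y, rcond=None)

def mean_potential_outcome(a_set, a_for_mediator, draws=200_000):
    """Monte Carlo mean of Y(a_set, M(a_for_mediator)) under the fitted models."""
    m_draw = bm[0] + bm[1] * a_for_mediator + rng.normal(0, resid_sd, draws)
    return (by[0] + by[1] * a_set + by[2] * m_draw).mean()

nde = mean_potential_outcome(1, 0) - mean_potential_outcome(0, 0)   # natural direct effect
nie = mean_potential_outcome(1, 1) - mean_potential_outcome(1, 0)   # natural indirect effect
print(f"NDE ~ {nde:.2f}, NIE ~ {nie:.2f}, total effect ~ {nde + nie:.2f}")
```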

  13. Isotopic fractionation applied to studies on evapotranspiration

    International Nuclear Information System (INIS)

    Leopoldo, P.R.; Sousa, A.P.; Salati, E.

    1980-01-01

    To check the possibility of using the stable isotopes deuterium and oxygen-18 in research on the water dynamics of the soil-plant-atmosphere system, the variation in the content of these isotopes in plant water was studied. Results have indicated that the enrichment of the heavy isotopes in leaf water is directly proportional to the atmospheric temperature and inversely proportional to the relative humidity. It also became evident that, through the correlation between the δD and δ18O values obtained for leaf water, the variation in the water requirement of a plant can be established in relation to its different growth stages. Variations that occur in the water flow in the soil-plant-atmosphere system can also be determined using the same correlation, but as a function of the different hours of the day. In the present study, these correlations showed that the maximum water requirement occurred between 12:00 and 16:00 h, and that between 16:00 and 20:00 h the water lost by the plant during the day began to be replaced until the dynamic equilibrium of the plant was reached. Further studies using the above method need to be carried out aimed at quantitative analysis of the water-losing process. (Author)

  14. System and software safety analysis for the ERA control computer

    International Nuclear Information System (INIS)

    Beerthuizen, P.G.; Kruidhof, W.

    2001-01-01

    The European Robotic Arm (ERA) is a seven-degrees-of-freedom relocatable anthropomorphic robotic manipulator system, to be used in manned space operations on the International Space Station, supporting the assembly and external servicing of the Russian segment. The safety design concept and implementation of the ERA are described, in particular with respect to the central computer's software design. A top-down analysis and specification process is used to flow down the safety aspects of the ERA system to the subsystems, which are produced by a consortium of companies in many countries. The user requirements documents and the critical function list are the key documents in this process. Bottom-up analysis (FMECA) and tests, on both subsystem and system level, are the basis for safety verification. A number of examples show the use of the approach and methods used

  15. Intra-articular calcaneal fractures: Computed tomographic analysis

    International Nuclear Information System (INIS)

    Rosenberg, Z.S.; Feldman, F.; Singson, R.D.

    1987-01-01

    Computed tomography (CT) analysis of 21 intra-articular calcaneal fractures categorized according to the Essex-Lopresti classification revealed the following distribution: joint depression-type 57%, comminuted type 43%, tongue-type 0%. The posterior calcaneal facet was fractured and/or depressed in 100% of the cases while the medial facet was involved in only 25% of the cases. CT proved superior to plain films by consistently demonstrating additional fracture components within each major category suggesting subclassifications which have potential prognostic value. CT allowed more expeditious handling of acutely injured patients, and improved preoperative planning, postoperative follow-up, and detailed analysis of causes for chronic residual pain. CT further identified significant soft tissue injuries such as peroneal tendon displacement which cannot be delineated on plain films. (orig.)

  16. GUI program to compute probabilistic seismic hazard analysis

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Chi, H. C.; Cho, J. C.; Park, J. H.; Kim, K. G.; Im, I. S.

    2005-12-01

    The first stage of development of a program to compute probabilistic seismic hazard has been completed, based on a Graphic User Interface (GUI). The main program consists of three parts - the data input processes, the probabilistic seismic hazard analysis, and the result output processes. The first part has been developed and the others are under development in this term. The probabilistic seismic hazard analysis needs various input data which represent attenuation formulae, the seismic zoning map, and the earthquake event catalog. The input procedure of previous programs, based on a text interface, takes much time to prepare the data, and the data cannot be checked directly on screen to prevent erroneous input. The new program simplifies the input process and enables the data to be checked graphically in order to minimize human error as far as possible

  17. Computer-Aided Sustainable Process Synthesis-Design and Analysis

    DEFF Research Database (Denmark)

    Kumar Tula, Anjan

    -groups is that, the performance of the entire process can be evaluated from the contributions of the individual process-groups towards the selected flowsheet property (for example, energy consumed). The developed flowsheet property models include energy consumption, carbon footprint, product recovery, product......Process synthesis involves the investigation of chemical reactions needed to produce the desired product, selection of the separation techniques needed for downstream processing, as well as taking decisions on sequencing the involved separation operations. For an effective, efficient and flexible...... focuses on the development and application of a computer-aided framework for sustainable synthesis-design and analysis of process flowsheets by generating feasible alternatives covering the entire search space and includes analysis tools for sustainability, LCA and economics. The synthesis method is based...

  18. Computational Approaches for Integrative Analysis of the Metabolome and Microbiome

    Directory of Open Access Journals (Sweden)

    Jasmine Chong

    2017-11-01

    Full Text Available The study of the microbiome, the totality of all microbes inhabiting the host or an environmental niche, has experienced exponential growth over the past few years. The microbiome contributes functional genes and metabolites, and is an important factor for maintaining health. In this context, metabolomics is increasingly applied to complement sequencing-based approaches (marker genes or shotgun metagenomics to enable resolution of microbiome-conferred functionalities associated with health. However, analyzing the resulting multi-omics data remains a significant challenge in current microbiome studies. In this review, we provide an overview of different computational approaches that have been used in recent years for integrative analysis of metabolome and microbiome data, ranging from statistical correlation analysis to metabolic network-based modeling approaches. Throughout the process, we strive to present a unified conceptual framework for multi-omics integration and interpretation, as well as point out potential future directions.

  19. GUI program to compute probabilistic seismic hazard analysis

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Chi, H. C.; Cho, J. C.; Park, J. H.; Kim, K. G.; Im, I. S.

    2006-12-01

    The development of a program to compute probabilistic seismic hazard has been completed, based on a Graphic User Interface (GUI). The main program consists of three parts - the data input processes, the probabilistic seismic hazard analysis, and the result output processes. The probabilistic seismic hazard analysis needs various input data which represent attenuation formulae, the seismic zoning map, and the earthquake event catalog. The input procedure of previous programs, based on a text interface, takes much time to prepare the data, and the data cannot be checked directly on screen to prevent erroneous input. The new program simplifies the input process and enables the data to be checked graphically in order to minimize human error as far as possible

  20. Development Of The Computer Code For Comparative Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Purwadi, Mohammad Dhandhang

    2001-01-01

    Qualitative and quantitative chemical analysis with Neutron Activation Analysis (NAA) is an important utilization of a nuclear research reactor, and its application and development should be accelerated and promoted to raise the utilization of the reactor. The application of the comparative NAA technique at the GA Siwabessy Multi Purpose Reactor (RSG-GAS) needs special (not yet commercially available) software for analyzing the spectra of multiple elements in one analysis. Previously, the analysis was carried out using a single-spectrum software analyzer and each result was compared manually, which significantly degrades the quality of the analysis. To solve the problem, a computer code was designed and developed for comparative NAA. Spectrum analysis in the code is carried out using a non-linear fitting method. Before the spectrum is analyzed, it is passed through a numerical filter, which improves the signal-to-noise ratio for the deconvolution operation. The software was developed using the G language and named PASAN-K. The developed software was benchmarked against the IAEA test spectrum and operated well, with less than 10% deviation
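
    The comparative NAA technique rests on a simple ratio between sample and standard peak areas; the sketch below shows this relation under the assumption of a single decay correction to the end of irradiation. The function names and all numerical values are illustrative and are not taken from the PASAN-K code.

```python
import math

def decay_factor(t_decay_s, half_life_s):
    """Fraction of activity remaining after a decay (cooling) time."""
    return math.exp(-math.log(2.0) * t_decay_s / half_life_s)

def comparative_concentration(area_sample, area_std, mass_sample_g, mass_std_g,
                              conc_std_ppm, t_decay_sample_s, t_decay_std_s, half_life_s):
    """c_sample = c_std * (A_sample / A_std) * (m_std / m_sample), with both
    net peak areas corrected back to the end of irradiation."""
    a_s = area_sample / decay_factor(t_decay_sample_s, half_life_s)
    a_r = area_std / decay_factor(t_decay_std_s, half_life_s)
    return conc_std_ppm * (a_s / a_r) * (mass_std_g / mass_sample_g)

# illustrative peak areas, masses and cooling times for a 15-h half-life nuclide
print(comparative_concentration(12500, 20400, 0.105, 0.100, 50.0,
                                3600, 1800, half_life_s=15.0 * 3600))
```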

  1. Analysis of CERN computing infrastructure and monitoring data

    Science.gov (United States)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal to bring data sources from different services and on different abstraction levels together and to implement a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single service boundaries and the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats, selecting an efficient storage format for map reduce and external access, and will describe the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between CPU/wall fraction, latency/throughput constraints of network and disk and the effective job throughput. In this contribution we will first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.

  2. Computer-aided analysis of cutting processes for brittle materials

    Science.gov (United States)

    Ogorodnikov, A. I.; Tikhonov, I. N.

    2017-12-01

    This paper is focused on 3D computer simulation of cutting processes for brittle materials and silicon wafers. Computer-aided analysis of wafer scribing and dicing is carried out with the use of the ANSYS CAE (computer-aided engineering) software, and a parametric model of the processes is created by means of the internal ANSYS APDL programming language. Different types of tool tip geometry, such as a four-sided pyramid with an included angle of 120° and a tool inclination angle of 15° to the normal axis, are analyzed to obtain the internal stresses. The quality of the workpieces after cutting is studied by optical microscopy to verify the FE (finite-element) model. The disruption of the material structure during scribing occurs near the scratch and propagates into the wafer or over its surface within a short range. The deformation area along the scratch looks like a ragged band, but the width of the stressed zone is rather small. The theory of cutting brittle semiconductor and optical materials is developed on the basis of the advanced theory of metal turning. The decrease in stress intensity along the normal from the tip contact point to the scribe line can be predicted using the developed theory and the verified FE model. The crystal quality and the dimensions of defects are determined by the mechanics of scratching, which depends on the shape of the diamond tip, the scratching direction, the velocity of the cutting tool and the applied force loads. The disruption is a rate-sensitive process, and it depends on the cutting thickness. The application of numerical techniques, such as FE analysis, to cutting problems enhances understanding and promotes the further development of existing machining technologies.

  3. Numerical implementation of a transverse-isotropic inelastic, work-hardening constitutive model

    International Nuclear Information System (INIS)

    Baladi, G.Y.

    1977-01-01

    This paper documents the numerical implementation of a model, specifically a transverse-isotropic, inelastic, work-hardening constitutive model. A brief overview of the mathematical formulation of the model is presented to facilitate the understanding of its numerical implementation. The model is based on incremental flow theories for materials which have time- and temperature-independent properties and which are capable of undergoing small plastic as well as small elastic strain at each loading increment. In addition, the model is written in terms of 'pseudo' stress invariants so that the incremental anisotropic stress-strain relationship can be readily incorporated into existing finite-difference or finite-element computer codes. The isotropic version of the model is retrieved without any changes in the mathematical formulation or in the numerical implementation (algorithm) of the model. Various methods exist for incorporating inelastic constitutive models into computer programs. The method presented in this paper is appropriate for both finite-difference and finite-element codes, and is applicable for solving static as well as dynamic problems. This method expresses the material constitutive properties as a matrix of coefficients, C (generalized tangent moduli), which relates incremental stresses to incremental strains. It possesses desirable convergence properties. In either finite-difference or finite-element applications the input quantities are the initial stress components, obtained at the end of the previous strain increment, and the new strain increments. The output quantities are the new values of the stress components
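
    The incremental update described above (new stresses from the old stresses, the strain increments and a matrix of generalized tangent moduli C) can be sketched as follows; the transverse-isotropic elastic stiffness used here is only an illustrative stand-in for the full inelastic, work-hardening C of the model, and the material constants are assumptions.

```python
import numpy as np

def transverse_isotropic_C(E_p, E_t, nu_p, nu_tp, G_t):
    """6x6 elastic stiffness (Voigt notation) for transverse isotropy with the
    symmetry axis along direction 3, built by inverting the compliance matrix."""
    S = np.zeros((6, 6))
    S[0, 0] = S[1, 1] = 1.0 / E_p
    S[2, 2] = 1.0 / E_t
    S[0, 1] = S[1, 0] = -nu_p / E_p
    S[0, 2] = S[2, 0] = S[1, 2] = S[2, 1] = -nu_tp / E_t
    S[3, 3] = S[4, 4] = 1.0 / G_t
    S[5, 5] = 2.0 * (1.0 + nu_p) / E_p
    return np.linalg.inv(S)

def stress_update(sigma_old, d_eps, C):
    """Incremental update: sigma_new = sigma_old + C . d_eps."""
    return sigma_old + C @ d_eps

# illustrative constants (Pa) and one uniaxial strain increment
C = transverse_isotropic_C(E_p=10e9, E_t=40e9, nu_p=0.25, nu_tp=0.2, G_t=5e9)
sigma = np.zeros(6)
sigma = stress_update(sigma, np.array([1e-4, 0, 0, 0, 0, 0]), C)
print(sigma)
```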

  4. Compendium of computer codes for the safety analysis of LMFBR's

    International Nuclear Information System (INIS)

    1975-06-01

    A high level of mathematical sophistication is required in the safety analysis of LMFBR's to adequately meet the demands for realism and confidence in all areas of accident consequence evaluation. The numerical solution procedures associated with these analyses are generally so complex and time consuming as to necessitate their programming into computer codes. These computer codes have become extremely powerful tools for safety analysis, combining unique advantages in accuracy, speed and cost. The number, diversity and complexity of LMFBR safety codes in the U. S. has grown rapidly in recent years. It is estimated that over 100 such codes exist in various stages of development throughout the country. It is inevitable that such a large assortment of codes will require rigorous cataloguing and abstracting to aid individuals in identifying what is available. It is the purpose of this compendium to provide such a service through the compilation of code summaries which describe and clarify the status of domestic LMFBR safety codes. (U.S.)

  5. Computational design analysis for deployment of cardiovascular stents

    International Nuclear Information System (INIS)

    Tammareddi, Sriram; Sun Guangyong; Li Qing

    2010-01-01

    Cardiovascular disease has become a major global healthcare problem. As one of the relatively new medical devices, stents offer a minimally-invasive surgical strategy to improve the quality of life for numerous cardiovascular disease patients. One of the key associated issues has been to understand the effect of stent structure on its deployment behaviour. This paper aims to develop a computational model for exploring the biomechanical responses to changes in stent geometrical parameters, namely the strut thickness and cross-link width of the Palmaz-Schatz stent. Explicit 3D dynamic finite element analysis was carried out to explore the sensitivity of deployment performance, such as dog-boning, fore-shortening, and stent deformation over the load cycle, to these geometrical parameters. It was found that an increase in stent thickness causes a sizeable rise in the load required to deform the stent to its target diameter, whilst reducing the maximum dog-boning of the stent. An increase in the cross-link width produced no change in the load required to deform the stent to its target diameter and showed no apparent correlation with dog-boning, but fore-shortening increased with increasing cross-link width. The computational modelling and analysis presented herein prove to be an effective way to refine or optimise the design of stent structures.
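
    The two deployment metrics named above are simple ratios of deployed dimensions; a minimal sketch, assuming common definitions and placeholder diameters and lengths of the kind an FE post-processor would supply, is:

```python
def dog_boning(d_distal_end, d_central):
    """Relative over-expansion of the stent ends versus its centre."""
    return (d_distal_end - d_central) / d_central

def fore_shortening(length_initial, length_deployed):
    """Relative axial shortening on expansion."""
    return (length_initial - length_deployed) / length_initial

# illustrative deployed dimensions in mm
print(f"dog-boning     : {dog_boning(3.4, 3.0):.2%}")
print(f"fore-shortening: {fore_shortening(16.0, 15.1):.2%}")
```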

  6. Computer enhanced release scenario analysis for a nuclear waste repository

    International Nuclear Information System (INIS)

    Stottlemyre, J.A.; Petrie, G.M.; Mullen, M.F.

    1979-01-01

    An interactive (user-oriented) computer tool is being developed at PNL to assist in the analysis of release scenarios for the long-term safety assessment of a continental geologic nuclear waste repository. Emphasis is on characterizing the various ways the geologic and hydrologic system surrounding a repository might vary over the 10⁶ to 10⁷ years subsequent to final closure of the cavern. The potential disruptive phenomena are categorized as natural geologic and man-caused, and tend to be synergistic in nature. The computer tool is designed to permit simulation of the system response as a function of the ongoing disruptive phenomena and time. It is designed to be operated in a deterministic manner, i.e., user selection of the desired scenarios and associated rate, magnitude, and lag time data, or in a stochastic mode. The stochastic mode involves establishing distributions for individual phenomena occurrence probabilities, rates, magnitudes, and phase relationships. A Monte Carlo technique is then employed to generate a multitude of disruptive event scenarios, scan for breaches of the repository isolation, and develop input to the release consequence analysis task. To date, only a simplified one-dimensional version of the code has been completed. Significant modification and development are required to expand its dimensionality and apply the tool to any specific site
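
    A minimal sketch of the stochastic mode described above, assuming a made-up list of disruptive phenomena with illustrative annual rates and conditional breach probabilities, is given below; it samples event counts over 10⁶ years and estimates a breach probability by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative phenomena: (annual occurrence rate, breach probability per occurrence)
phenomena = {
    "fault_movement":  (1e-6, 0.10),
    "glacial_erosion": (5e-7, 0.05),
    "human_intrusion": (2e-6, 0.02),
}
horizon_years = 1e6
n_realisations = 20_000

breaches = 0
for _ in range(n_realisations):
    breached = False
    for rate, p_breach in phenomena.values():
        n_events = rng.poisson(rate * horizon_years)       # events in this realisation
        if n_events and rng.random() < 1.0 - (1.0 - p_breach) ** n_events:
            breached = True
            break
    breaches += breached

print(f"estimated breach probability over 1e6 years: {breaches / n_realisations:.4f}")
```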

  7. A compendium of computer codes in fault tree analysis

    International Nuclear Information System (INIS)

    Lydell, B.

    1981-03-01

    In the past ten years principles and methods for a unified system reliability and safety analysis have been developed. Fault tree techniques serve as a central feature of unified system analysis, and there exists a specific discipline within system reliability concerned with the theoretical aspects of fault tree evaluation. Ever since the fault tree concept was established, computer codes have been developed for qualitative and quantitative analyses. In particular the presentation of the kinetic tree theory and the PREP-KITT code package has influenced the present use of fault trees and the development of new computer codes. This report is a compilation of some of the better known fault tree codes in use in system reliability. Numerous codes are available and new codes are continuously being developed. The report is designed to address the specific characteristics of each code listed. A review of the theoretical aspects of fault tree evaluation is presented in an introductory chapter, the purpose of which is to give a framework for the validity of the different codes. (Auth.)
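
    As a reminder of the quantitative step such codes automate, the sketch below combines independent basic-event probabilities through AND/OR gates for a made-up two-train example; the event list and probabilities are assumptions, not taken from any code in the compendium.

```python
def and_gate(*probs):
    """Probability that all independent input events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

pump_a_fails, pump_b_fails, power_fails = 1e-3, 1e-3, 1e-4
# Top event: loss of cooling = (both pumps fail) OR (power fails)
top = or_gate(and_gate(pump_a_fails, pump_b_fails), power_fails)
print(f"top event probability: {top:.3e}")
```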

  8. Automatic quantitative analysis of liver functions by a computer system

    International Nuclear Information System (INIS)

    Shinpo, Takako

    1984-01-01

    In a previous paper, we confirmed the clinical usefulness of the hepatic clearance (hepatic blood flow), i.e. the hepatic uptake and blood disappearance rate coefficients. These were obtained from the initial slope index of each minute during a period of five frames of a hepatogram after injecting 37 MBq of sup(99m)Tc-Sn-colloid. To analyze the information simply, rapidly and accurately, we developed an automatic quantitative analysis of liver functions. Information was obtained every quarter minute during a period of 60 frames of the sequential image. The sequential counts were measured for the heart, the whole liver, and the left and right lobes using a computer connected to a scintillation camera. The effective hepatic blood flow was measured as the disappearance rate multiplied by the percentage of hepatic uptake, i.e. (liver counts)/(total counts of the field). Our method of analysis automatically records the disappearance and uptake curves on the basis of the heart and the whole liver, respectively, and computes them using the BASIC language. This method makes it possible to obtain an image of the initial uptake of sup(99m)Tc-Sn-colloid into the liver with a small dose. (author)

  9. Nuclear power reactor analysis, methods, algorithms and computer programs

    International Nuclear Information System (INIS)

    Matausek, M.V

    1981-01-01

    Full text: For a developing country buying its first nuclear power plants from a foreign supplier, disregarding the type and scope of the contract, there is a certain number of activities which have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameters specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK) continual work is going on, related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to the small working team, lack of large and powerful computers, absence of reliable basic nuclear data and shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through IAEA. It is the author's opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)

  10. Finite element approximation of a new variational principle for compressible and incompressible linear isotropic elasticity

    International Nuclear Information System (INIS)

    Franca, L.P.; Stenberg, R.

    1989-06-01

    Stability conditions are described to analyze a variational formulation emanating from a variational principle for linear isotropic elasticity. The variational principle is based on four dependent variables (namely, the strain tensor, augmented stress, pressure and displacement) and is shown to be valid for any compressibility including the incompressible limit. An improved convergence error analysis is established for a Galerkin-least-squares method based upon these four variables. The analysis presented establishes convergence for a wide choice of combinations of finite element interpolations. (author) [pt

  11. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can reduce the running time of the code MCNP effectively. With the MPI message transmitting software, MCNP5 can achieve its parallel computing on a PC cluster with the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve the MCNP parallel computing performance. (authors)
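
    The performance figures such an analysis produces reduce to speedup and efficiency, bounded by the serial fraction of the problem (Amdahl's law); a minimal sketch with placeholder wall-clock times, not actual MCNP measurements, is:

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

def amdahl_limit(serial_fraction, n_procs):
    """Best achievable speedup when a fixed fraction of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

t1, t8 = 3600.0, 520.0            # illustrative wall-clock times on 1 and 8 processes
print(f"speedup   : {speedup(t1, t8):.2f}")
print(f"efficiency: {efficiency(t1, t8, 8):.2%}")
print(f"Amdahl bound for 5% serial work on 8 procs: {amdahl_limit(0.05, 8):.2f}")
```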

  12. Multiscale analysis of nonlinear systems using computational homology

    Energy Technology Data Exchange (ETDEWEB)

    Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner

    2010-05-24

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields from complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure

  13. Multiscale analysis of nonlinear systems using computational homology

    Energy Technology Data Exchange (ETDEWEB)

    Konstantin Mischaikow, Rutgers University/Georgia Institute of Technology, Michael Schatz, Georgia Institute of Technology, William Kalies, Florida Atlantic University, Thomas Wanner,George Mason University

    2010-05-19

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields from complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure

  14. Markov analysis of different standby computer based systems

    International Nuclear Information System (INIS)

    Srinivas, G.; Guptan, Rajee; Mohan, Nalini; Ghadge, S.G.; Bajaj, S.S.

    2006-01-01

    As against the conventional triplicated systems of hardware and the generation of control signals for the actuator elements by means of redundant hardwired median circuits, employed in the early Indian PHWR's, a new approach of generating control signals based on software by a redundant system of computers is introduced in the advanced/current generation of Indian PHWR's. Reliability is increased by fault diagnostics and automatic switch-over of all the loads to one computer in case of total failure of the other computer. Independent processing by a redundant CPU in each system enables inter-comparison to quickly identify system failure, in addition to the other self-diagnostic features provided. Combinatorial models such as reliability block diagrams and fault trees are frequently used to predict the reliability, maintainability and safety of complex systems. Unfortunately, these methods cannot accurately model dynamic system behavior. Because of its unique ability to handle dynamic cases, Markov analysis can be a powerful tool in the reliability, maintainability and safety (RMS) analyses of dynamic systems. A Markov model breaks the system configuration into a number of states. Each of these states is connected to all other states by transition rates. It then utilizes transition matrices to evaluate the reliability and safety of the systems, either through matrix manipulation or other analytical solution methods, such as Laplace transforms. Thus, Markov analysis is a powerful reliability, maintainability and safety analysis tool. It allows the analyst to model complex, dynamic, highly distributed, fault-tolerant systems that would otherwise be very difficult to model using classical techniques like the fault tree method. The Dual Processor Hot Standby Process Control System (DPHS-PCS) and the Computerized Channel Temperature Monitoring System (CCTM) are typical examples of hot standby systems in the Indian PHWR's. While such systems currently in use in Indian PHWR
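
    A minimal sketch of such a Markov availability model for a dual-computer hot-standby system is given below; the three-state structure, failure rate and repair rate are illustrative assumptions rather than the parameters of the DPHS-PCS or CCTM systems.

```python
import numpy as np

# States: 0 = both computers up, 1 = one up (loads switched over), 2 = system down.
# lam = failure rate per computer, mu = repair rate (illustrative, per hour).
lam, mu = 1e-4, 1e-2

Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,           mu,  -mu],
])

# steady-state probabilities: solve pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"steady-state availability: {pi[0] + pi[1]:.6f}")
```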

  15. Novel computational approaches for the analysis of cosmic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)

    2016-07-01

    In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing software based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.

  16. NALDA (Naval Aviation Logistics Data Analysis) CAI (computer aided instruction)

    Energy Technology Data Exchange (ETDEWEB)

    Handler, B.H. (Oak Ridge K-25 Site, TN (USA)); France, P.A.; Frey, S.C.; Gaubas, N.F.; Hyland, K.J.; Lindsey, A.M.; Manley, D.O. (Oak Ridge Associated Universities, Inc., TN (USA)); Hunnum, W.H. (North Carolina Univ., Chapel Hill, NC (USA)); Smith, D.L. (Memphis State Univ., TN (USA))

    1990-07-01

    Data Systems Engineering Organization (DSEO) personnel developed a prototype computer-aided instruction (CAI) system for the Naval Aviation Logistics Data Analysis (NALDA) system. The objective of this project was to provide a CAI prototype that could be used as an enhancement to existing NALDA training. The CAI prototype project was performed in phases. The task undertaken in Phase I was to analyze the problem and the alternative solutions and to develop a set of recommendations on how best to proceed. The findings from Phase I are documented in Recommended CAI Approach for the NALDA System (Duncan et al., 1987). In Phase II, a structured design and specifications were developed, and a prototype CAI system was created. A report, NALDA CAI Prototype: Phase II Final Report, was written to record the findings and results of Phase II. NALDA CAI: Recommendations for an Advanced Instructional Model comprises related papers encompassing research on computer-aided instruction (CAI), newly developing training technologies, instructional systems development, and an Advanced Instructional Model. These topics were selected because of their relevance to the CAI needs of NALDA. These papers provide general background information on various aspects of CAI and give a broad overview of new technologies and their impact on the future design and development of training programs. The papers within have been indexed separately elsewhere.

  17. Shell stability analysis in a computer aided engineering (CAE) environment

    Science.gov (United States)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling sensitive structures. With this open ended, hierarchical, interactive computer code the user can access from his workstation successively programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation 2-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  18. Computational Analysis of the G-III Laminar Flow Glove

    Science.gov (United States)

    Malik, Mujeeb R.; Liao, Wei; Lee-Rausch, Elizabeth M.; Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan

    2011-01-01

    Under NASA's Environmentally Responsible Aviation Project, flight experiments are planned with the primary objective of demonstrating the Discrete Roughness Elements (DRE) technology for passive laminar flow control at chord Reynolds numbers relevant to transport aircraft. In this paper, we present a preliminary computational assessment of the Gulfstream-III (G-III) aircraft wing-glove designed to attain natural laminar flow for the leading-edge sweep angle of 34.6deg. Analysis for a flight Mach number of 0.75 shows that it should be possible to achieve natural laminar flow for twice the transition Reynolds number ever achieved at this sweep angle. However, the wing-glove needs to be redesigned to effectively demonstrate passive laminar flow control using DREs. As a by-product of the computational assessment, effect of surface curvature on stationary crossflow disturbances is found to be strongly stabilizing for the current design, and it is suggested that convex surface curvature could be used as a control parameter for natural laminar flow design, provided transition occurs via stationary crossflow disturbances.

  19. National survey on dose data analysis in computed tomography.

    Science.gov (United States)

    Heilmaier, Christina; Treier, Reto; Merkle, Elmar Max; Alkhadi, Hatem; Weishaupt, Dominik; Schindera, Sebastian

    2018-05-28

    A nationwide survey was performed assessing current practice of dose data analysis in computed tomography (CT). All radiological departments in Switzerland were asked to participate in the on-line survey composed of 19 questions (16 multiple choice, 3 free text). It consisted of four sections: (1) general information on the department, (2) dose data analysis, (3) use of a dose management software (DMS) and (4) radiation protection activities. In total, 152 out of 241 Swiss radiological departments filled in the whole questionnaire (return rate, 63%). Seventy-nine per cent of the departments (n = 120/152) analyse dose data on a regular basis with considerable heterogeneity in the frequency (1-2 times per year, 45%, n = 54/120; every month, 35%, n = 42/120) and method of analysis. Manual analysis is carried out by 58% (n = 70/120) compared with 42% (n = 50/120) of departments using a DMS. Purchase of a DMS is planned by 43% (n = 30/70) of the departments with manual analysis. Real-time analysis of dose data is performed by 42% (n = 21/50) of the departments with a DMS; however, residents can access the DMS in clinical routine only in 20% (n = 10/50) of the departments. An interdisciplinary dose team, which among other things communicates dose data internally (63%, n = 76/120) and externally, is already implemented in 57% (n = 68/120) departments. Swiss radiological departments are committed to radiation safety. However, there is high heterogeneity among them regarding the frequency and method of dose data analysis as well as the use of DMS and radiation protection activities. • Swiss radiological departments are committed to and interest in radiation safety as proven by a 63% return rate of the survey. • Seventy-nine per cent of departments analyse dose data on a regular basis with differences in the frequency and method of analysis: 42% use a dose management software, while 58% currently perform manual dose data analysis. Of the latter, 43% plan to buy a dose

  20. Isotropic Optical Mouse Placement for Mobile Robot Velocity Estimation

    Directory of Open Access Journals (Sweden)

    Sungbok Kim

    2014-06-01

    Full Text Available This paper presents the isotropic placement of multiple optical mice for the velocity estimation of a mobile robot. It is assumed that there can be positional restriction on the installation of optical mice at the bottom of a mobile robot. First, the velocity kinematics of a mobile robot with an array of optical mice is obtained and the resulting Jacobian matrix is analysed symbolically. Second, the isotropic, anisotropic and singular optical mouse placements are identified, along with the corresponding characteristic lengths. Third, the least squares mobile robot velocity estimation from the noisy optical mouse velocity measurements is discussed. Finally, simulation results for several different placements of three optical mice are given.
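
    The least-squares step mentioned above can be sketched as follows: each mouse at position (x_i, y_i) on the robot base sees the robot velocity (vx, vy, omega) through two rows of a Jacobian, and stacking all mice gives an over-determined linear system. The mouse positions and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# three optical mice on the robot base (positions in metres)
mice = np.array([[0.10, 0.00],
                 [-0.05, 0.087],
                 [-0.05, -0.087]])

def jacobian(positions):
    """Rows map (vx, vy, omega) to the local (vx_i, vy_i) seen by each mouse:
    vx_i = vx - omega*y_i,  vy_i = vy + omega*x_i."""
    rows = []
    for x, y in positions:
        rows.append([1.0, 0.0, -y])
        rows.append([0.0, 1.0,  x])
    return np.array(rows)

true_vel = np.array([0.3, 0.1, 0.5])                 # m/s, m/s, rad/s
J = jacobian(mice)
measurements = J @ true_vel + rng.normal(0.0, 0.002, size=6)   # noisy mouse readings
est, *_ = np.linalg.lstsq(J, measurements, rcond=None)
print("estimated (vx, vy, omega):", est)
```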

  1. Study of open systems with molecules in isotropic liquids

    Science.gov (United States)

    Kondo, Yasushi; Matsuzaki, Masayuki

    2018-05-01

    We are interested in the dynamics of a system in an environment, i.e. an open system. Phenomena such as the crossover from Markovian to non-Markovian relaxation and thermal equilibration are of interest to us. Open systems have been studied experimentally with ultracold atoms, ions in traps, optics, and cold electric circuits, because well-isolated systems can be prepared there and thus the effects of environments can be controlled. We point out that some molecules dissolved in isotropic liquids are also well isolated, and thus they too can be employed for studying open systems in Nuclear Magnetic Resonance (NMR) experiments. First, we provide a short review of related open-system phenomena that helps readers to understand our motivation. We then present two experiments as examples of our approach with molecules in isotropic liquids. Crossover from Markovian to non-Markovian relaxation was realized in one NMR experiment, while relaxation-like phenomena were observed in approximately isolated systems in the other.

  2. Self-confinement of finite dust clusters in isotropic plasmas.

    Science.gov (United States)

    Miloshevsky, G V; Hassanein, A

    2012-05-01

    Finite two-dimensional dust clusters are systems of a small number of charged grains. The self-confinement of dust clusters in isotropic plasmas is studied using the particle-in-cell method. The energetically favorable configurations of grains in plasma are found that are due to the kinetic effects of plasma ions and electrons. The self-confinement phenomenon is attributed to the change in the plasma composition within a dust cluster resulting in grain attraction mediated by plasma ions. This is a self-consistent state of a dust cluster in which grain's repulsion is compensated by the reduced charge and floating potential on grains, overlapped ion clouds, and depleted electrons within a cluster. The common potential well is formed trapping dust clusters in the confined state. These results provide both valuable insights and a different perspective to the classical view on the formation of boundary-free dust clusters in isotropic plasmas.

  3. Computer codes for the analysis of flask impact problems

    International Nuclear Information System (INIS)

    Neilson, A.J.

    1984-09-01

    This review identifies typical features of the design of transportation flasks and considers some of the analytical tools required for the analysis of impact events. Because of the complexity of the physical problem, it is unlikely that a single code will adequately deal with all the aspects of the impact incident. Candidate codes are identified on the basis of current understanding of their strengths and limitations. It is concluded that the HONDO-II, DYNA3D and ABAQUS codes, which are already mounted on UKAEA computers, will be suitable tools for use in the analysis of experiments conducted in the proposed AEEW programme and of general flask impact problems. Initial attention should be directed at the DYNA3D and ABAQUS codes, with HONDO-II being reserved for situations where the three-dimensional elements of DYNA3D may provide uneconomic simulations in planar or axisymmetric geometries. Attention is drawn to the importance of access to suitable mesh generators to create the nodal coordinate and element topology data required by these structural analysis codes. (author)

  4. Detection of Organophosphorus Pesticides with Colorimetry and Computer Image Analysis.

    Science.gov (United States)

    Li, Yanjie; Hou, Changjun; Lei, Jincan; Deng, Bo; Huang, Jing; Yang, Mei

    2016-01-01

    Organophosphorus pesticides (OPs) represent a very important class of pesticides that are widely used in agriculture because of their relatively high performance and moderate environmental persistence; hence the sensitive and specific detection of OPs is highly significant. Based on the inhibition of acetylcholinesterase (AChE) by inhibitors including OPs and carbamates, a colorimetric analysis was used for detection of OPs, with computer image analysis of color density in the CMYK (cyan, magenta, yellow and black) color space and non-linear modeling. The results showed that the yellow intensity weakened gradually as the concentration of dichlorvos increased. The quantitative analysis of dichlorvos was achieved by Artificial Neural Network (ANN) modeling, and the results showed that the established model had good predictive ability between training and prediction sets. Real cabbage samples containing dichlorvos were analyzed by colorimetry and gas chromatography (GC), respectively. The results showed that there was no significant difference between colorimetry and GC (P > 0.05). The accuracy, precision and repeatability experiments revealed good performance for detection of OPs. AChE can also be inhibited by carbamates, and therefore this method has potential applications to real samples for OPs and carbamates because of its high selectivity and sensitivity.
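
    A minimal sketch of the colour-density step, assuming a standard RGB-to-CMYK conversion and placeholder pixel values for the test-strip region (the ANN calibration is omitted), is:

```python
import numpy as np

def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB to CMYK fractions in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

# placeholder pixels from the region of interest on the test strip
roi = np.array([[220, 205, 60], [218, 200, 62], [224, 208, 58]], dtype=float)
mean_r, mean_g, mean_b = roi.mean(axis=0)
c, m, y, k = rgb_to_cmyk(mean_r, mean_g, mean_b)
print(f"yellow density of test-strip region: {y:.3f}")
```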

  5. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whitman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-01-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of ''twicing'' developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of sixteen plasma traces has been processed using this technique

  6. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whiteman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-11-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of ''twicing'' developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of the plasma traces has been processed with this technique

  7. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    2017-07-01

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.

  8. Computer-aided target tracking in motion analysis studies

    Science.gov (United States)

    Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.

    1990-08-01

    Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.

  9. Data analysis using the Gnu R system for statistical computation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, James; /Fermilab

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.

  10. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of the society as we are faced with streams of data coming from numerous sensors, data feeds, recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume is aimed to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  11. Computer and Internet Addiction: Analysis and Classification of Approaches

    Directory of Open Access Journals (Sweden)

    Zaretskaya O.V.

    2017-08-01

    Full Text Available The theoretical analysis of modern research works on the problem of computer and Internet addiction is carried out. The main features of different approaches are outlined. The attempt is made to systematize researches conducted and to classify scientific approaches to the problem of Internet addiction. The author distinguishes nosological, cognitive-behavioral, socio-psychological and dialectical approaches. She justifies the need to use an approach that corresponds to the essence, goals and tasks of social psychology in the field of research as the problem of Internet addiction, and the dependent behavior in general. In the opinion of the author, this dialectical approach integrates the experience of research within the framework of the socio-psychological approach and focuses on the observed inconsistencies in the phenomenon of Internet addiction – the compensatory nature of Internet activity, when people who are interested in the Internet are in a dysfunctional life situation.

  12. Computational Fluid Dynamics Analysis of an Evaporative Cooling System

    Directory of Open Access Journals (Sweden)

    Kapilan N.

    2016-11-01

    Full Text Available The use of chlorofluorocarbon-based refrigerants in air-conditioning systems increases global warming and contributes to climate change. Climate change is expected to present a number of challenges for the built environment, and an evaporative cooling system is one of the simplest and most environmentally friendly cooling systems. The evaporative cooling system is most widely used in summer and in the rural and urban areas of India for human comfort. In an evaporative cooling system, the addition of water into air reduces the temperature of the air, as the energy needed to evaporate the water is taken from the air. Computational fluid dynamics (CFD) is a numerical analysis technique and was used to analyse the evaporative cooling system. The CFD results match the experimental results.
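
    The basic energy balance behind direct evaporative cooling, which the CFD analysis resolves in detail, can be summarized by a single relation between inlet dry-bulb temperature, inlet wet-bulb temperature and the pad's saturation effectiveness; the sketch below uses illustrative numbers only.

```python
def evaporative_outlet_temp(t_dry_in_C, t_wet_in_C, effectiveness):
    """Outlet dry-bulb temperature of a direct evaporative cooler:
    the air approaches the inlet wet-bulb temperature by the pad effectiveness."""
    return t_dry_in_C - effectiveness * (t_dry_in_C - t_wet_in_C)

# e.g. 38 C dry-bulb, 24 C wet-bulb inlet air through an 80%-effective pad
print(evaporative_outlet_temp(t_dry_in_C=38.0, t_wet_in_C=24.0, effectiveness=0.8))
```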

  13. Web Pages Content Analysis Using Browser-Based Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Wojciech Turek

    2013-01-01

    Full Text Available Existing solutions to the problem of finding valuable information on the Web suffer from several limitations like simplified query languages, out-of-date information or arbitrary results sorting. In this paper a different approach to this problem is described. It is based on the idea of distributed processing of Web pages content. To provide sufficient performance, the idea of browser-based volunteer computing is utilized, which requires the implementation of text processing algorithms in JavaScript. In this paper the architecture of the Web pages content analysis system is presented, details concerning the implementation of the system and the text processing algorithms are described, and test results are provided.

  14. TEABAGS: computer programs for instrumental neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, D J [Washington Univ., St. Louis, MO (USA); Korotev, R L [Washington Univ., St. Louis, MO (USA). McDonnell Center for the Space Sciences

    1982-01-01

    Described is a series of INAA data reduction programs collectively known as TEABAGS (Trace Element Analysis By Automated Gamma-ray Spectrometry). The programs are written in FORTRAN and run on a Nuclear Data ND-6620 computer system, but should be adaptable to any medium-sized minicomputer. They are designed to monitor the status of all spectra obtained from samples and comparison standards irradiated together and to do all pending calculations without operator intervention. Major emphasis is placed on finding all peaks in the spectrum, properly identifying all nuclides present and all contributors to each peak, determining accurate estimates of the background continua under peaks, and producing realistic uncertainties on peak areas and final abundances.

  15. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community
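
    A first-order sensitivity coefficient of the kind these tools produce relates the relative change in k-effective to the relative change in a cross section; a minimal sketch, with placeholder k values rather than output from the ORNL methodology, is:

```python
def sensitivity(k_plus, k_minus, k_ref, rel_perturbation):
    """S = (dk/k) / (dSigma/Sigma), estimated by a central difference from
    two perturbed k-effective calculations."""
    return ((k_plus - k_minus) / (2.0 * k_ref)) / rel_perturbation

# e.g. +/-1% perturbation of a capture cross section (illustrative k values)
print(sensitivity(k_plus=1.00210, k_minus=1.00350, k_ref=1.00280, rel_perturbation=0.01))
```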

  16. Analysis of 3D crack propagation by microfocus computed tomography

    International Nuclear Information System (INIS)

    Ao Bo; Chen Fuxing; Deng Cuizhen; Zeng Yabin

    2014-01-01

    The three-point bending test of notched specimens of 2A50 forged aluminum was performed with a high-frequency fatigue tester, and the surface cracks at different stages were analyzed and compared by SEM. The crack was reconstructed by microfocus computed tomography, and its size, position and distribution were displayed through 3D visualization. The crack propagation behavior was investigated through the gray value and position of the crack front in 2D CT images of two adjacent stages, and the results show that crack propagation is irregular. Projection images were obtained by projecting the cracks of the two stages onto a reference plane; a significant increase in new crack propagation was observed compared with the projection of the earlier crack, and the distribution curves of the crack front at the two stages were displayed. The 3D increment distribution of the crack front propagation was obtained through 3D crack analysis of the two stages. (authors)

  17. Satellite interference analysis and simulation using personal computers

    Science.gov (United States)

    Kantak, Anil

    1988-03-01

    This report presents the complete analysis and formulas necessary to quantify the interference experienced by a generic satellite communications receiving station due to an interfering satellite. Both satellites, the desired as well as the interfering satellite, are considered to be in elliptical orbits. Formulas are developed for the satellite look angles and the satellite transmit angles generally related to the land mask of the receiving station site for both satellites. Formulas for considering Doppler effect due to the satellite motion as well as the Earth's rotation are developed. The effect of the interfering-satellite signal modulation and the Doppler effect on the power received are considered. The statistical formulation of the interference effect is presented in the form of a histogram of the interference to the desired signal power ratio. Finally, a computer program suitable for microcomputers such as IBM AT is provided with the flowchart, a sample run, results of the run, and the program code.

  18. Isotropic gates and large gamma detector arrays versus angular distributions

    International Nuclear Information System (INIS)

    Iacob, V.E.; Duchene, G.

    1997-01-01

    Angular information extracted from in-beam γ-ray measurements is of great importance for γ-ray multipolarity and nuclear spin assignments. Nowadays, large Ge detector arrays have become available, allowing the measurement of extremely weak γ rays in almost 4π sr solid angle (e.g., the EUROGAM detector array). Given the high detector efficiency, it is common for the mean suppressed coincidence multiplicity to reach values as high as 4 to 6. Thus, it is possible to gate on particular γ rays in order to enhance the relative statistics of a definite reaction channel and/or a definite decay path in the level scheme of the selected residual nucleus. As compared to angular correlations, the conditioned angular distribution spectra exhibit larger statistics because in the latter the gate-setting γ ray may be observed by all the detectors in the array, relaxing somewhat the geometrical restrictions of the angular correlations. Since in-beam γ-ray emission is anisotropic, one might worry that gating as mentioned above on an anisotropic γ ray would perturb the angular distributions in the unfolded events. As our work proved, there is no reason to worry about this if the energy gate runs over the whole solid angle of an ideal 4π sr detector, i.e., if the gate is isotropic. In real, quasi-4π sr detector arrays the corresponding quasi-isotropic gate preserves the angular properties of the unfolded data, too. However, extraction of precise angular distribution coefficients, especially a_4, requires consideration of the deviation of the quasi-isotropic gate from the (ideal) isotropic gate
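
    The coefficients a_2 and a_4 discussed above are normally obtained by fitting W(θ) = A_0[1 + a_2 P_2(cos θ) + a_4 P_4(cos θ)] to efficiency-corrected intensities at the detector-ring angles of the array; a minimal least-squares sketch with placeholder angles and intensities is:

```python
import numpy as np
from numpy.polynomial import legendre

# placeholder detector-ring angles (degrees) and efficiency-corrected intensities
theta_deg = np.array([22.0, 46.0, 72.0, 90.0, 108.0, 134.0, 158.0])
intensity = np.array([1.32, 1.05, 0.92, 0.88, 0.91, 1.06, 1.30])

x = np.cos(np.radians(theta_deg))
P2 = legendre.legval(x, [0, 0, 1])          # Legendre polynomial P2(cos theta)
P4 = legendre.legval(x, [0, 0, 0, 0, 1])    # Legendre polynomial P4(cos theta)

# design matrix for the linear parameters (A0, A0*a2, A0*a4)
M = np.column_stack([np.ones_like(x), P2, P4])
coef, *_ = np.linalg.lstsq(M, intensity, rcond=None)
a2, a4 = coef[1] / coef[0], coef[2] / coef[0]
print(f"a2 = {a2:.3f}, a4 = {a4:.3f}")
```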

  19. Electromagnetic illusion with isotropic and homogeneous materials through scattering manipulation

    International Nuclear Information System (INIS)

    Yang, Fan; Mei, Zhong Lei; Jiang, Wei Xiang; Cui, Tie Jun

    2015-01-01

    A new isotropic and homogeneous illusion device for electromagnetic waves is proposed. This single-shelled device can change the fingerprint of the covered object into another one by manipulating the scattering of the composite structure. We show that an electrically small sphere can be disguised as another small one with different electromagnetic parameters. The device can even make a dielectric sphere (electrically small) behave like a conducting one. Full-wave simulations confirm the performance of proposed illusion device. (paper)

  20. Liquid crystalline states of surfactant solutions of isotropic micelles

    International Nuclear Information System (INIS)

    Bagdassarian, C.; Gelbart, W.M.; Ben-Shaul, A.

    1988-01-01

    We consider micellar solutions whose surfactant molecules prefer strongly to form small, globular aggregates in the absence of intermicellar interactions. At sufficiently high volume fraction of surfactant, the isotropic phase of essentially spherical micelles is shown to be unstable with respect to an orientationally ordered (nematic) state of rodlike aggregates. This behavior is relevant to the phase diagrams reported for important classes of aqueous amphiphilic solutions

  1. Monopole-fermion systems in the complex isotropic tetrad formalism

    International Nuclear Information System (INIS)

    Gal'tsov, D.V.; Ershov, A.A.

    1988-01-01

    The interaction of fermions of arbitrary isospin with regular magnetic monopoles and dyons of the group SU(2), and also with point gravitating monopoles and dyons of the Wu-Yang type described by the Reissner-Nordstrom metric, is studied using the Newman-Penrose complex isotropic tetrad formalism. Formulas for the bound-state spectrum and explicit expressions for the zero modes are obtained, and the Rubakov-Callan effect for black holes is discussed.

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the number of sites that are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  3. Computer-assisted sperm analysis (CASA): capabilities and potential developments.

    Science.gov (United States)

    Amann, Rupert P; Waberski, Dagmar

    2014-01-01

    Computer-assisted sperm analysis (CASA) systems have evolved over approximately 40 years, through advances in devices to capture the image from a microscope, huge increases in computational power concurrent with amazing reduction in size of computers, new computer languages, and updated/expanded software algorithms. Remarkably, basic concepts for identifying sperm and their motion patterns are little changed. Older and slower systems remain in use. Most major spermatology laboratories and semen processing facilities have a CASA system, but the extent of reliance thereon ranges widely. This review describes capabilities and limitations of present CASA technology used with boar, bull, and stallion sperm, followed by possible future developments. Each marketed system is different. Modern CASA systems can automatically view multiple fields in a shallow specimen chamber to capture strobe-like images of 500 to >2000 sperm, at 50 or 60 frames per second, in clear or complex extenders, store information for ≥ 30 frames, and provide summary data for each spermatozoon and the population. A few systems evaluate sperm morphology concurrent with motion. CASA cannot accurately predict 'fertility' that will be obtained with a semen sample or subject. However, when carefully validated, current CASA systems provide information important for quality assurance of semen planned for marketing, and for the understanding of the diversity of sperm responses to changes in the microenvironment in research. The four take-home messages from this review are: (1) animal species, extender or medium, specimen chamber, intensity of illumination, imaging hardware and software, instrument settings, technician, etc., all affect accuracy and precision of output values; (2) semen production facilities probably do not need a substantially different CASA system whereas biology laboratories would benefit from systems capable of imaging and tracking sperm in deep chambers for a flexible period of time

  4. Customizable Computer-Based Interaction Analysis for Coaching and Self-Regulation in Synchronous CSCL Systems

    Science.gov (United States)

    Lonchamp, Jacques

    2010-01-01

    Computer-based interaction analysis (IA) is an automatic process that aims at understanding a computer-mediated activity. In a CSCL system, computer-based IA can provide information directly to learners for self-assessment and regulation and to tutors for coaching support. This article proposes a customizable computer-based IA approach for a…

  5. The Isotropic Radio Background and Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Belikov, Alexander V. [Institut d' Astrophysique (France); Jeltema, Tesla E. [Univ. of California, Santa Cruz, CA (United States); Linden, Tim [Univ. of California, Santa Cruz, CA (United States); Profumo, Stefano [Univ. of California, Santa Cruz, CA (United States); Slatyer, Tracy R. [Princeton Univ., Princeton, NJ (United States)

    2012-11-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, similar to the value predicted for a simple thermal relic (sigma v ~ 3 x 10^-26 cm^3/s). We find that in any scenario in which dark matter annihilations are responsible for the observed excess radio emission, a significant fraction of the isotropic gamma ray background observed by Fermi must result from dark matter as well.

  6. Superfluid 3He in globally isotropic random media

    Science.gov (United States)

    Ikeda, Ryusuke; Aoyama, Kazushi

    2009-02-01

    Recent theoretical and experimental studies of superfluid 3He in aerogels with a global anisotropy created, e.g., by an external stress have definitely shown that the A-like phase with an equal-spin pairing in such aerogel samples is in the Anderson-Brinkman-Morel (ABM) (or axial) pairing state. In this paper, the A-like phase of superfluid 3He in globally isotropic aerogel is studied in detail by assuming a weakly disordered system in which singular topological defects are absent. Through calculation of the free energy, a disordered ABM state is found to be the best candidate for the pairing state of the globally isotropic A-like phase. Further, it is found through a one-loop renormalization-group calculation that the coreless continuous vortices (or vortex-Skyrmions) are irrelevant to the long-distance behavior of disorder-induced textures, and that the superfluidity is maintained in spite of the lack of conventional superfluid long-range order. Therefore, the globally isotropic A-like phase at weak disorder is, like the case with a globally stretched anisotropy, a glass phase with the ABM pairing and shows superfluidity.

  7. Isotropic transmission of magnon spin information without a magnetic field.

    Science.gov (United States)

    Haldar, Arabinda; Tian, Chang; Adeyeye, Adekunle Olusola

    2017-07-01

    Spin-wave devices (SWD), which use collective excitations of electronic spins as a carrier of information, are rapidly emerging as potential candidates for post-semiconductor non-charge-based technology. Isotropic in-plane propagation of coherent spin waves (magnons), which requires the magnetization to be out of plane, is desirable in an SWD. However, because of the lack of low-damping perpendicular magnetic materials, the well-known in-plane ferrimagnet yttrium iron garnet (YIG) is usually used with a large out-of-plane bias magnetic field, which tends to hinder the benefits of isotropic spin waves. We experimentally demonstrate an SWD that eliminates the requirement of an external magnetic field to obtain perpendicular magnetization in an otherwise in-plane ferromagnet, Ni80Fe20 or permalloy (Py), a typical choice for spin-wave microconduits. Perpendicular anisotropy in Py, as established by magnetic hysteresis measurements, was induced by an exchange-coupled Co/Pd multilayer. Isotropic propagation of magnon spin information has been experimentally shown in microconduits with three channels patterned at arbitrary angles.

  8. Depth migration in transversely isotropic media with explicit operators

    Energy Technology Data Exchange (ETDEWEB)

    Uzcategui, Omar [Colorado School of Mines, Golden, CO (United States)

    1994-12-01

    The author presents and analyzes three approaches to calculating explicit two-dimensional (2D) depth-extrapolation filters for all propagation modes (P, SV, and SH) in transversely isotropic media with vertical and tilted axis of symmetry. These extrapolation filters are used to do 2D poststack depth migration, and also, just as for isotropic media, these 2D filters are used in the McClellan transformation to do poststack 3D depth migration. Furthermore, the same explicit filters can also be used to do depth-extrapolation of prestack data. The explicit filters are derived by generalizations of three different approaches: the modified Taylor series, least-squares, and minimax methods initially developed for isotropic media. The examples here show that the least-squares and minimax methods produce filters with accurate extrapolation (measured in the ability to position steep reflectors) for a wider range of propagation angles than that obtained using the modified Taylor series method. However, for low propagation angles, the modified Taylor series method has smaller amplitude and phase errors than those produced by the least-squares and minimax methods. These results suggest that to get accurate amplitude estimation, modified Taylor series filters would be somewhat preferred in areas with low dips. In areas with larger dips, the least-squares and minimax methods would give a distinctly better delineation of the subsurface structures.

  9. Lattice Boltzmann model for three-dimensional decaying homogeneous isotropic turbulence

    International Nuclear Information System (INIS)

    Xu Hui; Tao Wenquan; Zhang Yan

    2009-01-01

    We implement a lattice Boltzmann method (LBM) for decaying homogeneous isotropic turbulence based on an analogous Galerkin filter and focus on the fundamental statistical isotropic property. This regularized method is constructed based on orthogonal Hermite polynomial space. For decaying homogeneous isotropic turbulence, this regularized method can simulate the isotropic property very well. Numerical studies demonstrate that the novel regularized LBM is a promising approximation of turbulent fluid flows, which paves the way for coupling various turbulent models with LBM
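
    For orientation, the single-relaxation-time (BGK) lattice Boltzmann update that regularized schemes of this kind start from is the standard textbook form (not a formula quoted from the paper):

        f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t) = f_i(\mathbf{x}, t) - \frac{1}{\tau} \left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right],

    where the f_i are the discrete-velocity distribution functions, τ is the relaxation time and f_i^{eq} the local equilibrium; the regularization replaces the non-equilibrium part of f_i by its projection onto a truncated orthogonal Hermite basis before the collision step.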

  10. Probabilistic evaluations for CANTUP computer code analysis improvement

    International Nuclear Information System (INIS)

    Florea, S.; Pavelescu, M.

    2004-01-01

    Structural analysis with the finite element method is today a usual way to evaluate and predict the behavior of structural assemblies subjected to harsh conditions, in order to ensure their safety and reliability during operation. A CANDU 600 fuel channel is an example of an assembly working in harsh conditions, in which, in addition to corrosive and thermal aggression, long-term irradiation intervenes, with implicit consequences for the evolution of material properties. That leads inevitably to scatter in the time-dependent material properties, their dynamic evolution being subject to a great degree of uncertainty. These are the reasons for developing, in association with deterministic evaluations with computer codes, probabilistic and statistical methods in order to predict the structural component response. This work opens the possibility of extending the deterministic thermomechanical evaluation of fuel channel components to a probabilistic structural mechanics approach, starting with the deterministic analysis performed with the CANTUP computer code, a code developed to predict the long-term mechanical behavior of the pressure tube - calandria tube assembly. To this purpose, the structure of the deterministic CANTUP computer code has been reviewed. The code has been adapted from the LAHEY 77 platform to the Microsoft Developer Studio - Fortran Power Station platform. In order to perform probabilistic evaluations, a part was added to the deterministic code which, using a subroutine from the IMSL library of Microsoft Developer Studio - Fortran Power Station, generates pseudo-random values of a specified quantity. A normal distribution around the deterministic value with a 5% standard deviation was simulated for the Young's modulus material property, in order to verify the statistical calculation of the creep behavior. The tube deflection and effective stresses were the properties subject to probabilistic evaluation. All the values of these properties obtained for all the values for
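
    The probabilistic extension described above amounts to wrapping the deterministic evaluation in a sampling loop; a minimal Python sketch of the idea (the deterministic response below is a stand-in function, not the CANTUP model, and the nominal modulus is a placeholder) could be:

        import numpy as np

        def deterministic_deflection(young_modulus):
            """Stand-in for a CANTUP-style deterministic run: returns a tube
            deflection that scales inversely with the Young's modulus."""
            reference_load = 1.0e3                  # placeholder load parameter
            return reference_load / young_modulus

        E_NOMINAL = 9.0e10                          # placeholder deterministic Young's modulus, Pa
        rng = np.random.default_rng(42)

        # Normal distribution centred on the deterministic value with a 5 %
        # standard deviation, as in the verification case described above.
        samples = rng.normal(loc=E_NOMINAL, scale=0.05 * E_NOMINAL, size=1000)
        deflections = np.array([deterministic_deflection(e) for e in samples])

        print(f"mean deflection   : {deflections.mean():.3e}")
        print(f"deflection std dev: {deflections.std():.3e}")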

  11. The thermalization of soft modes in non-expanding isotropic quark gluon plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Blaizot, Jean-Paul, E-mail: jean-paul.blaizot@cea.fr [Institut de Physique Théorique, CNRS/UMR 3681, CEA Saclay, F-91191 Gif-sur-Yvette (France); Liao, Jinfeng [Physics Department and Center for Exploration of Energy and Matter, Indiana University, 2401 N Milo B. Sampson Lane, Bloomington, IN 47408 (United States); RIKEN BNL Research Center, Bldg. 510A, Brookhaven National Laboratory, Upton, NY 11973 (United States); Mehtar-Tani, Yacine [Institute for Nuclear Theory, University of Washington, Seattle, WA 98195-1550 (United States)

    2017-05-15

    We discuss the role of elastic and inelastic collisions and their interplay in the thermalization of the quark–gluon plasma. We consider a simplified situation of a static plasma, spatially uniform and isotropic in momentum space. We focus on the small momentum region, which equilibrates first, and on a short time scale. We obtain a simple kinetic equation that allows for an analytic description of the most important regimes. The present analysis suggests that the formation of a Bose condensate, expected when only elastic collisions are present, is strongly hindered by the inelastic, radiative, processes.

  12. Longitudinal vibration of isotropic solid rods: from classical to modern theories

    CSIR Research Space (South Africa)

    Shatalov, M

    2011-12-01

    Full Text Available (M. Shatalov, J. Marais, I. Fedotov and M. Djouosseu Tenkam; Council for Scientific and Industrial Research and Tshwane University of Technology, South Africa) The classical approximate theory of longitudinal vibration of rods was developed during the 18th century by J. d'Alembert, D. Bernoulli, L. Euler and J. Lagrange. This theory is based on the analysis of the one-dimensional wave equation and is applicable...
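
    The one-dimensional wave equation underlying this classical theory is the standard one (stated here for reference, not quoted from the paper):

        \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}, \qquad c = \sqrt{E / \rho},

    where u(x, t) is the longitudinal displacement, E the Young's modulus and ρ the density of the rod.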

  13. Summary of research in applied mathematics, numerical analysis, and computer sciences

    Science.gov (United States)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  14. Computer assisted analysis of medical x-ray images

    Science.gov (United States)

    Bengtsson, Ewert

    1996-01-01

    X-rays were originally used to expose film. The early computers did not have enough capacity to handle images with useful resolution. The rapid development of computer technology over the last few decades has, however, led to the introduction of computers into radiology. In this overview paper, the various possible roles of computers in radiology are examined. The state of the art is briefly presented, and some predictions about the future are made.

  15. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    Science.gov (United States)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field, which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including 2nd and 3rd statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
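
    The quantity being modeled is the usual subfilter (subgrid) scalar variance; in standard LES notation (a general definition, not an expression taken from the abstract) it reads

        \sigma_Z^2 = \overline{Z^2} - \overline{Z}^{\,2},

    where the overbar denotes the LES filter applied to the scalar field Z, so that σ_Z² measures the scalar fluctuations left unresolved at the filter width.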

  16. Vibrational Averaging of the Isotropic Hyperfine Coupling Constants for the Methyl Radical

    Science.gov (United States)

    Adam, Ahmad; Jensen, Per; Yachmenev, Andrey; Yurchenko, Sergei N.

    2014-06-01

    Electronic contributions to molecular properties are often considered as the major factor and usually reported in the literature without ro-vibrational corrections. However, there are many cases where the nuclear motion contributions are significant and even larger than the electronic contribution. In order to obtain accurate theoretical predictions, nuclear motion effects on molecular properties need to be taken into account. The computed isotropic hyperfine coupling constants for the nonvibrating methyl radical CH_3 are far from the experimental values. For CH_3, we have calculated the vibrational-state-dependence of the isotropic hyperfine coupling constant in the electronic ground state. The vibrational wavefunctions used in the averaging procedure were obtained variationally with the TROVE program. Analytical representations for the potential energy surfaces and the hyperfine coupling constant surfaces are obtained in least-squares fitting procedures. Thermal averaging has been carried out for molecules in thermal equilibrium, i.e., with Boltzmann-distributed populations. The calculation methods and the results will be discussed in detail.
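
    The thermal averaging mentioned above follows the usual Boltzmann weighting over vibrational states (standard statistical mechanics, not a formula quoted from the abstract):

        \langle A \rangle_T = \frac{\sum_i g_i A_i\, e^{-E_i / k_B T}}{\sum_i g_i\, e^{-E_i / k_B T}},

    where A_i is the hyperfine coupling constant averaged over vibrational state i, E_i and g_i are the energy and degeneracy of that state, and T is the temperature.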

  17. Integrated severe accident containment analysis with the CONTAIN computer code

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Williams, D.C.; Rexroth, P.E.; Tills, J.L.

    1985-12-01

    Analysis of physical and radiological conditions inside the containment building during a severe (core-melt) nuclear reactor accident requires quantitative evaluation of numerous highly disparate yet coupled phenomenologies. These include two-phase thermodynamics and thermal-hydraulics, aerosol physics, fission product phenomena, core-concrete interactions, the formation and combustion of flammable gases, and the performance of engineered safety features. In the past, this complexity has meant that a complete containment analysis would require the application of suites of separate computer codes, each of which would treat only a narrower subset of these phenomena, e.g., a thermal-hydraulics code, an aerosol code, a core-concrete interaction code, etc. In this paper, we describe the development and some recent applications of the CONTAIN code, which offers an integrated treatment of the dominant containment phenomena and the interactions among them. We describe the results of a series of containment phenomenology studies, based upon realistic accident sequence analyses in actual plants. These calculations highlight various phenomenological effects that have potentially important implications for source term and/or containment loading issues, and which are difficult or impossible to treat using a less integrated code suite

  18. Reliability of Computer Analysis of Electrocardiograms (ECG) of ...

    African Journals Online (AJOL)

    Background: Computer programmes have been introduced to electrocardiography (ECG) with most physicians in Africa depending on computer interpretation of ECG. This study was undertaken to evaluate the reliability of computer interpretation of the 12-Lead ECG in the Black race. Methodology: Using the SCHILLER ...

  19. Comparison between isotropic linear-elastic law and isotropic hyperelastic law in the finite element modeling of the brachial plexus.

    Science.gov (United States)

    Perruisseau-Carrier, A; Bahlouli, N; Bierry, G; Vernet, P; Facca, S; Liverneaux, P

    2017-12-01

    Augmented reality could help the identification of nerve structures in brachial plexus surgery. The goal of this study was to determine which law of mechanical behavior was more suitable, by comparing the results of Hooke's isotropic linear elastic law to those of Ogden's isotropic hyperelastic law, applied to a biomechanical model of the brachial plexus. A finite element model was created with ABAQUS® from a 3D model of the brachial plexus acquired by segmentation and meshing of MRI images at 0°, 45° and 135° of shoulder abduction of a healthy subject. The offset between the reconstructed model and the deformed model was evaluated quantitatively by the Hausdorff distance and qualitatively by the identification of 3 anatomical landmarks. In every case the Hausdorff distance was shorter with Ogden's law than with Hooke's law. From a qualitative point of view, the model deformed by Ogden's law followed the concavity of the reconstructed model whereas the model deformed by Hooke's law remained convex. In conclusion, the results of this study demonstrate that the behavior of Ogden's isotropic hyperelastic mechanical model was better adapted to modeling the deformations of the brachial plexus. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
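
    The Hausdorff distance used for the quantitative comparison can be computed directly from the node coordinates of the two meshes; a minimal NumPy sketch (with random placeholder point clouds standing in for the actual reconstructed and deformed models) would be:

        import numpy as np

        def hausdorff_distance(points_a, points_b):
            """Symmetric Hausdorff distance between two point clouds of shape (n, 3)."""
            # Pairwise Euclidean distances between every point of A and every point of B.
            diff = points_a[:, None, :] - points_b[None, :, :]
            d = np.sqrt((diff ** 2).sum(axis=-1))
            # Directed distances: worst-case nearest-neighbour distance in each direction.
            d_ab = d.min(axis=1).max()
            d_ba = d.min(axis=0).max()
            return max(d_ab, d_ba)

        rng = np.random.default_rng(1)
        reconstructed = rng.random((500, 3))                     # placeholder node coordinates
        deformed = reconstructed + 0.02 * rng.random((500, 3))   # placeholder deformed mesh
        print(f"Hausdorff distance: {hausdorff_distance(reconstructed, deformed):.4f}")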

  20. RADTRAN 5 - A computer code for transportation risk analysis

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Kanipe, F.L.

    1993-01-01

    The RADTRAN 5 computer code has been developed to estimate the radiological and nonradiological risks of radioactive materials transportation. RADTRAN 5 is written in ANSI standard FORTRAN 77; the code contains significant advances in the methodology first pioneered with the LINK option of RADTRAN 4. A major application of the LINK methodology is route-specific analysis. Another application is comparisons of attributes along the same route segments. Nonradiological risk factors have been incorporated to allow users to estimate nonradiological fatalities and injuries that might occur during the transportation event(s) being analyzed. These fatalities include prompt accidental fatalities from mechanical causes. Values of these risk factors for the United States have been made available in the code as optional defaults. Several new health effects models have been published in the wake of the Hiroshima-Nagasaki dosimetry reassessment, and this has emphasized the need for flexibility in the RADTRAN approach to health-effects calculations. Therefore, the basic set of health-effects conversion equations in RADTRAN has been made user-definable. All parameter values can be changed by the user, but a complete set of default values is available for both the new International Commission on Radiological Protection model (ICRP Publication 60) and the recent model of the U.S. National Research Council's Committee on the Biological Effects of Radiation (BEIR V). The meteorological input data tables have been modified to permit optional entry of maximum downwind distances for each dose isopleth. The expected dose to an individual in each isodose area is also calculated and printed automatically. Examples are given that illustrate the power and flexibility of the RADTRAN 5 computer code. (J.P.N.)

  1. Genome Assembly and Computational Analysis Pipelines for Bacterial Pathogens

    KAUST Repository

    Rangkuti, Farania Gama Ardhina

    2011-06-01

    Pathogens lie behind the deadliest pandemics in history. To date, the AIDS pandemic has resulted in more than 25 million fatal cases, while tuberculosis and malaria annually claim more than 2 million lives. Comparative genomic analyses are needed to gain insights into the molecular mechanisms of pathogens, but the abundance of biological data dictates that such studies cannot be performed without the assistance of computational approaches. This explains the significant need for computational pipelines for genome assembly and analyses. The aim of this research is to develop such pipelines. This work utilizes various bioinformatics approaches to analyze the high-throughput genomic sequence data that has been obtained from several strains of bacterial pathogens. A pipeline has been compiled for quality control for sequencing and assembly, and several protocols have been developed to detect contaminations. Visualizations of genomic data have been generated in various formats, in addition to alignment, homology detection and sequence variant detection. We have also implemented a metaheuristic algorithm that significantly improves bacterial genome assemblies compared to other known methods. Experiments on Mycobacterium tuberculosis H37Rv data showed that our method resulted in an improvement of the N50 value of up to 9697% while consistently maintaining high accuracy, covering around 98% of the published reference genome. Other improvement efforts were also implemented, consisting of iterative local assemblies and iterative correction of contiguated bases. Our result expedites the genomic analysis of virulent genes up to single base pair resolution. It is also applicable to virtually every pathogenic microorganism, propelling further research in the control of and protection from pathogen-associated diseases.

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  3. Frequency Domain Computer Programs for Prediction and Analysis of Rail Vehicle Dynamics : Volume 1. Technical Report

    Science.gov (United States)

    1975-12-01

    Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume I defines the general analytical capabilities required for computer programs applicable to single rail vehi...

  4. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, within 'development of damage evaluation method of structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a super parallel computation system that couples a material strength theory, based on microscopic fracture mechanics for latent cracks, with a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report presents the results of a review of probabilistic structural mechanics theory, basic terms and formulas, and parallel computation programming methods related to the principal items in the basic design of the computational mechanics program. (author)

  5. Basic design of parallel computational program for probabilistic structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, Yoshiyuki; Arai, Taketoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, within 'development of damage evaluation method of structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a super parallel computation system that couples a material strength theory, based on microscopic fracture mechanics for latent cracks, with a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report presents the results of a review of probabilistic structural mechanics theory, basic terms and formulas, and parallel computation programming methods related to the principal items in the basic design of the computational mechanics program. (author)

  6. COMTA - a computer code for fuel mechanical and thermal analysis

    International Nuclear Information System (INIS)

    Basu, S.; Sawhney, S.S.; Anand, A.K.; Anantharaman, K.; Mehta, S.K.

    1979-01-01

    COMTA is a generalized computer code for integrity analysis of free-standing fuel cladding with natural UO2 or mixed oxide fuel pellets. Thermal and mechanical analyses are done simultaneously for any power history of the fuel pin. For the analysis, the fuel cladding is assumed to be axisymmetric and is subjected to axisymmetric loads due to contact pressure, gas pressure, coolant pressure and thermal loads. Axial variation of load is neglected, and creep and plasticity are assumed to occur at constant volume. The pellet is assumed to be made of concentric annuli. The fission gas release integral depends on the temperature and the power produced in each annulus. To calculate the temperature distribution in the fuel pin, the variation of the bulk coolant temperature is given as an input to the code. Gap conductance is calculated at every time step, considering fuel densification, fuel relocation and gap closure, filler gas dilution by released fission gas, and gap closure by expansion and irradiation swelling. Overall gap conductance is contributed by heat transfer through three modes, conduction, convection and radiation, as per the modified Ross and Stoute model. Equilibrium equations, compatibility equations, and stress-strain relationships (including thermal strains and permanent strains due to creep and plasticity) are used to obtain triaxial stresses and strains. Thermal strain is assumed to be zero at hot zero power conditions. The boundary conditions for the radial stresses at the outside and inside surfaces are obtained by setting these equal to the coolant pressure and internal pressure, respectively. A multi-mechanism creep model which accounts for thermal and irradiation creep is used to calculate the overall creep rate. Effective plastic strain is a function of effective stress and material constants. (orig.)

  7. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    Science.gov (United States)

    Bonham, Kevin S; Stefan, Melanie I

    2017-10-01

    While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.

  8. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    Directory of Open Access Journals (Sweden)

    Kevin S Bonham

    2017-10-01

    Full Text Available While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.

  9. Trident: scalable compute archives: workflows, visualization, and analysis

    Science.gov (United States)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era requires new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise even to novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application work flows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries, among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple work flow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor work flows and sub

  10. Computer analysis and comparison of chess players' game-playing styles

    OpenAIRE

    Krevs, Urša

    2015-01-01

    Today's computer chess programs are very good at evaluating chess positions. Research has shown that we can rank chess players by the quality of their game play, using a computer chess program. In the master's thesis Computer analysis and comparison of chess players' game-playing styles, we focus on the content analysis of chess games using a computer chess program's evaluation and attributes we determined for each individual position. We defined meaningful attributes that can be used for com...

  11. Intra-connected three-dimensionally isotropic bulk negative index photonic metamaterial

    International Nuclear Information System (INIS)

    Guney, Durdu; Koschny, Thomas; Soukoulis, Costas

    2010-01-01

    Isotropic negative index metamaterials (NIMs) are highly desired, particularly for the realization of ultra-high resolution lenses. However, existing isotropic NIMs function only two-dimensionally and cannot be miniaturized beyond microwaves. Direct laser writing processes can be a paradigm shift toward the fabrication of three-dimensionally (3D) isotropic bulk optical metamaterials, but only at the expense of an additional design constraint, namely connectivity. Here, we demonstrate with a proof-of-principle design that the requirement of connectivity does not preclude fully isotropic left-handed behavior. This is an important step towards the realization of bulk 3D isotropic NIMs at optical wavelengths.

  12. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  13. A performance analysis of EC2 cloud computing services for scientific computing

    NARCIS (Netherlands)

    Ostermann, S.; Iosup, A.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.; Avresky, D.; Diaz, M.; Bode, A.; Bruno, C.; Dekel, E.

    2010-01-01

    Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds

  14. On a hierarchical construction of the anisotropic LTSN solution from the isotropic LTSN solution

    International Nuclear Information System (INIS)

    Foletto, Taline; Segatto, Cynthia F.; Bodmann, Bardo E.; Vilhena, Marco T.

    2015-01-01

    In this work, we present a recursive scheme targeting the hierarchical construction of the anisotropic LTSN solution from the isotropic LTSN solution. The main idea relies on the decomposition of the associated anisotropic LTSN matrix as a sum of two matrices, one containing the isotropic part and the other the anisotropic part of the problem. The matrix containing the anisotropic part is treated as the source of the isotropic problem. The solution of this problem is obtained by decomposing the angular flux as a truncated series of intermediate functions, which are substituted into the split isotropic equation. In this way we construct a set of isotropic recursive problems that are readily solved by the classic isotropic LTSN method. We apply this methodology to solve problems with homogeneous and heterogeneous anisotropic regions. Numerical results are presented and compared with the classical anisotropic LTSN solution. (author)
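
    Schematically, the decomposition described above can be written (in generic transport-operator notation, not the authors' exact symbols) as

        L_{\mathrm{aniso}} = L_{\mathrm{iso}} + \Delta L, \qquad L_{\mathrm{iso}}\,\Psi^{(0)} = Q, \qquad L_{\mathrm{iso}}\,\Psi^{(k)} = -\,\Delta L\,\Psi^{(k-1)}, \qquad \Psi \approx \sum_{k=0}^{K} \Psi^{(k)},

    so that each term of the truncated series solves an isotropic LTSN problem whose source is built from the anisotropic part ΔL acting on the previous term.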

  15. Gold-standard for computer-assisted morphological sperm analysis.

    Science.gov (United States)

    Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen

    2017-04-01

    Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments for comparing common sperm head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is well suited to tackling the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm
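
    The supervised-learning comparison described above can be reproduced in outline with scikit-learn once shape descriptors have been extracted for each head image; a minimal sketch (with random placeholder feature vectors standing in for the Hu/Zernike/Fourier descriptors and random labels standing in for the five expert classes) might be:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC

        # Placeholder data: 200 sperm heads, 10 shape-descriptor features each,
        # labels 0..4 for normal/tapered/pyriform/small/amorphous.
        rng = np.random.default_rng(0)
        X = rng.random((200, 10))
        y = rng.integers(0, 5, size=200)

        classifiers = {
            "1-NN": KNeighborsClassifier(n_neighbors=1),
            "naive Bayes": GaussianNB(),
            "decision tree": DecisionTreeClassifier(random_state=0),
            "SVM": SVC(kernel="rbf"),
        }

        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
            print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")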

  16. Computational analysis on plug-in hybrid electric motorcycle chassis

    Science.gov (United States)

    Teoh, S. J.; Bakar, R. A.; Gan, L. M.

    2013-12-01

    The plug-in hybrid electric motorcycle (PHEM) is an alternative that promotes sustainability and lower emissions. However, the overall PHEM system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis applying the concept of a Chopper is analysed for application in a PHEM. The chassis 3-dimensional (3D) model is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement and factor of safety (FOS). The data are used to identify the critical points, thus suggesting whether the chassis design is applicable or needs to be redesigned/modified to meet the required strength. Critical points are the locations of highest stress, which might cause the chassis to fail. For a motorcycle chassis, these points occur at the joints at the triple tree and the rear absorber bracket. As a conclusion, the computational analysis predicts the stress distribution and provides a guideline for developing a safe prototype chassis.

  17. Automatic analysis of gamma spectra using a desk computer

    International Nuclear Information System (INIS)

    Rocca, H.C.

    1976-10-01

    A code for the analysis of gamma spectra obtained with a Ge(Li) detector was developed for use with a desk computer (Hewlett-Packard Model 9810 A). The process is performed in a totally automatic way: the data are conveniently smoothed and the background is generated by a convolutive equation. A calibration of the equipment with well-known standard sources gives the data necessary for fitting, by least squares, a third-degree polynomial relating the energy to the peak position. Criteria are given for determining whether a given group of values constitutes a peak or not, and whether it is a double line. All the peaks are fitted to a Gaussian curve and, if necessary, decomposed into their components. Data entry is by punched tape, ASCII code. An alphanumeric printer provides (a) the position of the peak and its energy, (b) its resolution if it is larger than expected, (c) the area of the peak with its statistical error determined by the method of Wasson. As an option, the complete spectrum with the determined background can be plotted. (author)
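
    Two of the numerical steps described above, the third-degree least-squares energy calibration and the Gaussian fit of each peak, are easy to illustrate; a minimal Python sketch (with made-up calibration points and a synthetic peak, not data from the report) could be:

        import numpy as np
        from scipy.optimize import curve_fit

        # Third-degree least-squares calibration: energy as a function of peak channel.
        channels = np.array([120.4, 340.9, 760.2, 1210.7])   # placeholder peak positions
        energies = np.array([121.8, 344.3, 778.9, 1274.5])   # placeholder standard energies, keV
        energy_of = np.poly1d(np.polyfit(channels, energies, 3))

        # Gaussian peak on a linear background.
        def peak(x, area, centroid, sigma, b0, b1):
            gauss = area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)
            return gauss + b0 + b1 * x

        x = np.arange(330.0, 355.0)
        y = peak(x, 5000.0, 341.0, 2.0, 40.0, 0.1) + np.random.default_rng(2).normal(0.0, 5.0, x.size)
        popt, pcov = curve_fit(peak, x, y, p0=[4000.0, 341.0, 2.5, 30.0, 0.0])

        print(f"centroid channel = {popt[1]:.2f}  ->  energy = {energy_of(popt[1]):.1f} keV")
        print(f"peak area = {popt[0]:.0f} +/- {np.sqrt(pcov[0, 0]):.0f}")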

  18. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  19. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and application experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds) of input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables

  20. COMPUTATIONAL ANALYSIS OF BACKWARD-FACING STEP FLOW

    Directory of Open Access Journals (Sweden)

    Erhan PULAT

    2001-01-01

    Full Text Available In this study, the backward-facing step flow that is encountered in electronic systems cooling, heat exchanger design, and gas turbine cooling is investigated computationally. Steady, incompressible, two-dimensional air flow is analyzed. The inlet velocity is assumed uniform and is obtained from a parabolic profile by using the maximum velocity. In the analysis, the effects of the channel expansion ratio and the Reynolds number on the reattachment length are investigated. In addition, the pressure distribution throughout the channel length is obtained, and the flow is analyzed for Reynolds number values of 50 and 150 and channel expansion ratios of 1.5 and 2. The governing equations are solved by using the Galerkin finite element method of the ANSYS-FLOTRAN code. The obtained results are compared with the solutions of the lattice BGK method, which is a relatively new method in fluid dynamics, and with other numerical and experimental results. It is concluded that the reattachment length increases with increasing Reynolds number, and that at the same Reynolds number it decreases with increasing channel expansion ratio.